Latest Research Feed
Constitutional AI: Harmlessness from AI Feedback
We train a harmless, non-evasive AI assistant using Constitutional AI, a method for training without human labels identifying harmful outputs.
The Mirroring Effect in Large Language Models
Analysis of how LLMs reflect user bias in long-context windows and the implications for alignment strategies.
GPT-4 Technical Report
We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs.
Sparks of Artificial General Intelligence: Early experiments with GPT-4
We contend that GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google's PaLM) that exhibit more general intelligence than previous AI models.
Stochastic Parrot Archive
Foundational papers on the limitations of LLMs as probabilistic pattern matchers.
Alignment & Ethics
Key readings on AI safety, alignment theory, and ethical deployment frameworks.
Open Source Models
Repository of weights, benchmarks, and fine-tuning guides for Llama, Mistral, and Falcon.
Support the Mirror Archive
Independent research relies on community support. Direct funding keeps the archive running and its contents faithful to the original sources.