On a Friday afternoon in March 2023, something leaked online that would reshape the entire AI industry. Meta's LLaMA model, a powerful language model whose weights were supposed to be restricted to approved researchers, appeared as a torrent link on 4chan. Within hours it had spread everywhere. Within weeks, thousands of developers were building on it. Meta had accidentally started a revolution.
The Closed vs. Open Debate
Before LLaMA, the AI landscape was simple: OpenAI and Google had the best models, they kept them locked behind APIs, and everyone else was a customer. The argument for keeping models closed was straightforward — powerful AI could be dangerous, and restricting access kept things safe.
The argument against was equally straightforward: monopolies are bad, and giving two or three companies exclusive control over the most transformative technology since the internet seemed like a terrible idea.
The Open-Source Explosion
After LLaMA leaked, Meta made a strategic decision that surprised everyone: instead of cracking down, it leaned in. Llama 2 arrived in July 2023 with freely downloadable weights under a permissive community license. Llama 3 followed in 2024 with performance that rivaled GPT-4 on many benchmarks.
The impact was immediate and dramatic:
- Mistral, a small French startup, released compact models that punched far above their weight (its 7B model beat the much larger Llama 2 13B on most benchmarks)
- Stability AI had already open-sourced Stable Diffusion, proving the open-weights approach could work for image generation
- Hugging Face became the GitHub of AI, hosting thousands of open models
- Researchers in countries without big tech companies could finally do frontier AI work
- Companies could run AI models on their own hardware, keeping sensitive data private (a minimal sketch of what that looks like follows this list)
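To make that last point concrete, here is a minimal sketch of running an open-weights model locally with the Hugging Face transformers library. The model ID, prompt, and hardware figures are illustrative assumptions, not details from the story above; Meta's checkpoints on the Hub are gated and require accepting the license first.

```python
# Minimal sketch (assumptions: an illustrative gated model ID, a GPU with
# roughly 16 GB of memory for an 8B model at 16-bit precision, and the
# `transformers` and `accelerate` packages installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # illustrative; any open model works

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread layers across available GPUs/CPU
    torch_dtype="auto",  # use the checkpoint's native precision
)

# The prompt is processed entirely on local hardware; nothing is sent to a
# third-party API, which is the whole point of the bullet above.
prompt = "Summarize our internal incident report in three bullet points."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```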
Why Meta Did It
Mark Zuckerberg's reasoning was surprisingly straightforward. Meta doesn't sell AI as a product — it uses AI to improve its advertising and social media platforms. If open-source models become the standard, Meta benefits from the entire community's improvements while its competitors (Google, OpenAI) lose their moat.
It was a brilliant strategic move disguised as generosity, a textbook case of commoditizing your complement: by giving the model away, Meta ensured that no competitor could charge a premium for something similar. Commoditized AI models benefit everyone except the companies trying to sell them.
The Concerns That Remain
Not everyone celebrates this shift. Open models can be fine-tuned to strip out their safety guardrails, and they can be used by bad actors who would never get past an API provider's usage policies. The counter-argument, that determined bad actors will find a way regardless, doesn't fully answer the question of how easy we should make it for them.
But the genie is out of the bottle. Open-source AI isn't a trend that can be reversed. It's the new baseline against which every closed model must justify its existence.
