While most tech companies aim to sell artificial intelligence, Mark Zuckerberg is taking a different route by offering Meta’s advanced AI model, Llama 3.1, for free. This move places Meta at the forefront of AI development and sparks discussions about the potential risks and benefits of open AI models.
On Monday, Meta unveiled Llama 3.1, its most capable AI model, at no cost. Although Meta hasn’t disclosed the development costs, Zuckerberg said the company is investing billions in AI. The release challenges the closed approach of most AI companies and charts a different path for AI progress.
Meta trains Llama to avoid producing harmful output by default, though those safeguards can be stripped out by modifying the model. Even so, the company claims Llama 3.1 is as capable as the top commercial models from OpenAI, Google, and Anthropic, and that on certain benchmarks it comes out ahead of all of them.
Percy Liang, a Stanford University professor who tracks open-source AI, highlights the importance of this release. If developers find Llama 3.1 as effective as other top models like OpenAI’s GPT-4, it could shift many users to Meta’s offering. “It will be interesting to see how the usage shifts,” Liang notes.
In an open letter, Zuckerberg compared Llama to the open-source Linux operating system, recalling how big tech companies criticized Linux in the late ’90s and early 2000s. Today, Linux is widely used in cloud computing and powers the Android mobile OS. “I believe that AI will develop in a similar way,” writes Zuckerberg. “Several tech companies are developing leading closed models. But open source is quickly closing the gap.”
While Meta’s decision to give away its AI might seem generous, it also serves the company’s interests. Previous Llama releases have boosted Meta’s influence among AI researchers, developers, and startups. However, Llama 3.1 is not truly open source; Meta imposes usage restrictions, such as limiting the scale of its use in commercial products.
The new version of Llama has 405 billion parameters, or tweakable elements. Meta had already released smaller versions of Llama 3 with 70 billion and 8 billion parameters, and it has now issued upgraded versions of both under the Llama 3.1 branding.
Although Llama 3.1 is too large to run on a regular computer, Meta assures that cloud providers like Databricks, Groq, AWS, and Google Cloud will offer hosting options. Developers can also access custom versions of the model at Meta.ai.
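A back-of-the-envelope calculation shows why the largest model is out of reach for ordinary hardware. The sketch below (an illustration, not Meta's figures; real deployments also need memory for activations and caching) estimates the space required just to store the weights at different numeric precisions:

```python
# Rough memory footprint of a model's weights at different numeric
# precisions. Weights only -- serving a model also requires memory for
# activations and the KV cache, which this sketch ignores.

def weight_memory_gb(params_billions: float, bytes_per_param: int) -> float:
    """Approximate gigabytes needed to store the model weights."""
    return params_billions * 1e9 * bytes_per_param / 1e9

for name, params in [("Llama 3.1 405B", 405), ("Llama 3.1 70B", 70), ("Llama 3.1 8B", 8)]:
    fp16 = weight_memory_gb(params, 2)   # 16-bit floats: 2 bytes per parameter
    int8 = weight_memory_gb(params, 1)   # 8-bit quantized: 1 byte per parameter
    print(f"{name}: ~{fp16:.0f} GB at 16-bit, ~{int8:.0f} GB at 8-bit")
```

At 16-bit precision, the 405-billion-parameter model needs on the order of 810 GB for its weights alone, far beyond any consumer machine, which is why hosting falls to cloud providers. The 8-billion-parameter variant, at roughly 16 GB, can fit on a single high-end GPU.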
The implications of the new Llama release could be significant. Stella Biderman, executive director of the open-source AI project EleutherAI, notes that Llama 3 is not fully open source. However, Meta’s latest license change allows developers to train their own models using Llama 3, a capability that most AI companies currently prohibit. “This is a really, really big deal,” Biderman says.
Unlike the latest models from OpenAI and Google, Llama is not “multimodal” and cannot handle images, audio, or video. However, Meta claims the model excels at using other software, like web browsers, a capability that many researchers and companies believe could greatly expand AI’s usefulness.
The release of ChatGPT by OpenAI in late 2022 raised concerns about the potential misuse and uncontrollability of advanced AI. Although the existential alarm has subsided, experts remain wary of unrestricted AI models being used by hackers or to accelerate the development of biological or chemical weapons. “Cyber criminals everywhere will be delighted,” warns Geoffrey Hinton, a Turing Award winner and AI pioneer.
Hinton, who left Google to speak out about AI risks, emphasizes that AI is fundamentally different from open-source software because models cannot be scrutinized in the same way. “People fine-tune models for their own purposes, and some of those purposes are very bad,” he adds.
Meta has tried to alleviate some fears by cautiously releasing previous versions of Llama. The company subjects Llama to rigorous safety testing and asserts that there is little evidence suggesting its models facilitate weapon development. To enhance safety, Meta has introduced new tools to help developers moderate output and block attempts to bypass restrictions. Meta spokesperson Jon Carvill says the company will decide on a case-by-case basis whether to release future models.
Dan Hendrycks, director of the Center for AI Safety, commends Meta for its thorough testing before releasing models. He believes the new Llama model could assist experts in understanding future risks. “Today’s Llama 3 release will enable researchers outside big tech companies to conduct much-needed AI safety research,” Hendrycks says.
The release of Llama 3.1 marks a significant milestone in the intensifying debate between open and closed approaches to AI, and its adoption will test both the potential and the risks of making advanced models accessible to all.