Meta, the owner of Facebook, on Friday unveiled its own version of the artificial intelligence that powers apps like ChatGPT, saying it would give researchers access to the system in order to address any potential risks.
Meta described its AI, called LLaMA, as a “smaller, more performant” model designed to “help researchers advance their work,” in what could be seen as veiled criticism of Microsoft-backed OpenAI’s decision to release the technology widely while keeping the programming code secret.
ChatGPT, which is backed by Microsoft, has taken the world by storm thanks to its capacity to produce expertly crafted texts such as essays or poems in a matter of seconds, using technology known as large language models (LLMs).
LLMs are part of the broader field of generative AI, which can also produce graphics, designs, or program code almost instantly in response to a simple request.
Microsoft, typically the more reserved player in big tech, has strengthened its relationship with OpenAI, the company that developed ChatGPT, and earlier this month revealed that the technology would be incorporated into both its Bing search engine and its Edge browser.
The sudden threat to the supremacy of Google’s search engine prompted the company to announce that it would soon release its own language AI, known as Bard.
But reports of disturbing exchanges with Microsoft’s Bing chatbot — including it issuing threats and speaking of desires to steal nuclear code or lure one user from his wife — went viral, raising alarm bells that the technology was not ready.
Meta said these problems, sometimes called hallucinations, could be better remedied if researchers had improved access to the expensive technology.
Thorough research “remains limited because of the resources that are required to train and run such large models,” the company said.
This was hindering efforts “to improve their robustness and mitigate known issues, such as bias, toxicity, and the potential for generating misinformation,” Meta said.
OpenAI and Microsoft strictly limit access to the technology behind their chatbots, drawing criticism that they are choosing potential profits over improving the technology more quickly for society.
“By sharing the code for LLaMA, other researchers can more easily test new approaches to limiting or eliminating these problems,” Meta said.