- Elon Musk revealed plans to open-source xAI’s ChatGPT challenger, Grok, sometime this week.
- Musk sued OpenAI a few weeks ago, alleging a breach of the company's founding agreement.
- The billionaire also addressed concerns about the possible overreach of AI’s capabilities.
The feud between OpenAI and Elon Musk continues, and Musk is now looking to deliver on a promise he says OpenAI broke. The key figure behind Tesla, SpaceX, and X Corp (Twitter) recently pledged to democratize the technology behind Grok, the artificial intelligence (AI) natural language chatbot from his startup xAI.
Musk said he will release Grok's code this week, giving the public free rein to experiment with xAI's technology and perhaps find new use cases for it.
This puts in motion the vision the American entrepreneur shared with Lex Fridman on a podcast in November, where he voiced his belief that AI technology should be open-sourced. Naturally, he also seized the opportunity to take a potshot at OpenAI.
“The name, the ‘open’ in OpenAI, is supposed to mean ‘open source,’ and it was created as a nonprofit open source,” said Musk. “And now it is a closed source for maximum profit.”
Elon Musk v OpenAI, Et Al.
Around two weeks ago, Musk filed a lawsuit against Sam Altman and OpenAI, Inc., along with the corporation's subsidiaries. The world's richest man claimed in the court filing that the defendants breached key provisions of the organization's founding agreement, in which he was initially involved.
Chief among those breaches was the failure to make OpenAI's code free and open to the public. Instead, the company entered into an exclusive licensing agreement with Microsoft for its GPT-3 language model and exploited the technology for profit.
Musk likewise raised concerns about OpenAI's next-generation GPT-4, citing its "close to human-level performance" and asserting that it falls outside the scope of the exclusivity arrangement with Microsoft. Overall, the 46-page complaint (attachments included) asks the court to compel the AI company to open-source its technology.
AI Concerns
At the AI Safety Summit in November, experts aired their concerns about the prospect of open-sourcing AI. With the underlying technology advancing rapidly, the benefits also come with serious risks.
They worried that a super-advanced AI could be dangerous in the wrong hands. Terrorists, for instance, could use it to develop biological weapons or other weapons of mass destruction, and an open model could even feed into a weaponized super-intelligence beyond human control.
In response, Musk proposed a "third-party referee" that would oversee AI firms and "sound the alarm" if needed.