Contrary to reports, OpenAI is likely not building AI that threatens humanity

Image credits: Bryce Durbin/TechCrunch

Has OpenAI invented an AI technology that has the potential to “threaten humanity”? From some recent headlines, you might be tempted to think so.

Reuters and The Information first reported last week that several OpenAI employees had, in a letter to the AI startup’s board, flagged the “prowess” and “potential danger” of an internal research project known as “Q*.” According to the reports, the project could solve certain math problems – albeit only at an elementary school level – but the researchers believed it could be building toward an elusive technical breakthrough.

There is now some debate over whether OpenAI’s board of directors ever received such a letter – The Verge cites a source suggesting that it didn’t. But hype aside, Q* may not be as formidable – or threatening – as it sounds. It may not even be new.

Many researchers believe the “Q” in the name “Q*” stands for “Q-learning,” an AI technique that helps a model learn and improve at a particular task by taking specific “correct” actions – and being rewarded for them. The asterisk, meanwhile, could be a reference to A*, an algorithm for searching the nodes that make up a graph and exploring the routes between those nodes, the researchers say.
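For readers unfamiliar with Q-learning, here is a minimal, purely illustrative sketch: a toy five-state “walk to the goal” environment in which an agent tries actions, is rewarded for reaching the goal, and gradually fills in a table of action values. Every name and number below is invented for the example; none of it is drawn from OpenAI’s actual work.

```python
import random

# Toy tabular Q-learning on a hypothetical 1-D "walk to the goal" task:
# states 0..4, goal at state 4. Purely illustrative; not OpenAI's code.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                     # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

# Q-table: estimated future reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly take the best-known action, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Core Q-learning update: move Q toward reward + discounted best future value
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy at every non-goal state is "step right"
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```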

They’ve both been around for a while.

Google DeepMind applied Q-learning to build an AI algorithm that could play Atari 2600 games at a human level… in 2014. The origins of A* go back to academic research published in 1968. And researchers at the University of California, Irvine several years ago explored improving A* with Q-learning – which may be exactly what OpenAI is pursuing now.
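For comparison, here is what classic A* looks like – a compact sketch on a made-up grid, where the algorithm always expands the node with the lowest f = g + h (g being the cost paid so far, h a heuristic estimate of the remaining distance, here the Manhattan distance). Loosely speaking, one natural way to combine the two techniques is to learn a heuristic like h rather than hand-code it; the sketch itself is illustrative only.

```python
import heapq

# Compact A* sketch on a small made-up grid ('#' = wall); illustrative only.
# A* repeatedly expands the frontier node with the lowest f = g + h.
GRID = ["....",
        ".##.",
        "...."]

def neighbors(pos):
    r, c = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
            yield (nr, nc)

def a_star(start, goal):
    # h: Manhattan-distance heuristic, an optimistic estimate of remaining cost
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]  # entries: (f, g, node, path)
    visited = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in neighbors(node):
            heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # goal unreachable

print(a_star((0, 0), (2, 3)))  # shortest route around the wall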

Nathan Lambert, a research scientist at the Allen Institute for AI, told TechCrunch that he believes Q* is related to approaches in AI for studying high school math problems – not to the destruction of humanity.

“OpenAI shared work earlier this year on improving the mathematical reasoning of language models with a technique called process reward models,” Lambert said, “but it remains to be seen how better math abilities do anything other than make ChatGPT, OpenAI’s AI-powered chatbot, a better code assistant.”
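The process-reward idea Lambert mentions is simple to sketch, if heavily hedged: instead of judging only a model’s final answer, score each intermediate reasoning step. In the toy version below, a trivial arithmetic check stands in for the trained reward model – every function here is an invented illustration, not OpenAI’s implementation.

```python
# Toy sketch of a "process reward model": score each intermediate reasoning
# step rather than only the final answer. The arithmetic check below stands
# in for a trained reward model; it is not OpenAI's implementation.

def step_score(step: str) -> float:
    # A real process reward model predicts how likely a step is correct
    # given the steps before it; this toy just verifies "expression = value".
    lhs, sep, rhs = step.partition("=")
    if not sep:
        return 0.1  # step makes no checkable claim
    try:
        return 0.9 if abs(eval(lhs) - float(rhs)) < 1e-9 else 0.1
    except Exception:
        return 0.1  # unparseable step

def process_reward(chain: list[str]) -> float:
    # Multiply per-step scores: one bad step drags down the whole chain
    total = 1.0
    for step in chain:
        total *= step_score(step)
    return total

print(process_reward(["3 * 4 = 12", "2 + 12 = 14"]))  # high: each step checks out
print(process_reward(["2 + 3 = 6", "6 * 4 = 24"]))    # low: the first step is wrong
```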

Mark Riedl, a computer science professor at Georgia Tech, was similarly critical of Reuters and The Information’s reporting on Q* – and of the broader media narrative around OpenAI and its pursuit of artificial general intelligence (i.e., AI that can perform any task as well as a human can). Reuters, citing a source, suggested that Q* could be a step toward AGI. But researchers – including Riedl – are skeptical of that.

“There is no evidence to suggest that large language models (such as ChatGPT) or any other technology under development at OpenAI are on a path to AGI or any of the doom scenarios,” Riedl told TechCrunch. “OpenAI itself has been at best a ‘quick follower,’ taking existing ideas… and finding ways to scale them. While OpenAI hires top-notch researchers, much of what they’ve done could be done by researchers at other organizations; it could equally have been done had those same researchers worked elsewhere.”

Riedl, like Lambert, didn’t speculate on whether Q* involves Q-learning or A*. But if it draws on either – or a combination of the two – it would be consistent with current trends in AI research, he said.

“These are all ideas that other researchers in academia and industry are pursuing, with dozens of papers on these topics from the past six months or more,” Riedl added. “It’s unlikely that OpenAI’s researchers have had ideas that the large number of researchers also chasing advances in AI haven’t had.”

That’s not to say that Q* — which reportedly involved Ilya Sutskever, chief scientist at OpenAI — might not move things forward.

If Q* uses some of the techniques described in a paper that OpenAI researchers published in May, it could “significantly” increase the capabilities of language models, Lammers asserts. Judging by the paper, OpenAI may have found a way to control the “logic chains” of language models, Lammers says – enabling it to steer models toward more desirable and logically sound “paths” to an outcome.

“This would reduce the possibility of models following extraneous trains of human thought and spurious patterns to reach harmful or wrong conclusions,” Lammers said. “I think this is actually a win for OpenAI in terms of alignment… Most AI researchers agree that we need better ways to train these large models so that they can consume information more efficiently.”
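To make that concrete, one common way such step scores get used – and this is only a hedged guess at the flavor of technique Lammers describes – is best-of-n selection: sample several candidate reasoning chains, then keep the one the scorer trusts most. Reusing the toy scorer from the earlier sketch:

```python
# Hedged sketch of steering a model toward sounder reasoning "paths":
# sample several candidate chains, then keep the one a step-level scorer
# rates highest (best-of-n). All data and functions are illustrative.

def step_score(step: str) -> float:
    # Same toy stand-in for a process reward model as in the earlier sketch
    lhs, sep, rhs = step.partition("=")
    try:
        return 0.9 if sep and abs(eval(lhs) - float(rhs)) < 1e-9 else 0.1
    except Exception:
        return 0.1

def chain_score(chain: list[str]) -> float:
    score = 1.0
    for step in chain:
        score *= step_score(step)
    return score

def pick_best(candidate_chains: list[list[str]]) -> list[str]:
    # Best-of-n selection: return the chain whose steps the scorer trusts most
    return max(candidate_chains, key=chain_score)

# Pretend these chains were sampled from a language model for "2 + 3 * 4":
candidates = [
    ["2 + 3 = 6", "6 * 4 = 24"],    # contains an arithmetic slip
    ["3 * 4 = 12", "2 + 12 = 14"],  # sound order of operations
    ["2 + 3 * 4 = 20"],             # one big incorrect claim
]
print(pick_best(candidates))  # selects the sound step-by-step chain
```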

But whatever Q* amounts to, it – and the relatively simple math problems it solves – will not spell humanity’s doom.


