OpenAI creating a new team to control ‘superintelligent’ AI, prevent human extinction


Ilya Sutskever, co-founder of artificial intelligence leader OpenAI, has warned that superintelligent AI must be controlled to prevent the extinction of the human race.

“Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction,” Sutskever and OpenAI’s head of alignment, Jan Leike, wrote in a blog post, adding that they believe such a technology could arrive as soon as this decade.

They said new governance institutions will be required to manage such risks and to solve the problem of superintelligence alignment: ensuring that AI systems much smarter than humans continue to “follow human intent.”

“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue. Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs,” the blog post read.
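
For context, the “reinforcement learning from human feedback” technique the post mentions trains a reward model on human judgments of which model output is better, so the training signal exists only because a human can tell good answers from bad ones. Below is a minimal, illustrative Python sketch of the pairwise preference loss commonly used for this (a Bradley-Terry objective); the scores are made-up numbers, not anything from OpenAI’s systems.

```python
import math

# A minimal, illustrative sketch of the pairwise preference loss used to
# train the reward model in reinforcement learning from human feedback
# (RLHF). A human labeler marks which of two model responses is better;
# the reward model is trained to score the preferred response higher.
# All scores below are made-up numbers for illustration only.

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """Bradley-Terry loss: -log(sigmoid(score_chosen - score_rejected))."""
    margin = score_chosen - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Small loss when the reward model agrees with the human labeler...
print(preference_loss(2.0, 0.5))   # ~0.20
# ...large loss when it disagrees, pushing the model toward human judgment.
print(preference_loss(0.5, 2.0))   # ~1.70
```

The hard dependency on a human labeler in this loop is precisely why the authors argue the technique will not scale to systems smarter than their supervisors.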

They stated that, to solve these problems within four years, they are leading a new team and dedicating 20 per cent of the computing power OpenAI has secured to date to this effort.

“While this is an incredibly ambitious goal and we’re not guaranteed to succeed, we are optimistic that a focused, concerted effort can solve this problem,” they stated.

Aligning superintelligent AI with human intent

Apart from working to improve current OpenAI models such as ChatGPT and mitigate their risks, the new team is focused on the machine learning challenge of aligning superintelligent AI systems with human intent.

The team’s goal is to build a roughly human-level automated alignment researcher, then use vast amounts of compute to scale it up and “iteratively align superintelligence.”

To do this, OpenAI will develop a scalable training method, validate the resulting model, and then stress-test its alignment pipeline.

“As we make progress on this, our AI systems can take over more and more of our alignment work and ultimately conceive, implement, study and develop better alignment techniques than we have now,” read a previous blog post written by Leike and colleagues John Schulman and Jeffrey Wu.

“They will work together with humans to ensure that their own successors are more aligned with humans. . . . Human researchers will focus more and more of their effort on reviewing alignment research done by AI systems instead of generating this research by themselves,” they added. 
