OpenAI has disclosed a strategic initiative to address potential threats posed by emergent artificial intelligence technologies. The organization behind the popular chatbot ChatGPT is setting up a dedicated team to ensure that the rise of AI does not lead to catastrophic consequences, which, at their extreme, could include the extinction of humanity.
OpenAI has previously expressed optimism about the transformative potential of AI, suggesting that its power could be harnessed to solve some of the world’s most complex and pressing problems. Moreover, OpenAI leadership has referred to AI as one of the most influential technologies ever created by mankind.
However, OpenAI’s Ilya Sutskever and Jan Leike have now issued a serious caution, pointing out that humans may not yet be adequately equipped to manage technology that surpasses their own intelligence. They argue that while superintelligence may seem distant, it could arrive within this decade. This rapidly advancing technology, they warn, carries the inherent risk of becoming uncontrollable, potentially leading to undesirable outcomes.
Despite these concerns, Sutskever and Leike admit that no comprehensive plan or mechanism currently exists to mitigate such an AI-driven crisis. Contemporary alignment techniques, such as reinforcement learning from human feedback (RLHF), depend primarily on humans’ ability to supervise and control AI. Yet as AI systems continue to evolve and surpass human intelligence, our capacity to oversee them effectively may be compromised.
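To make the human dependence concrete, the core of RLHF can be sketched in miniature: human labelers compare pairs of model outputs, a reward model is fit to those preferences, and the policy is then steered toward higher-reward outputs. The sketch below is a toy illustration under simplifying assumptions (responses reduced to a single numeric feature, a one-parameter reward model), not OpenAI's actual implementation.

```python
# Toy sketch of the RLHF loop: fit a reward model from pairwise human
# preferences, then pick outputs the reward model ranks highest.
# All names and numbers here are illustrative assumptions.
import math

def train_reward_weight(preferences, lr=0.5, epochs=200):
    """Fit scalar w so sigmoid(w * (a - b)) matches 'a preferred over b'.

    Each (a, b) pair encodes a human judgment that response a (with
    feature value a) was preferred over response b. This is a
    one-parameter Bradley-Terry model trained by gradient ascent.
    """
    w = 0.0
    for _ in range(epochs):
        for a, b in preferences:
            p = 1.0 / (1.0 + math.exp(-w * (a - b)))  # P(a preferred)
            w += lr * (1.0 - p) * (a - b)             # ascent on log-likelihood
    return w

def pick_response(candidates, w):
    """Policy step: choose the candidate the reward model scores highest."""
    return max(candidates, key=lambda x: w * x)

# Hypothetical human comparisons: the first element was always preferred.
prefs = [(0.9, 0.2), (0.8, 0.1), (0.7, 0.3)]
w = train_reward_weight(prefs)
best = pick_response([0.1, 0.5, 0.95], w)
```

The point Sutskever and Leike raise is visible even in this toy: every quantity the system optimizes traces back to human judgments in `prefs`, so the approach inherits whatever limits those human supervisors have.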
In response to this challenge, OpenAI has initiated a comprehensive research plan. The company’s co-founder and chief scientist, Mr. Sutskever, and the AI alignment head, Mr. Leike, are gathering a team of leading researchers and engineers tasked with exploring and overcoming the technical challenges presented by superintelligence. A tight deadline of four years has been established for this critical task.
Despite the severity of the potential consequences, the OpenAI leadership remains optimistic about their capacity to navigate these challenges. They note that a number of promising ideas have emerged from preliminary experiments, and that progress metrics and current models can be used to study many of these problems in a hands-on, empirical manner.
OpenAI intends to share its research outcomes widely, recognizing the global importance of the work. The organization is actively recruiting research engineers, scientists, and managers to contribute to this critical mission.
In the political sphere, similar concerns about AI have arisen. Senate Majority Leader Charles E. Schumer has called for regulations to govern the rapidly advancing technology. The Senate Judiciary Committee has been proactive in examining potential AI risks, including those related to cyberattacks, political destabilization, and the deployment of weapons of mass destruction. OpenAI’s CEO, Sam Altman, testified before the Senate Judiciary Committee’s subcommittee on privacy, technology, and the law, expressing his concern about potential misuse of AI tools.
Significant tech industry players, including Google and Microsoft, have also called for new AI regulations. In response, the Biden administration is developing a national AI strategy, aiming for a comprehensive, society-wide approach.
Lastly, OpenAI has announced that GPT-4, their most advanced AI model to date, is now generally available to developers, further increasing the accessibility and impact of this revolutionary technology.