OpenAI CEO Sam Altman has said that now is a good time to start thinking about the governance of superintelligence -- future AI systems dramatically more capable than even artificial general intelligence (AGI).
Altman stressed that the world must mitigate the risks of today's AI technology too, "but superintelligence will require special treatment and coordination".
"Given the picture as we see it now, it's conceivable that within the next 10 years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today's largest corporations," he said in a blog post along with other OpenAI leaders.
"Given the possibility of existential risk, we can't just be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example," he noted.
Last week, Altman admitted that if generative AI technology goes wrong, it can go quite wrong, as US senators expressed their fears about AI chatbots like ChatGPT.
Altman, who testified at a hearing in the US Senate in Washington, DC, said that the AI industry needs to be regulated by the government as AI becomes "increasingly powerful".
In the blog post, he suggested that major governments around the world could set up a project that many current efforts become part of, or that the world could collectively agree to limit the rate of growth in frontier AI capability to a certain rate per year.
"And of course, individual companies should be held to an extremely high standard of acting responsibly. Second, we are likely to eventually need something like an International Atomic Energy Agency (IAEA) for superintelligence efforts," he mentioned.
"Tracking compute and energy usage could go a long way, and give us some hope this idea could actually be implementable," said Altman.