OpenAI lays out plan for dealing with dangers of AI

OpenAI, the artificial intelligence company behind ChatGPT, laid out its plans for staying ahead of what it thinks could be serious dangers of the tech it develops, such as allowing bad actors to learn how to build chemical and biological weapons.

OpenAI’s “Preparedness” team, led by MIT AI professor Aleksander Madry, will hire AI researchers, computer scientists, national security experts and policy professionals to monitor the tech, continually test it and warn the company if it believes any of its AI capabilities are becoming dangerous. The team sits between OpenAI’s “Safety Systems” team, which works on existing problems such as racist biases being infused into AI, and the company’s “Superalignment” team, which researches how to ensure AI doesn’t harm humans in an imagined future in which the tech has completely outstripped human intelligence.

The popularity of ChatGPT and the advance of generative AI technology have triggered a debate within the tech community about how dangerous the technology could become. Prominent AI leaders from OpenAI, Google and Microsoft warned this year that the tech could pose an existential danger to humankind, on par with pandemics or nuclear weapons. Other AI researchers have said the focus on those big, frightening risks allows companies to distract from the harmful effects the tech is already having. A growing group of AI business leaders say that the risks are overblown and that companies should charge ahead with developing the tech to help improve society—and make money doing it.

OpenAI has staked out a middle ground in this debate in its public posture. Chief executive Sam Altman said that there are serious longer-term risks inherent to the tech but that people should also focus on fixing existing problems. Regulation meant to prevent harmful impacts of AI shouldn’t make it harder for smaller companies to compete, Altman has said. At the same time, he has pushed the company to commercialize its technology and raised money to fund faster growth.

Madry, a veteran AI researcher who directs MIT’s Center for Deployable Machine Learning and co-leads the MIT AI Policy Forum, joined OpenAI this year. He was one of a small group of OpenAI leaders who quit when Altman was fired by the company’s board in November. Madry returned to the company when Altman was reinstated five days later. OpenAI, which is governed by a nonprofit board whose mission is to advance AI and make it helpful for all humans, is in the midst of selecting new board members after three of the four members who fired Altman stepped down as part of his return.

Despite the leadership “turbulence,” Madry said, he believes OpenAI’s board takes seriously the risks of AI. “I realized if I really want to shape how AI is impacting society, why not go to a company that is actually doing it?” he said.

The preparedness team is hiring national security experts from outside the AI world who can help OpenAI understand how to deal with big risks. It is beginning discussions with organizations, including the National Nuclear Security Administration, which oversees nuclear technology in the United States, to ensure the company can appropriately study the risks of AI, Madry said.

The team will monitor how and when OpenAI’s tech can instruct people to hack computers or build dangerous chemical, biological and nuclear weapons, beyond what people can find online through regular research. Madry is looking for people who “really think, ‘How can I mess with this set of rules? How can I be most ingenious in my evilness?'”

The company will also allow “qualified, independent third-parties” from outside OpenAI to test its technology, it said in a Monday blog post.

Madry said he rejects the framing of the debate as one between AI “doomers,” who fear the tech has already attained the ability to outstrip human intelligence, and “accelerationists,” who want to remove all barriers to AI development.

“I really see this framing of acceleration and deceleration as extremely simplistic,” he said. “AI has a ton of upsides, but we also need to do the work to make sure the upsides are actually realized and the downsides aren’t.”
