Managing Doomsday Scenarios  

It’s not difficult to conjure up apocalyptic scenarios set in motion by the advancement of artificial intelligence. Humans have been entertained by techno-catastrophe since Mary Shelley published Frankenstein in 1818. From Shelley to Philip K. Dick to The Terminator, the machines-take-over plotline is a proven crowd pleaser.

That’s not to say AI risks should be dismissed as fiction. The philosophical-technological debate over Artificial General Intelligence is serious, and well-informed people disagree in good faith. The Oxford philosopher Nick Bostrom explores the potentially catastrophic implications of AI, while technologists like Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, push back against the fear of machine intelligence as a “Frankenstein complex.”

Alan Winfield, board chair of Oxford’s Responsible Technology Institute, offered this insightful analysis of the AI doomsday scenario a few years ago:

“I think we should be a little worried—cautious and prepared may be a better way of putting it—and at the same time a little optimistic…

“I don’t believe we need to be obsessively worried by a hypothesized existential risk to humanity.

“Why? Because, for the risk to become real, a sequence of things all need to happen. It’s a sequence of big ifs. If we succeed in building human-equivalent AI, and if that AI acquires a full understanding of how it works, and if it then succeeds in improving itself to produce super-intelligent AI, and if that super-AI, either accidentally or maliciously, starts to consume resources, and if we fail to pull the plug, then, yes, we may well have a problem. The risk, while not impossible, is improbable.”

We Can Reduce Risks and Maximize Benefits

Fortunately, what we face isn’t a binary choice. We don’t have to choose between guarding against AI’s low-probability, high-risk nightmare scenarios of the future and creating policies that address the high-probability, lower-risk problems AI poses right now. We can do both.

The point is not to live our lives in perpetual fear of AI but to take steps to reduce the risks, manage the dangers, and maximize the benefits to society.

At the Transparency Coalition, we are focused on finding pragmatic solutions to the risks and challenges AI presents today. Learn more about those solutions here.

Next: 

  • Disclosure: The First Step in AI Transparency

  • Training Data Transparency as a Foundation
