
‘Mitigating the risk of extinction’: As the capabilities of AI grow, so do calls for regulation


An expert explains the risks and regulatory options for AI.

The Center for AI Safety released a statement earlier this week signed by a group of leaders in artificial intelligence about the technology’s risks: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”

People are already using AI tools such as ChatGPT to help them write, search for information, and sift through data. And more advanced capabilities are certainly on the way.

The warning is one of several recent calls for greater regulation of AI. Anjana Susarla, the Omura-Saxena Professor in Responsible AI at Michigan State University, spoke to the Texas Standard about what stricter rules for AI might look like. Listen to the story above or read the transcript below.

This transcript has been edited lightly for clarity:

Texas Standard: Let me ask you about that statement – “mitigating the risk of extinction should be a global priority…” Does that sound hyperbolic to you? Is AI as big a risk to humankind as a pandemic or nuclear war?

Anjana Susarla: You know, it certainly sounds a bit hyperbolic. But I would say we do have to worry a lot about AI risks, because we are increasingly giving up so much control of our lives to algorithms, and there’s so much automated decision-making everywhere. That’s a risk in itself.

Could you give us a sense of how this technology could actually harm us and especially on the global scale?

Yes, I think what we don’t realize is how much business decision-making is being done by predictive algorithms. That may unintentionally lead to biases in AI systems that can affect a lot of, you know, how we live our lives. There are examples where somebody was wrongly accused of a crime because of facial recognition surveillance. I mean, that’s an extreme example…

But then again – forgive me for interrupting here – we’ve heard a lot about effects that can harm individuals: job losses, being discriminated against because of algorithmic inferences. It seems like what this statement is suggesting is… when you’re talking about the threat of extinction, are there things that we need to be thinking about on that scale? And what would you do to mitigate it?

Yeah, I think there are some aspects of that statement that we should take very, very seriously, in the sense that we have these very large models like ChatGPT. Now, is it possible that there’s some threat to global peace because some rogue regime gets hold of ChatGPT to maybe manufacture illicit weapons or things like that? Some of those risks are very real. There are definitely enough cybersecurity threats, I would say. What happens with all these deepfakes? Will they be used by malicious actors to intervene in elections? And in the geopolitical context, what happens with so much disinformation that can get fueled by AI?

So, for instance, you could have an uprising that triggers a regional conflict which leads to a greater war or something along those lines. Is this a job for Congress? Does there need to be intervention at that level or does there need to be a new regulatory body, perhaps, as we’ve seen with global tribunals or the United Nations or something at that level to address artificial intelligence?

I would say we would need a combination of things. Some types of risks should be addressed not necessarily by creating a new body – Congress doesn’t need to create a new body – but through algorithmic accountability rules and stronger data privacy laws, because data privacy is also part of responsible artificial intelligence. The second would be international cooperation. That is where you need organizations like the United Nations; multilateral cooperation is very necessary. Do we need some kind of international digital watermarking standard, for example, so that when an image is altered and used for malicious purposes, we can easily establish that it was altered through AI? Those are all things that we should be very careful about.

How urgent of an issue would you say this is? Do we need to act within months, years? What would you say?

I think that, given that we’ll have presidential elections within that timeframe, we have to really be wary about deepfakes, their regulatory impact, and the global security implications of easy access to generative AI tools.


Copyright 2023 KUT 90.5. To see more, visit KUT 90.5.

Michael Marks