THE LATEST THINKING
The opinions of THE LATEST’s guest contributors are their own.

The Fear of Stupid Superintelligent AI

Ville Kokko

Posted on October 30, 2021 08:47


A recurring scenario in fears of how superintelligent AI might run amok seems to postulate that it would still be subhuman in terms of understanding.

The history of "artificial intelligence," or perhaps I should say ideas about AI, has been a long series of underestimations of how hard it is to create a truly human-level intelligence. It's always just around the corner, until everyone realizes it's not even on the horizon yet; then the excitement builds again and the cycle begins anew. (A good explanation of this can be found in Melanie Mitchell's Artificial Intelligence: A Guide for Thinking Humans.)

Currently, the world is again said to be revolutionized by so-called AI. Perhaps it is, at least a little, but what's being called AI refers to deep-learning algorithms that can do some powerful tricks in specific contexts. They still can't think, although they're getting worryingly good at faking it. (Again, see Mitchell's book for more details and evidence.)

Maybe this is why, even among artificial intelligence researchers who expect AI to advance in leaps and bounds, one scenario that keeps coming up is some variant of the following:

For example, asking AI to cure cancer as quickly as possible could be dangerous. “It would probably find ways of inducing tumours in the whole human population, so that it could run millions of experiments in parallel, using all of us as guinea pigs,” said [Professor Stuart] Russell. “And that’s because that’s the solution to the objective we gave it; we just forgot to specify that you can’t use humans as guinea pigs and you can’t use up the whole GDP of the world to run your experiments and you can’t do this and you can’t do that.”

The striking thing to me about this scenario is that it postulates an "intelligence" that has basically no idea what it's doing, while at the same time being capable of carrying out an experiment that would be extremely challenging for humans even if they were willing to attempt it.
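As a toy sketch of the logic in that scenario (my own illustration; the plans, numbers, and function names below are all invented), consider an optimizer that ranks candidate plans purely by the stated objective, "cure cancer as quickly as possible." Without the constraints we "forgot to specify," the monstrous plan wins on paper; adding them as hard filters changes the answer:

    # Toy illustration with made-up plans and scores: a literal-minded
    # optimizer given only "minimize years to a cure" picks the plan that
    # violates unstated human norms.
    plans = [
        # (name, years_to_cure, uses_humans_as_guinea_pigs, fraction_of_world_gdp)
        ("induce tumours in everyone, run trials in parallel", 2, True, 0.9),
        ("conventional trials with consenting volunteers", 15, False, 0.001),
        ("large-scale in-vitro and animal-model program", 8, False, 0.01),
    ]

    def naive_choice(plans):
        """Optimize the stated objective only: minimize years to a cure."""
        return min(plans, key=lambda p: p[1])

    def constrained_choice(plans, max_gdp=0.05):
        """Same objective, plus the constraints we 'forgot to specify'."""
        allowed = [p for p in plans if not p[2] and p[3] <= max_gdp]
        return min(allowed, key=lambda p: p[1])

    print(naive_choice(plans)[0])        # the monstrous plan: fastest on paper
    print(constrained_choice(plans)[0])  # fastest plan that respects the constraints

The point is not the code but the asymmetry: the objective takes one line to state, while the list of things "you can't do" is effectively unbounded.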

Just about any human being would have the background general understanding that, so to speak, "you can't do this and you can't do that." Stuart Russell is also quoted in the article as saying that the goal should be to have the AI understand that the true goal isn't so fixed, and to ask humans about it; but even this suggests, however ambiguously, that the AI could not think for itself.
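Russell's "ask the humans" idea can be sketched in miniature too. In this simplified picture (mine, not his; the threshold and probabilities are invented), the agent holds a probability that each action is acceptable under the humans' true, unstated goal, and defers whenever it isn't confident enough:

    # Minimal sketch under assumed numbers: an agent uncertain about the true
    # objective defers to a human instead of optimizing its literal instruction.
    def act_or_ask(action, p_acceptable, threshold=0.99):
        """Execute only when confidently acceptable; otherwise ask first."""
        if p_acceptable >= threshold:
            return f"execute: {action}"
        return f"ask a human before: {action}"

    print(act_or_ask("run a trial with consenting volunteers", 0.999))
    print(act_or_ask("experiment on unconsenting humans", 0.02))

Even here, the hard part is hidden in where p_acceptable comes from, which is exactly the kind of background understanding humans apply without thinking.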

The difficulty in producing AI that can do what humans do lies largely in the fact that we have a very general intelligence and use all of it for just about anything, as opposed to algorithms tied to specific contexts. Decent machine translation of literary texts, for example, remains impossible because human writers and translators use their whole intelligence and background knowledge in producing good texts.

Still, I don't think there's anything inherently impossible about true machine intelligence, and perhaps some breakthrough will suddenly happen. In any case, it's very important, as Russell argues, to think about these things as soon as possible, before it's too late.

