Well, if your anxiety plate wasn't already full, how about a machine-driven takeover of the world? Unlikely, but it's yet another dystopian future that we humans could potentially realize. Yay us!
And of course, that's almost the good news when, with our present all-too-Trumpian world in mind, you begin to think about how Artificial Intelligence might make political and social fools of us all. Given that I'm anything but one of the better-informed people when it comes to AI (though on Less Than Artificial Intelligence I would claim to know a fair amount more), I'm relieved not to be alone in my fears.
In fact, among those who have spoken out fearfully on the subject is the man known as “the godfather of AI,” Geoffrey Hinton, a pioneer in the field of artificial intelligence. He only recently quit his job at Google to express his fears about where we might indeed be heading, artificially speaking. As he told the New York Times recently, “The idea that this stuff could actually get smarter than people — a few people believed that, but most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Now, he fears not just the coming of killer robots beyond human control but, as he told Geoff Bennett of the PBS NewsHour, “the risk of super intelligent AI taking over control from people… I think it’s an area in which we can actually have international collaboration, because the machines taking over is a threat for everybody. It’s a threat for the Chinese and for the Americans and for the Europeans, just like a global nuclear war was.”
tildeb, May 20, 2023 at 8:37 am
It's important to remember that AI in its current iteration is really just text prediction. Sure, we have seen it produce cohesive passages of text, but that has nothing whatsoever to do with fact-checking them. These programs are not 'intelligent' in the sense of making real-world connections, being independently creative, or even evaluating what they produce. They're just text predictors. That makes AI like ChatGPT a very useful tool for dis- and misinformation, and that is where the danger lies today.
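The "text prediction" point can be made concrete with a toy sketch: a bigram model that, given a word, predicts the word most often seen to follow it in its training text. This is a deliberate simplification of how large language models work (they predict tokens from far richer context), and the function names and tiny corpus below are invented for illustration, but it shows the key limitation the comment describes: the model tracks which words co-occur, with no notion of whether the resulting sentence is true.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, how often each other word follows it."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent successor of `word`, or None if unseen."""
    successors = follows.get(word.lower())
    if not successors:
        return None
    return successors.most_common(1)[0][0]

# Invented mini-corpus; a real model trains on billions of words.
corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigram(corpus)
print(predict_next(model, "sat"))  # -> "on"
```

Nothing in `predict_next` consults facts; it only replays statistics of its training text, which is why fluency and truthfulness come apart.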
People who lack the critical reasoning to weigh such text and its meaning in a Bayesian way will be easily and consistently fooled. And it's a growing danger, because we're not teaching the next generation how to be critical and creative thinkers. We're so very busy teaching them what to think (relying on group-based identities and a supposed power hierarchy), and then doubling down on that framework by teaching them how to act on promoting it, because it's supposedly so virtuous.
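The "Bayesian evaluation" the comment invokes can be sketched as a one-line application of Bayes' rule. The numbers below are invented for illustration: the point is that a text predictor produces fluent prose whether or not a claim is true, so the two likelihoods are nearly equal and reading a plausible-sounding sentence should barely move a careful reader's belief.

```python
def bayes_update(prior, p_claim_if_true, p_claim_if_false):
    """P(claim is true | we read a fluent statement of it), via Bayes' rule."""
    numerator = p_claim_if_true * prior
    evidence = numerator + p_claim_if_false * (1 - prior)
    return numerator / evidence

# A fluent generator asserts things with near-equal ease whether true or not,
# so the posterior stays close to the prior (all probabilities invented).
posterior = bayes_update(prior=0.5, p_claim_if_true=0.9, p_claim_if_false=0.8)
print(round(posterior, 3))  # 0.529 -- barely above the 0.5 prior
```

A reader who treats fluency as strong evidence is, in effect, assuming `p_claim_if_false` is low, which is exactly the mistake the comment warns about.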
Education it ain’t.
So the danger from AI grows exponentially as the population becomes ever more dumbed down and authoritarian.