Corollary: "That's why I never give 110%".
Anyone else worried about the Singularity? (Is it time to talk about it again? No, it isn't!) I'm not worried. First off, I see no signs of us even coming close. Second, I'm convinced that if it does occur it will be wholly by accident, and without any need for the agency of an accelerated super-intelligence.
Currently, the artificial intelligence field is dominated by weak AIs, or as I prefer to call them, Narrow AIs. Programs that recognize voices, play chess, or answer Jeopardy questions do not, as it turns out, require a well-rounded education or a smooth, sophisticated, urbane personality. They do not possess the generalized intelligence that we clever monkeys (perhaps foolishly) pride ourselves on, the kind called Strong AI, or Broad AI.
So we are faced with something of a riddle. I know that IQ points as such don't really mean much, but let's assume they do, and let's further assume that your average human possesses 100 IQ points. So a top-of-the-line model, like Einstein or Newton, is around, what, 250 or so? And even then, that's a narrow IQ. Einstein may have been good at physics, but not at everything. Why, I'd be willing to bet he was lousy at building garden sheds, or at starting fires by rubbing sticks together. In fact, the last time a human animal had to be a real generalist was probably back when we were all hunter-gatherers, back when, if a tool or a weapon broke, you had to fashion a new one from scratch. So we ourselves are, to one degree or another, Narrow AIs (just without the artifice part).
So my suspicion is that even though we as a species are good at a lot of things, maybe those things aren't that hard. But then again, and here's the riddle, we can create things that do not exist in Nature, like H-bombs and nanotechnology, synthetic biology and quantum teleportation. Are those things hard to do? If not, then what could an animal with a thousand IQ points do? A million IQ points? We can't know the answer if those IQ points are general intelligence points, but we already have a good idea if those points are narrow, if they are field-specific (keeping in mind we are treating these points as meaning something).
Take Deep Blue. That program was designed with the sole purpose of beating Garry Kasparov at chess. So, how hard can chess be? Well, pretty fucking hard, to beat Kasparov. (Although, it turns out, Kasparov was beaten by a bug in the program. In the very first game, Deep Blue made a move towards the end which was tactically illogical, but later analysis by Kasparov and his team pointed to a deep strategic move, one seeing some 20 moves ahead in the game. This revelation froze Kasparov's heart solid with fear. It screwed up his head. Only later was it found out that the move Deep Blue made was an error, a random default the program would fall back on if it could find no viable moves.)
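Just to make that failure mode concrete, here is a minimal sketch, in Python, of how a fallback like that can masquerade as genius. It's purely illustrative, with invented names (`search_best_move`, `choose_move`); Deep Blue's actual source was never published, so this is an assumption about the pattern, not the real code.

```python
import random

# Hypothetical sketch of the failure mode described above. NOT Deep
# Blue's actual code; all names here are invented for illustration.

def search_best_move(position, deadline):
    """Stand-in for a real game-tree search. Returns None when the
    search cannot settle on a viable move in time."""
    # ... imagine an alpha-beta search running here ...
    return None  # simulate the "no viable move found" case

def choose_move(position, legal_moves, deadline):
    best = search_best_move(position, deadline)
    if best is not None:
        return best
    # The fallback: if the search comes up empty, play a random legal
    # move. To the opponent, a move with no discernible tactical point
    # can look like anything, including inscrutable deep strategy.
    return random.choice(legal_moves)

# Toy usage: the "inexplicable" move is just chance.
print(choose_move(position=None, legal_moves=["Rd1", "Qb6", "h4"], deadline=None))
```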
How about IBM's Watson? Was that program designed to make inferences and connections between categories, to understand sideways references, and to engage in both vertical reasoning and lateral imagining, or simply to win the game of Jeopardy? As such, IBM's Watson is only good at what you train it on, intensively. And that's fine. And it's probably very human, given that humans designed it.
But is it Broad AI? No.
We are increasingly entering the world of Narrow AIs. We are also, as a result, increasingly going to enter the world of major catastrophes and disasters committed upon our society by Narrow AIs. And if we get through them, the question will become: if we keep having these narrow escapes, how hard can the disasters be? After all, in the Terminator movies, Skynet was defeated by roach technologies wielded by the dregs and remnants of humanity. How hard could it have been to defeat it?
And maybe it's the really big one where we find out just how smart we are. Or not.