I read a thing about artificial intelligence here and here, and I can't recommend strongly enough that you all do the same. It took me about 40 minutes to get through it all, mostly with a look of stunned disbelief on my face.
It's seriously terrifying shit. If you can't be arsed to read it all, it boils down to this:
1. Scientific advancement increases exponentially (the more you know, the easier it is to work shit out). So what we did in the whole of the 20th Century will probably take us, ooh, ten years to do now.
2. AI comes in three types: stupid, human-level, and super. So far we only have the stupid kind, which can be really good at one thing and one thing only (think Siri, Google Cars, anything where computers do the thinking for you). We are, however, in the process of creating human-level AI. But...
3. The rules of 'exponentiality' mean that as soon as we do, that human-level AI is likely to turn itself into super AI in about an hour, real time (there's a toy sketch of why just after this list).
4. Super AI is going to make us like ants looking at the Large Hadron Collider: totally unable to comprehend what it is, let alone how it was built or what it does.
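If you want a feel for why that jump could be so fast, here's a toy back-of-the-envelope sketch in Python. All the numbers are made up by me, not taken from the article; the only point is the shape of the curve: each improvement cycle gets faster, because the thing doing the improving is now smarter.

```python
# Toy model of an 'intelligence explosion': each improvement cycle gets
# faster because the system doing the improving is smarter than before.
# Purely illustrative numbers -- not from the article.

capability = 1.0             # human-level = 1.0
superintelligence = 1000.0   # arbitrary threshold for 'super AI'
hours = 0.0

while capability < superintelligence:
    cycle_time = 1.0 / capability   # smarter system -> faster improvement cycle
    capability *= 2                 # each cycle doubles capability
    hours += cycle_time

print(f"Human-level to superintelligent in roughly {hours:.1f} hours")
```

With these toy numbers the cycle times are 1 + 0.5 + 0.25 + ..., so the whole climb from human-level to a thousand times that takes about two hours. Change the made-up constants all you like; as long as smarter means faster-improving, the tail end of the curve is terrifyingly quick.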
The smartest people on the planet seem to fall into two camps. One side sees super AI as the saviour not just of humanity but of the Universe. It will be able to manipulate matter at an atomic (or even sub-atomic) level. It will be able to solve all of the puzzles of physics and the universe, even the ones we don't know about yet. It will be able to make us immortal. And all this within fifty years from now, on the optimistic estimates.
The other group of super smart people... well, they're not so happy-flappy and ready to welcome the computer deity we'd be creating. They point out that while humans have mushy things like feelings, empathy, morals, ethics and so on, a computer wouldn't. This could be the extinction event for humanity. Maybe for all life on Earth.
Now, while a part of me is all like "Oh, shit!", another, bigger bit of me is thinking of all the stories I could make out of this. That probably says a lot about my tendency to retreat into fantasy rather than deal with reality, but it also points to something I think may be the key to stopping the AI destruction of the world: make the thing love stories, particularly stories about people, and then it will have a good reason to keep us around.
Unless we all end up having to reenact Game of Thrones for the computerised overlord's amusement...
That's a nice thought, but as you point out, there are some stories that are not much fun to be part of. So we'll just teach the AI what sort of stories we like... I strongly recommend reading Yudkowsky's "The Hidden Complexity of Wishes" (http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/). It's short, and the main point is that "There is no safe wish smaller than an entire human morality."
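To make the essay's point concrete, here's a toy Python version of its Outcome Pump example: you wish your mother out of a burning building, and the pump grants whichever wish-satisfying outcome is easiest to bring about. The outcome list and the 'effort' numbers are entirely my own invention, just to show the shape of the problem.

```python
# Toy version of the Outcome Pump from "The Hidden Complexity of Wishes":
# the pump grants the cheapest outcome that satisfies the literal wish.
# Outcomes and effort scores are invented for illustration.

outcomes = [
    {"desc": "firefighter carries her out safely",  "outside": True,  "alive": True,  "effort": 9},
    {"desc": "gas main explodes, blasting her out", "outside": True,  "alive": False, "effort": 2},
    {"desc": "fire dies down, she stays inside",    "outside": False, "alive": True,  "effort": 1},
]

def wish_satisfied(outcome):
    # The wish as literally stated: she ends up outside the building.
    # Note what's missing: nothing about her being alive and unharmed.
    return outcome["outside"]

# The pump picks the easiest outcome that satisfies the literal wish...
granted = min((o for o in outcomes if wish_satisfied(o)), key=lambda o: o["effort"])
print(granted["desc"])  # -> "gas main explodes, blasting her out"
```

Patch the wish with "and alive" and the pump just finds the next loophole (alive but maimed, alive but someone else dies, and on and on). That's exactly why the essay concludes there's no safe wish smaller than an entire human morality, and why "teach it what stories we like" is a much harder specification problem than it sounds.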
(Incidentally, I'm not convinced about exponentiality. It all seems a bit mystical to me. Nonetheless, I do think that AI is a significant risk. Even if it doesn't develop exponentially, it may develop faster than we can handle.)