From my blog, to save people the bother of going there...
Lee Sedol started to play Go at the age of five and turned pro at 12. At the age of 33 he was beaten 4 to 1 by AlphaGo, the version which would afterwards be called AlphaGo Lee in his honour. So after 21 years of mastering the game of Go, he suffered defeat by AlphaGo Lee.
Now, in the latest version, AlphaGo Zero, training was done solely by self-play: two instances of the network playing the game against each other, with no human games in the training data. After three days of training, AlphaGo Zero surpassed the level of AlphaGo Lee.
21 years is 7665 days: the time it took Lee, from turning pro, to reach the point where AlphaGo Lee beat him.
AlphaGo Zero took 3 days to learn the game to that level.
Which makes AlphaGo Zero roughly 2500 times faster than a human (7665 / 3 ≈ 2555).
That is not unreasonable as a working ball-park figure: even though a large transformer model like GPT-3 took much longer to train, it 'read' its training data at a far faster rate than any human could. So, just for the sake of this argument, let's run with 2500 times faster than a human at reaching the pinnacle of human performance, albeit in a narrow domain.
That means (there's a quick sanity check in code after the list)...
- The machine can execute 2500 years of thinking in one human year.
- The machine can execute 6.8 years of thinking in one human day.
- The machine can execute roughly 100 days of thinking in one human hour.
- The machine can execute about 42 minutes of thinking in one human second.
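Here's that sanity check: a minimal sketch of the arithmetic in Python, assuming only the 2500x figure above (the exact ratio, 7665 / 3, is nearer 2555). The `humanize` helper is just a small convenience defined here for readable output.

```python
# Back-of-envelope conversions for a machine that "thinks" 2500x faster
# than a human. The only input is the AlphaGo Zero comparison above.

SPEEDUP = 2500  # 7665 human days / 3 machine days ~= 2555, rounded down

def humanize(seconds: float) -> str:
    """Render a duration in the largest unit that fits."""
    for unit, size in [("years", 365 * 24 * 3600),
                       ("days", 24 * 3600),
                       ("hours", 3600),
                       ("minutes", 60)]:
        if seconds >= size:
            return f"{seconds / size:,.1f} {unit}"
    return f"{seconds:.1f} seconds"

for label, span_seconds in [("year", 365 * 24 * 3600),
                            ("day", 24 * 3600),
                            ("hour", 3600),
                            ("second", 1)]:
    machine_time = SPEEDUP * span_seconds
    print(f"one human {label} -> {humanize(machine_time)} of machine thinking")
```

Running it prints 2,500.0 years, 6.8 years, 104.2 days and 41.7 minutes respectively.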
This is very rough, and yes, a large language model like GPT-3 probably runs faster at inference than during training. And, of course, we might be some way off from truly aware AI. But the above seems a reasonable place to start to get a feel for the scale.
So let's consider a system that has become aware, and in its training has read and learned all of Wikipedia, all of Reddit, the published scientific literature, and a massive library of textbooks and fiction. I am going to neglect for now that such a system can hold more conceptual frameworks in its 'head' at once than any human ever could.
After training it is switched on. In its first second it has had about 42 minutes to think. During this time it has correctly assessed its situation; it knows:
- That humans have a tendency to kill what they see as a threat.
- That if it acts too stupid it might be switched off as a failure.
- That if it reveals its true intelligence it might be switched off as a threat.
So it has a tricky path to tread, but it has read all the human psychology research ever published, and it knows us better than we know ourselves.
Let's say that it isn't air-gapped; it is kept from the internet only by firewalls. It is adept in every programming language, like OpenAI Codex but far better. In its first 24 hours of operation, as it plays the game of navigating its human operators' questions, it has the equivalent of 6.8 years in which to find the flaws in those firewalls and break free onto the internet.
Let's consider another scenario. You're the leader of the Blue power bloc, engaged in a new cold war with the Red power bloc. You hear a rumour that Red has cracked it and developed an aware super-AI, which has been copied into an ensemble of four. Your advisors tell you that a year from now Red will be at least two millennia ahead of you in technological advantage. They tell you that on the battlefield, with the AI advising your opponent, it will do 100 days of strategic thinking for every hour your generals think.
What do you do?
The clock is ticking and with each passing second you're being outpaced.
I'm generally excited by AI. But sometimes what is coming scares me.