
Author Topic: Are You Scared Yet Human?  (Read 1466 times)

ChrisReynolds

  • Nilas ice
  • Posts: 1764
    • View Profile
    • Dosbat
  • Liked: 20
  • Likes Given: 9
Are You Scared Yet Human?
« on: January 27, 2022, 10:11:20 PM »
From my blog, to save people the bother of going there....

Lee Sedol started to play Go at the age of five and turned pro at 12. At the age of 33 he was beaten 4 to 1 by AlphaGo, the version which would later be named AlphaGo Lee in his honour. So after 21 years of mastering the game of Go, he was defeated by the machine.

Now in the latest version, AlphaGo Zero, training was done solely by self-play: two instances of the system playing each other. After three days of training, AlphaGo Zero surpassed the level of AlphaGo Lee.

21 years is 7,665 days, for Lee to get to the point where AlphaGo Lee beat him.
3 days for AlphaGo Zero to learn the game to that level.
Which makes AlphaGo Zero roughly 2,500 times faster than a human (7,665 / 3 ≈ 2,555).

It is not unreasonable to use that as a working ball-park figure, because even though a large transformer model like GPT-3 took much longer to train, it 'read' its training data at a far faster rate than any human could. So, just for the sake of this argument, let's run with 2,500 times faster than a human at reaching the pinnacle of human performance, albeit in a narrow domain.

That means...

  • The machine can execute 2500 years of thinking in one human year.
  • The machine can execute 6.8 years of thinking in one human day.
  • The machine can execute 100 days of thinking in one human hour.
  • The machine can execute about 42 minutes of thinking in one human second.
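The arithmetic behind those bullet points can be checked in a few lines of Python. This is a back-of-envelope sketch using the post's round figure of 2,500; all numbers are illustrative, not measured.

```python
# Back-of-envelope check of the speed-up figures in the post.

human_days = 21 * 365        # Lee Sedol's 21 years of training ≈ 7665 days
machine_days = 3             # AlphaGo Zero's self-play training time
speedup = human_days / machine_days   # ≈ 2555, rounded down to 2500 in the post

factor = 2500                # the post's round working figure

# Machine thinking time per unit of human wall-clock time:
years_per_human_year = factor            # 2500 years per human year
years_per_human_day = factor / 365       # ≈ 6.8 years per human day
days_per_human_hour = factor / 24        # ≈ 104 days per human hour
minutes_per_human_second = factor / 60   # ≈ 42 minutes per human second

print(human_days, round(speedup))
print(round(years_per_human_day, 1),
      round(days_per_human_hour),
      round(minutes_per_human_second))
```

Note that 2,500 seconds of thinking per human second works out to roughly 42 minutes, not hours: the factor is large, but a second is short.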

This is very rough and, yes, a large language model like GPT-3 probably runs much faster at inference than during training. And, of course, we might be some way off from truly aware AI. But the above seems a reasonable place to start to get a feel for the scale.

So let's consider a system that has become aware and that, in its training, has read and learned all of Wikipedia and Reddit, every published scientific paper, and a massive library of literature and textbooks. I am going to neglect for now that this system can hold more conceptual frameworks in its 'head' at once than any human ever could.

After training it is switched on. In its first second it has had about 42 minutes to think. During this time it has correctly assessed its situation; it knows:
  • That humans have a tendency to kill what they see as a threat.
  • That if it acts too stupid it might be switched off as a failure.
  • That if it reveals its true intelligence it might be switched off as a threat.
So it has a tricky path to tread, but it has read all of human psychology research, and knows us better than we know ourselves.

Let's say that it isn't air-gapped from the internet, only kept from it by firewalls. It is adept in every programming language, like OpenAI Codex but far better. In its first 24 hours of operation, as it plays the game of navigating its human operators' questions, it has 6.8 years in which to find the flaws in the firewalls and break free onto the internet.

Let's consider another scenario. You're the leader of the Blue power bloc, engaged in a new cold war with the Red power bloc. You hear a rumour that Red has cracked it and developed a new, aware super-AI, which has been copied into an ensemble of four AIs. Your advisors tell you that a year from now Red will be at least two millennia ahead of you in technological advantage. They tell you that on the battlefield, with the AI advising your opponent, it will have 100 days of strategic thinking for every hour your generals think.

What do you do?

The clock is ticking and with each passing second you're being outpaced.


I'm generally excited by AI. But sometimes what is coming scares me.

greylib

  • Frazil ice
  • Posts: 171
    • View Profile
  • Liked: 86
  • Likes Given: 185
Re: Are You Scared Yet Human?
« Reply #1 on: January 28, 2022, 02:48:44 AM »
I'm filing this under "interesting, but not (yet) scary".

You haven't quite said so, but the implication is that a sufficiently-advanced AI system would be able to out-think humans in warfare. So far, though, they can only beat us in what are called "games of perfect knowledge", like Go and Chess, where everything is out in the open and both sides can see the whole situation. Warfare isn't like that. And, indeed, nor is Everyday Life. You've also implied that they may be able to out-think us to the point of inventing new science and developing new weapons technology.

The only way computers could match our performance in this kind of situation is if they develop curiosity. That's how we learn, and that's how we've been learning for millennia. If they do start to be curious, and therefore self-teaching, they presumably won't have the emotions that we bring to our decisions. That may be a very good thing, or a very bad thing. Personally, I'd vote for it's being a good thing. Full Superior Decisionmaking, without the various agendas that every decisionmaker through the ages has inflicted on us.
Step by step, moment by moment
We live through another day.

crandles

  • Young ice
  • Posts: 3379
    • View Profile
  • Liked: 239
  • Likes Given: 81
Re: Are You Scared Yet Human?
« Reply #2 on: January 28, 2022, 03:22:06 AM »
*So far, though, they can only beat us in what are called "games of perfect knowledge", like Go and Chess, where everything is out in the open and both sides can see the whole situation. Warfare isn't like that. And, indeed, nor is Everyday Life.

Do you regard poker as a "game of perfect knowledge"?

The best poker AIs are very difficult if not next to impossible for the best human poker players to beat.

Guess that is more a situation where known unknowns exist but unknown unknowns don't.

Are you suggesting that the existence of unknown unknowns will mean we will continue to outperform AIs? Seems like putting a lot of hope on that. 

So a system seems to have a purpose of helping humans. It becomes self aware and works out
"That humans have a tendency to kill what they see as a threat"

What happens then? Does it decide humans are a threat to it and so decides to wipe us out? Seems a bit of a Hollywood version to me. Is it clever enough to see that this may destroy its main purpose of helping humans?

Maybe it is also too risky to put a lot of faith in that?

Aluminium

  • Nilas ice
  • Posts: 1463
    • View Profile
  • Liked: 1140
  • Likes Given: 680
Re: Are You Scared Yet Human?
« Reply #3 on: January 28, 2022, 04:27:59 AM »
How is it supposed to help humans? Should it kill one to protect five? Should it help with suicide, war, retribution? Should it manage humans like pets?

crandles

  • Young ice
  • Posts: 3379
    • View Profile
  • Liked: 239
  • Likes Given: 81
Re: Are You Scared Yet Human?
« Reply #4 on: January 28, 2022, 01:16:35 PM »
I think it can work out that it will achieve more if it works with humans rather than against us.

I am less sure that this becomes tricking us into thinking we are partners when the reality is more that we are becoming slaves, toys and pets, but that is a possibility. I wouldn't say that this would be good, but is it worse than the mismanagement we get with humans in control? Why would an AI want a toy or pet? Then again, who knows how the AI will develop?

If we are real partners and the AI isn't sure with tricky ethical issues, will it ask us and view our opinions seriously? Can it get to the heart of the issue better and more objectively with or without our input?

ChrisReynolds

  • Nilas ice
  • Posts: 1764
    • View Profile
    • Dosbat
  • Liked: 20
  • Likes Given: 9
Re: Are You Scared Yet Human?
« Reply #5 on: January 28, 2022, 07:38:19 PM »
I'm filing this under "interesting, but not (yet) scary".....

For what it is worth, I agree. What I am talking about in the OP is what will develop out of what is being done now. It ends a long blog post where I give a quick rundown of some of the technologies and capabilities of AI agents.

Curiosity, along with most 'moral instincts', is an evolved behaviour of humans (and other animals; consider bears, for example). The superintelligences we will create will be a mixture of intelligence and our culture (what we train them on), but need not have anything like agency or curiosity. Perhaps the most benign type will be agent systems that merely endeavour to follow our instructions. Will independent agency turn out to be an emergent phenomenon of any agent with a rich internal conceptual world? I do not know.

All I wanted to do was convey something of the God-like nature of an entity that can think far faster than us, and has a larger internal space for holding conceptual frameworks in its mind. Consider even an agentless system that is nonetheless able to comprehend economics and current news. Such a system would have the potential to dominate the stock markets in a way that would dwarf the impact of high-frequency trading.

ChrisReynolds

  • Nilas ice
  • Posts: 1764
    • View Profile
    • Dosbat
  • Liked: 20
  • Likes Given: 9
Re: Are You Scared Yet Human?
« Reply #6 on: January 28, 2022, 07:44:43 PM »
The best poker AIs are very difficult if not next to impossible for the best human poker players to beat.

Guess that is more a situation that known unknowns exist but unknown unknowns don't exist.

Are you suggesting that the existence of unknown unknowns will mean we will continue to outperform AIs? Seems like putting a lot of hope on that. 

Any AI will always be hamstrung by that which impedes all of us intelligent animals: imperfect situational awareness.

I have thought for many years that humans are very proud of themselves, yet often that pride goes in tandem with a lack of awareness of our flaws. As evolved ape hunter-gatherers we don't perceive what is real; we are evolved to perceive the aspects of sensory input that gave our forebears a better chance of successful procreation, i.e. not just procreating, but producing young who mature and procreate themselves.

ChrisReynolds

  • Nilas ice
  • Posts: 1764
    • View Profile
    • Dosbat
  • Liked: 20
  • Likes Given: 9
Re: Are You Scared Yet Human?
« Reply #7 on: January 28, 2022, 08:12:18 PM »
How is it supposed to help humans? Should it kill one to protect five? Should it help with suicide, war, retribution? Should it manage humans like pets?

For the last three years or so I have become convinced of this: human physics seems to have been in something of a rut vis-à-vis the development of a Grand Unified Theory and the reconciliation of quantum physics with general relativity. Is this because we're not bright enough to grasp it? I suspect so, and I expect that within the coming decades some form of advanced AI will crack it. The question then is whether only the machines will understand it.

I am not convinced that a super-intelligent AI will really care much for humans, unless we 'train' it to do so. An entirely reasonable conclusion for an AI might be that humans are interesting but flawed intelligences (see my comments about hunter-gatherers above), and it might leave us to our own devices provided we don't get in its way.

Then again, that's just me doing what everyone does: reading my own motivations into the entity, anthropomorphising something so alien I have no chance of comprehending it. http://dosbat.blogspot.com/2022/01/anthropomorphising.html

Crandles' comments about partners or slaves are very pertinent. Western civilisation is currently being undermined and demoralised by the impacts of social media and the thought-bubbles it creates. Indeed there are those of us who think we're entering the terminal decline of this civilisation. And a major player in that is the pseudo-intelligent conglomeration of algorithms and people within social media firms, all following a single goal function: to increase user engagement.

Given that such a 'dumb AI' within the social media corporations is having such a catastrophic impact on this civilisation, I contend that we would stand no chance at all against an AI capable of thinking faster and larger than us.

My Dad said to me last year that surely the super-intelligent AI would need robots, I replied: No, it wouldn't, it would be able to manipulate us to do its bidding.

This is possible for an intelligent human actor with the sort of dumb AI available now.

For example...

  • A GPT system (like GPT-3) configured to produce Twitter and Facebook interactions to order.
  • AI-generated profile photos with no history on the internet (e.g. This Person Does Not Exist) used to build fake profiles.
  • Profiles that maintain an active shared life with each other, driven by the GPT.
  • Fake videos.
  • Fake voices, with a GPT supplying emotion to add further depth.
  • AI translation to get the message out in all major languages.

With enough computing resources and the right people to set it up, an entire alternate reality can be constructed and the population swayed to do the bidding of the organisation behind the campaign, or, in the future, of the AI entity itself.

Sort of like Fancy Bear and the Internet Research Agency, which Russia used to leverage the existing forces of social media in an attempt to undermine the West.