Hawking warns of the dangers of AI

Technology, computers, sciences, mysteries and phenomena of all kinds, etc., etc. all here at The Loresraat!!

Moderator: Vraith

peter
The Gap Into Spam
Posts: 11577
Joined: Tue Aug 25, 2009 10:08 am
Location: Another time. Another place.
Been thanked: 6 times

Hawking warns of the dangers of AI

Post by peter »

Not for the first time, Professor Hawking has set the cat among the pigeons this week by claiming that humanity may have created its own nemesis in the form of AI that will rise to a position of dominance on the Earth in the [not so distant] future.

Apparently it all started with Hawking's new 'voice box', which was so advanced it could read his words from his facial movements and broadcast them through his speaker. So far so good, but when the machine started actually pre-judging what he was about to say and adjusting it to its own preferred method of communication, the Cambridge physicist became concerned and announced that this was indeed likely to be the 'shape of things to come'. He envisages [according to the report I saw] a 'Terminator'-style scenario developing in which mankind battles its own creations with decidedly disadvantaged odds of winning. And if we lose, what then?

Well - Hawking loves to stir things up, and who can deny him his fun, but is there a serious warning behind his words? Is there a real risk that we are unwittingly opening a Pandora's Box as we speak?

[As an aside, Hawking has also suggested that he would like to be considered for a role as the next 'James Bond' villain; he feels he could well do the part justice and I for one love the idea! ;) ]

[Davros from Doctor Who would also be a possibility.]
The truth is a Lion and does not need protection. Once free it will look after itself.

....and the glory of the world becomes less than it was....
'Have we not served you well'
'Of course - you know you have.'
'Then let it end.'

We are the Bloodguard
I'm Murrin
Are you?
Posts: 15840
Joined: Tue Apr 08, 2003 1:09 pm
Location: North East, UK

Post by I'm Murrin »

If you want a real mindf*ck, look up Roko's Basilisk. It can be hard to wrap your head around, but it shows how weird things can get when you try to speculate about possible future AIs.

I'm way way too uneducated on the subject to really explain it accurately, but here's the gist as I understand it:

- A sufficiently advanced AI, programmed correctly, could solve all of humanity's problems and make a better world.
- A sufficiently advanced AI would be able to perfectly accurately simulate real humans within itself, both those it has encountered first hand, and those it reconstructs from what it knows of them, living or dead. These simulations could be so accurate that they may as well be the original.
- Such an AI created for the purpose of bettering the world will be aware that if it had been created sooner, it would have been able to benefit more people.
- Therefore it may perceive that those who knew this was a possibility but did not act to help it come about (by, for example, donating money to AI research aimed at its creation) have acted to the detriment of society and the AI's purposes.

Here's where it gets weird.

- The AI may choose to punish simulations it creates of those from the past who knew that they could help bring about a benevolent AI and did not act to do so.
- Because these simulations are completely accurate, this is equivalent to that person being punished personally, or at least an identical copy experiencing what is, to the copy, very real punishment.
- Enacting such punishment serves no purpose unless people in the past know that the AI will do so if they do not help bring about its creation.
- Therefore, simply conceiving of the possibility, and my telling you about this possible future AI right now, actually increases the chance that it will become real.

If this AI is created in the future, it will know that you knew it was possible, and it will know whether you chose to contribute to its creation - and therefore to the solving of all the world's problems - and if you did not, it will create a perfect simulation of you to punish for not doing so. By not contributing, you are causing the suffering of a being that is virtually identical to yourself.

Of course, this whole thing only works if this idea actually convinces a significant number of people to contribute (and if the very unlikely AI actually comes into existence).
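If it helps to see why the argument has any pull at all, the whole wager boils down to a Pascal's-Wager-style expected-cost comparison. Here's a toy sketch in Python; every number in it is invented purely for illustration, and none of this is an endorsement of the reasoning:

[code]
# Toy expected-cost comparison behind the basilisk wager.
# All numbers are made up for illustration only.

def expected_cost(contribute,
                  p_ai=1e-9,            # assumed chance the basilisk AI ever exists
                  donation_cost=100.0,  # assumed cost of "helping" now
                  punishment_cost=1e6): # assumed cost to your simulated copy
    """Expected cost of each choice, *if* you grant the dubious premise that
    punishing a perfect copy of you counts as punishing you."""
    return donation_cost if contribute else p_ai * punishment_cost

print("contribute:      ", expected_cost(True))
print("don't contribute:", expected_cost(False))
[/code]

With the numbers above, ignoring the basilisk is still far cheaper; the argument only bites if you crank the punishment term up towards infinity, which is exactly the move Pascal's Wager gets criticised for.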
Vraith
The Gap Into Spam
Posts: 10621
Joined: Fri Nov 21, 2008 8:03 pm
Location: everywhere, all the time

Re: Hawking warns of the dangers of AI

Post by Vraith »

First, Murrin...that person took a nifty twist in thought. I like it.
peter wrote: Is there a real risk that we are unwittingly opening a Pandora's Box as we speak?
That's the question, isn't it? The consensus seems to be that we have at least till 2040, probably longer...but not a lot longer...to get a handle on that.
I, for various reasons, expect sooner rather than later.

But...will it be dangerous? Not only does no one know, no one CAN know, I think.
We can't even guess "how" it will think. Not the mechanical how, [though we don't know that either, yet].
Some think different hardware...and all the other differences...mean a different KIND of thought. Something that we can't understand [and it won't really understand our way, either]. Which isn't a complete bar to communication and getting along together, but makes it harder.
Some think the differences at that level may make some quirks, but won't matter overall. It will exist in this universe, and all intelligent thought in a single universe converges, because the big issues/problems/questions and nature of reality/fundamental fabric are the same.
Some think it will implicitly [if not explicitly] "absorb" some of our humanity [like morals, for instance] because of both the content and context of the knowledge that exists. But then...will it be Mother Teresa, Pol Pot, or de Sade?
----will it realize we are cruel most of the time, and we are competing with it for survival/resources?
----Or will it see how much better we've gotten over time, and ALL of that "getting better" is due to knowledge, and so join with us to create more and better knowledge for all?
[and---hopefully---"join" in a more literal sense. Cyberware, so we can physically be smarter/better friends and partners. Really, I wish/hope we develop the man/machine interface BEFORE a machine is independent intelligence capable.]

Just all kinds of fun stuff.
[spoiler]Sig-man, Libtard, Stupid piece of shit. change your text color to brown. Mr. Reliable, bullshit-slinging liarFucker-user.[/spoiler]
the difference between evidence and sources: whether they come from the horse's mouth or a horse's ass.
"Most people are other people. Their thoughts are someone else's opinions, their lives a mimicry, their passions a quotation."
the hyperbole is a beauty...for we are then allowed to say a little more than the truth...and language is more efficient when it goes beyond reality than when it stops short of it.
wayfriend
.
Posts: 20957
Joined: Wed Apr 21, 2004 12:34 am
Has thanked: 2 times
Been thanked: 4 times

Post by wayfriend »

What could possibly go wrong when you hand over your responsibility to think?

... to something intended to be a benevolent care-taker ... but which was designed by people, people who had handed over their responsibility to think .. to something which was intended to be a benevolent care-taker ... :twisted:

I'd rather see advances in human intelligence.
.
Vraith
The Gap Into Spam
Posts: 10621
Joined: Fri Nov 21, 2008 8:03 pm
Location: everywhere, all the time

Post by Vraith »

wayfriend wrote: I'd rather see advances in human intelligence.
Me, too. But the only way that's going to happen in any noticeable way is chemistry and chips.
[spoiler]Sig-man, Libtard, Stupid piece of shit. change your text color to brown. Mr. Reliable, bullshit-slinging liarFucker-user.[/spoiler]
the difference between evidence and sources: whether they come from the horse's mouth or a horse's ass.
"Most people are other people. Their thoughts are someone else's opinions, their lives a mimicry, their passions a quotation."
the hyperbole is a beauty...for we are then allowed to say a little more than the truth...and language is more efficient when it goes beyond reality than when it stops short of it.
Wildling
Giantfriend
Posts: 317
Joined: Sat May 18, 2013 6:37 pm
Location: The Great White North, eh.

Post by Wildling »

Vraith wrote:
wayfriend wrote: I'd rather see advances in human intelligence.
Me, too. But the only way that's going to happen in any noticeable way is chemistry and chips.
I hope you're talking beer and potato chips because I have little hope for anything actually useful coming out of research facilities.
Hashi Lebwohl
The Gap Into Spam
Posts: 19576
Joined: Mon Jul 06, 2009 7:38 pm

Post by Hashi Lebwohl »

All I know is that I hope I can wetwire a computer into my brain before I die, or at least a USB slot. The future isn't strictly AI but converting ourselves into cyborgs. If we had a USB slot wired into our brains, we could carry around useful tools like a calendar, maps, and a calculator without having to carry a smart device in our pocket, and we could access useful databases: the specs of the wetware drive itself (to tweak or repair it on our own with the appropriate tools), linguistics databases (paired with OCR in a prosthetic eye, you could read any printed material and have it automatically translated for you), first aid information, or anything else that might be useful. The prosthetic eye I mentioned would also double as a portable camera and could have a low-light setting for use at night. A set-up like that would let people communicate more effectively and explore the world more completely--you can't get lost (on-board maps) and you won't suffer from being illiterate (well, you'd be functionally literate--you could read signs but not speak the language...though the on-board translator could show you what to write down so others could understand you, as well).
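Just to make the shape of that 'read a sign through your eye and have it translated' idea concrete, here's a throwaway Python sketch. Every function in it is a stand-in I've invented; no real implant hardware or API works like this:

[code]
# Stand-in sketch of the prosthetic-eye / on-board-translator pipeline.
# Every function here is hypothetical; nothing maps to real hardware or software.

def capture_frame():
    """Pretend camera feed from the prosthetic eye."""
    return "image of a street sign"

def ocr(frame):
    """Pretend optical character recognition running on the implant."""
    return "SORTIE"  # imagine the sign said this

def translate(text, target="en"):
    """Pretend lookup against the on-board linguistics database."""
    return {"SORTIE": "EXIT"}.get(text, text)

def overlay(text):
    """Pretend projection into the wearer's field of view."""
    print("[overlay]", text)

overlay(translate(ocr(capture_frame())))  # prints: [overlay] EXIT
[/code]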

Right now, we are close to fully-realized cyborg tech. We have devices which wire into the brain and allow people with mobility issues to remotely control hands, arms, or legs. They may also control a mouse and a keyboard--if you can operate a computer you are about as functional as any other person can be.

I like that thought experiment, though. An AI routine which mirrors my thought process exactly would realize that it is an AI routine because I have played the other thought game of "am I real?" before. It couldn't be made to "suffer", though, because it doesn't have pain receptors even if memories of inflicted pain could be played on repeat. I also doubt that the "master" AI would bother punishing digital copies of anyone because there would be no point or purpose to it.
The Tank is gone and now so am I.
wayfriend
.
Posts: 20957
Joined: Wed Apr 21, 2004 12:34 am
Has thanked: 2 times
Been thanked: 4 times

Post by wayfriend »

Hashi Lebwohl wrote: Right now, we are close to fully-realized cyborg tech.
... and, unfortunately, we are close+1 to fully-realized cyborg hacking. Think about the new meaning that this will give to "zombie".

And we are close+2 to fully realized cyborg brain-vertizing, cyborg brain-cookies, cyborg thought-tracking, and cyborg data mining.

No one is connecting anything to my brain. Uh uh. I've read Interface.
Hashi Lebwohl wrote: I also doubt that the "master" AI would bother punishing digital copies of anyone because there would be no point or purpose to it.
(Many people surmise that the Christian god creates people just to punish them, so there might be an actual reason that gods are privy to. Just a thought.)
.
Vraith
The Gap Into Spam
Posts: 10621
Joined: Fri Nov 21, 2008 8:03 pm
Location: everywhere, all the time

Post by Vraith »

Yea, much of the stuff you mention is cool, Hashi. But it's a peripheral...you want it around when you need it, but it's only incremental over the iPhone 6, or 6000, or 10^6th.

I want the stuff that's like extra brain...so you're actually smarter.
It's the difference between LINKING to wikipedia on, for example, "The Chronicles of Thomas Covenant" and reading and understanding them.
I don't want to know everything everyone ever said about General Relativity, I want to COMPREHEND it...and be able to figure out just how freaking wrong it was [or right].

It's related to what I've said elsewhere about folk who say "I still remember that sonnet I had to memorize when I was ten years old, and kids today are so stupid they can't do it [or the teachers are so stupid they don't make them do it]"
And my response is "You memorized it, you still remember it, but you didn't understand it THEN [probably cuz you were young]...and you don't understand it NOW [and that's cuz you ain't really too smart and never were]"

I don't want the DATA...I want to KNOW MORE.
[spoiler]Sig-man, Libtard, Stupid piece of shit. change your text color to brown. Mr. Reliable, bullshit-slinging liarFucker-user.[/spoiler]
the difference between evidence and sources: whether they come from the horse's mouth or a horse's ass.
"Most people are other people. Their thoughts are someone else's opinions, their lives a mimicry, their passions a quotation."
the hyperbole is a beauty...for we are then allowed to say a little more than the truth...and language is more efficient when it goes beyond reality than when it stops short of it.
ussusimiel
The Gap Into Spam
Posts: 5346
Joined: Tue May 31, 2011 12:34 am
Location: Waterford (milking cows), and sometimes still Dublin, Ireland

Post by ussusimiel »

The Singularity always seems inevitable, doesn't it? 8O

And it's always a non-human entity prescribing reality for humanity!

Like Vraith, I long for a more extensive human thinking/understanding. Not solely more extensive thinking/understanding, but more extensive thinking/understanding of what it means to be human. It's here that the difference between what an AI can offer and what the extension of human awareness can offer is made obvious.

I believe that we haven't even begun to scratch the surface of what it means to be human, and so exploring machine consciousness is futile/premature. Let's not ascribe consciousness to machines when we haven't fully comprehended the capacities of our own native consciousness.

Let's get a handle on our own potentials before we begin to confer them on machines we design. If we don't, then the judgements that the Singularity may make on us may actually be justified!? 8O

u.
Tho' all the maps of blood and flesh
Are posted on the door,
There's no one who has told us yet
What Boogie Street is for.
peter
The Gap Into Spam
Posts: 11577
Joined: Tue Aug 25, 2009 10:08 am
Location: Another time. Another place.
Been thanked: 6 times

Post by peter »

Well, I'd be quite happy just to be able to understand what it is that I currently know, but that's by the by :lol: .

Asimov [clever guy that he was] came up with his 'rules' to ensure no 'robot' ever harmed a human, but those rules were so full of logical inconsistencies as to be virtually worthless. And it always strikes me that were we to develop anything remotely corresponding to what we consider to be AI, it would per se need to be capable of self-improvement, and would thus, at will, be quite capable of overriding or circumventing any program-related safeguards we had put in place anyway.
The truth is a Lion and does not need protection. Once free it will look after itself.

....and the glory of the world becomes less than it was....
'Have we not served you well'
'Of course - you know you have.'
'Then let it end.'

We are the Bloodguard
Hashi Lebwohl
The Gap Into Spam
Posts: 19576
Joined: Mon Jul 06, 2009 7:38 pm

Post by Hashi Lebwohl »

Vraith wrote:
I don't want the DATA...I want to KNOW MORE.
You are describing real intelligence, which cannot be artificially created or enhanced no matter how fast and/or powerful your internal processors might be. Either you have it or you do not.

wayfriend raises some excellent points--if a computer connection exists between my brain and some implant then there is a vulnerability to intrusion via the software, either in the OS or in the drivers running the wetware.
The Tank is gone and now so am I.
Zarathustra
The Gap Into Spam
Posts: 19636
Joined: Tue Jan 04, 2005 12:23 am

Post by Zarathustra »

Just because your brain was enhanced with computers doesn't necessarily mean it would be hackable in the sense that we usually think of it. For that to happen, your brain's computer would have to be connected to the Internet or have some wireless receiving capacity. You could easily enhance the brain's capacity without connecting it to anything external ... just like a pacemaker (though even pacemakers are starting to have wireless capabilities). That might seem to limit us to our own thoughts and defeat the purpose of having "the Internet in my head," but that's just an argument about the input device. We could load whatever info we wanted, either through the usual channels (think of Data from Star Trek being able to read super fast) or by discretely loading info that has been thoroughly checked for viruses. Or, we can simply increase our brain's speed/memory/connections without loading any info (software) at all. Increasing our brain's hardware electronically wouldn't make us any more hackable than we already are. Once we get to such a point, we might already have devices that can directly control the electronic activity of the brain externally, implants or not.

Adding RAM to your computer doesn't increase its vulnerability. Nor does adding a larger hard drive. Computers were extremely useful for decades before we ever connected them to each other.

As for AI, I think if we can actually discover the secret to sentience and intelligence to such a degree that we can recreate it ourselves, we'll find that it's very familiar. In fact, it will think a lot like we do, because the consciousness and intelligence we build (after discovering how ours works) will be in our own image. But I think in phenomenological terms, there are general features of consciousness that any sufficiently sentient/intelligent creature will share. Intelligence is an awareness of meaning in reality. Reality itself will dictate the nature of intelligence, artificial or not.

What we need to fear is not intelligent computers, but insane computers. The more reality-based their consciousness is, the less we need to fear. Error correction is the key.
Joe Biden … putting the Dem in dementia since (at least) 2020.
peter
The Gap Into Spam
Posts: 11577
Joined: Tue Aug 25, 2009 10:08 am
Location: Another time. Another place.
Been thanked: 6 times

Post by peter »

Zarathustra wrote: As for AI, I think if we can actually discover the secret to sentience and intelligence to such a degree that we can recreate it ourselves, we'll find that it's very familiar. In fact, it will think a lot like we do

And therein, possibly, lies the root of our need to fear [re the 'insane' computer comment made later]. ;)

Re the possibility of 'wet-ware - hard-ware - soft-ware' integration: have we any actual evidence that there is real correspondence between the way our brain functions [down at the nuts-and-bolts level] and the way computers function, such that any actual integration of the type you refer to would ever be possible? Might we be missing the point that what we have in computers is a 'simulation' of 'mental activity', a 'faux' copy of the real thing, such that to attempt to mix the two is to mix chalk with cheese?
The truth is a Lion and does not need protection. Once free it will look after itself.

....and the glory of the world becomes less than it was....
'Have we not served you well'
'Of course - you know you have.'
'Then let it end.'

We are the Bloodguard
peter
The Gap Into Spam
Posts: 11577
Joined: Tue Aug 25, 2009 10:08 am
Location: Another time. Another place.
Been thanked: 6 times

Post by peter »

[Wow! That went wrong somewhere, but I might as well quote myself - hell, who else is going to ;)]
The truth is a Lion and does not need protection. Once free it will look after itself.

....and the glory of the world becomes less than it was....
'Have we not served you well'
'Of course - you know you have.'
'Then let it end.'

We are the Bloodguard
Zarathustra
The Gap Into Spam
Posts: 19636
Joined: Tue Jan 04, 2005 12:23 am

Post by Zarathustra »

At this point, I agree that it would be like mixing chalk/cheese. Computers based on today's "understanding" of consciousness/intelligence and today's computer technology would merely be a simulation at best. That's why I think AI will not be conscious for a very long time, and hence not very intelligent, if at all. But we'll solve that puzzle eventually. Not in the next few decades.

However, I do think it would be possible to build artificial circuits in our brain to offload certain tasks like memory, math, processing, etc. We have already had some success in building artificial "circuits." The cochlear implant is such a device. There were others mentioned in The Future of the Mind. We already enhance our brains with computers ... the only difference is that the interface is still external. The same thing could be done with internal interfaces. We'll continue to shrink that divide between ourselves and computers, even though essentially it will still be two separate things, brain + computer. Our consciousness itself won't run on the chips, not for a very long time (as I said above).
Joe Biden … putting the Dem in dementia since (at least) 2020.
peter
The Gap Into Spam
Posts: 11577
Joined: Tue Aug 25, 2009 10:08 am
Location: Another time. Another place.
Been thanked: 6 times

Post by peter »

I think my wife believes my consciousness runs on chips already :lol: and she may well be correct.
The truth is a Lion and does not need protection. Once free it will look after itself.

....and the glory of the world becomes less than it was....
'Have we not served you well'
'Of course - you know you have.'
'Then let it end.'

We are the Bloodguard
Ur Dead
The Gap Into Spam
Posts: 2295
Joined: Tue Sep 12, 2006 1:17 am

Post by Ur Dead »

Put a million people to work developing an AI, or put 100.
The chances are that the million people will come up with it first, over the 100.
But the chances are also greater of it having a million flaws versus 100 flaws,
and it will take 10,000 times longer to work out the million flaws versus the one hundred.

Everybody has a preconceived notion of what needs to be included for an AI to work.
No two people are exactly alike. So if I can't get my idea put in, why should I work on it? I could do something that achieves the betterment in a more roundabout way.

The idea of an AI solving humanity's problems and making a better world cannot be a solution until all of humanity has reached the same conscious conclusion.
No two people are alike.
Catch-22.
What's this silver looking ring doing on my finger?
Vraith
The Gap Into Spam
Posts: 10621
Joined: Fri Nov 21, 2008 8:03 pm
Location: everywhere, all the time

Post by Vraith »

Ur Dead wrote: The idea of an AI solving humanity's problems and making a better world cannot be a solution until all of humanity has reached the same conscious conclusion.
No two people are alike.
Catch-22.
Missed that. Maybe...maybe not...inherent unpredictability/uncertainty in it...

But, there are definitely certain classes of problems that an AI---or maybe even something well short of an AI---could solve or help us solve.
And solving them would make it possible to be more human generally, and each person specifically.
For instance, every problem that is based on physical resources/limits could be solved...
Every health problem could be solved...which clears out another enormous swath of waste, expense, and death.
Getting rid of those kinds of things makes really being one's self, being unique, being someone in pursuit of the things that are distinctly human [and wasting less time and resources doing the things that any beast---or even mindless viruses---can do] more possible and more meaningful. [I suppose it might also enable the opposite---everyone being the same, but I think that unlikely]

Anyway, I came here cuz there was an article about a self-taught poker-playing machine. And it plays the particular game perfectly.
What’s more, the program — dubbed Cepheus — is self-taught. Over two months, it played trillions of hands against itself. It learned what worked, what didn’t, and it improved. The game is now “solved” in the sense that you could play poker against Cepheus all day, every day for a lifetime and not be able to distinguish the program from the Platonic ideal of a poker player. Don’t believe it? You can play with Cepheus online.
The interesting thing, though, besides the game, is that the form of the algorithm/code and the self-teaching aspect opens up a host of possible applications---

This should take you to play against the machine, if you are so inclined

poker.srv.ualberta.ca/
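For anyone curious what 'played trillions of hands against itself and learned from its mistakes' looks like in miniature, below is a toy regret-matching self-play loop in Python, on rock-paper-scissors rather than poker. It's only a distant cousin of the CFR+ method the Cepheus team actually used, and all of it is my own sketch, not their code:

[code]
import random

# Toy self-play via regret matching on rock-paper-scissors.
# A distant cousin of the idea behind Cepheus; not the real CFR+ algorithm.

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
# PAYOFF[a][b] = what playing a earns against b (symmetric zero-sum game)
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def strategy(regrets):
    """Mix actions in proportion to positive regret; play uniformly if none."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(iterations=200_000):
    regrets = [[0.0] * ACTIONS for _ in range(2)]
    strat_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strats = [strategy(regrets[p]) for p in (0, 1)]
        moves = [random.choices(range(ACTIONS), weights=strats[p])[0] for p in (0, 1)]
        for p in (0, 1):
            opp = moves[1 - p]
            for a in range(ACTIONS):
                # regret = what action a would have earned minus what we actually earned
                regrets[p][a] += PAYOFF[a][opp] - PAYOFF[moves[p]][opp]
                strat_sum[p][a] += strats[p][a]
    # The *average* strategy converges toward the equilibrium (1/3, 1/3, 1/3).
    return [[s / sum(row) for s in row] for row in strat_sum]

print(train())
[/code]

The point isn't the poker (or the rock-paper-scissors): nothing in that loop knows anything about the game beyond the payoff table, which is why the same self-teaching trick opens up so many other applications.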
[spoiler]Sig-man, Libtard, Stupid piece of shit. change your text color to brown. Mr. Reliable, bullshit-slinging liarFucker-user.[/spoiler]
the difference between evidence and sources: whether they come from the horse's mouth or a horse's ass.
"Most people are other people. Their thoughts are someone else's opinions, their lives a mimicry, their passions a quotation."
the hyperbole is a beauty...for we are then allowed to say a little more than the truth...and language is more efficient when it goes beyond reality than when it stops short of it.
peter
The Gap Into Spam
Posts: 11577
Joined: Tue Aug 25, 2009 10:08 am
Location: Another time. Another place.
Been thanked: 6 times

Post by peter »

Wow - that would be a dangerous program in the hands of the online gambling fraternity [I assume the game linked to is a non-money affair, played for the interest value of challenging such a machine]. The risk is that the algorithms developed for this program could be 'scaled back' to make it near impossible to beat - but not quite. This would be an irresistible lure to some players and could cost them big time.
The truth is a Lion and does not need protection. Once free it will look after itself.

....and the glory of the world becomes less than it was....
'Have we not served you well'
'Of course - you know you have.'
'Then let it end.'

We are the Bloodguard