Hawking warns of the dangers of AI

Technology, computers, sciences, mysteries and phenomena of all kinds, etc., etc. all here at The Loresraat!!

Moderator: Vraith

User avatar
Fist and Faith
Magister Vitae
Posts: 23649
Joined: Sun Dec 01, 2002 8:14 pm
Has thanked: 6 times
Been thanked: 33 times

Post by Fist and Faith »

Pretty much what I was getting at. The claims that, "in a milestone for artificial intelligence, a computer has beaten a human champion at a strategy game that requires 'intuition' rather than brute processing power to prevail," and that the search space is "too enormous and too vast for brute force approaches to have any chance," are incorrect. They could not have programmed intuition into AlphaGo. It must be brute processing power that wins the games.
All lies and jest
Still a man hears what he wants to hear
And disregards the rest
-Paul Simon
User avatar
wayfriend
.
Posts: 20957
Joined: Wed Apr 21, 2004 12:34 am
Has thanked: 2 times
Been thanked: 4 times

Post by wayfriend »

I think that hoping an AI can become sentient is like asking if a program that simulates weather patterns will ever make rain.

So as long as you're talking about AI being very smart, I am good with that. We can simulate intelligence, I have no doubt. But in the end, it's just simulated; it's not real. It's a program that models thought, but the 'thoughts' are just blobs of data attached to subroutines. They don't actually 'do' anything -- the subroutines just change the numbers and then we say 'that number means THIS!'.

That being said, 'intuition' is just applying predictive analysis to data outside the domain. E.g. "in similar situations, my opponent usually moves this way" and "when he is behind, my opponent will make more of these kinds of moves".

So if this program played the guy several times before it won ... and I think it did ... then, yeah, it could use something called "intuition".
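
A toy sketch of the kind of thing I mean, in Python (the situations and moves are made up for illustration; this is not how AlphaGo is actually built):

Code:
from collections import Counter, defaultdict

# Past games: (situation, move the opponent made). Made-up data.
history = [
    ("behind", "invade corner"),
    ("behind", "invade corner"),
    ("behind", "attack group"),
    ("ahead", "defend territory"),
    ("ahead", "defend territory"),
]

# Tally how often the opponent made each move in each kind of situation.
tendencies = defaultdict(Counter)
for situation, move in history:
    tendencies[situation][move] += 1

def guess_next_move(situation):
    """Predict the opponent's most likely move in a similar situation."""
    seen = tendencies.get(situation)
    if not seen:
        return None  # no similar situation on record
    return seen.most_common(1)[0][0]

print(guess_next_move("behind"))  # -> 'invade corner'

That's all 'intuition' needs to be here: prediction from data the program has already seen.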
.
User avatar
Fist and Faith
Magister Vitae
Posts: 23649
Joined: Sun Dec 01, 2002 8:14 pm
Has thanked: 6 times
Been thanked: 33 times

Post by Fist and Faith »

Nothing but speculation, of course, but I suspect sentience will emerge, somewhere, eventually. It happened on Earth, eh? A fairly big spectrum of awareness. And here we are, seriously trying to make it happen, with a different medium, using as many approaches as we can think of. I guess it's possible that, with enough trial and error, someone will actually figure it out. But I'm thinking more along the lines of Robert Sawyer's WWW trilogy, and the Next Generation episode called Emergence.
All lies and jest
Still a man hears what he wants to hear
And disregards the rest
-Paul Simon
User avatar
Avatar
Immanentizing The Eschaton
Posts: 61746
Joined: Mon Aug 02, 2004 9:17 am
Location: Johannesburg, South Africa
Has thanked: 15 times
Been thanked: 21 times

Post by Avatar »

Hashi Lebwohl wrote: Given that Go is a much more open and complex game than chess, there should be a higher probability of a human being able to defeat a computer, even though the computer is able to process the 1,000 most likely next moves in under a second.
The computer could always process the next 1,000 moves in under a second. The thing is, with Go, that ability gives the computer no actual advantage.

All those programmed moves, all those games against itself...they almost don't matter. Like snowflakes, no two games of Go are ever identical.

No...knowing that somebody once played in that exact spot and won is meaningless, without the capacity to recognise why doing so was advantageous or otherwise.

The secret to Go, and the reason computers have always been bad at it, is pattern recognition. It's understanding that a random and unique pattern of stones contains a potential key to victory or territory gain or loss. By themselves, the positions mean little. It's what those positions may perhaps develop into.

It's not processing power per se. Any computer could run through every possible move faster than we can talk about it. It's picking the right move out of those thousands of permutations. Up until now, the computer could not do that, because in Go, there is no specific "right" move at any given time. What players aim for is the best move...the most promising one, the most elegant one.

Is it AI in itself? No it's not. But it's the foundation of it. Perhaps intuition is not the right word...it's more like the recognition of potential.
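
Roughly this shape, if you sketched it in Python (the scoring function here is a made-up placeholder; the real point is that listing the moves is trivial and the evaluation is everything):

Code:
import random

def legal_moves(board):
    # On a 19x19 board, nearly every empty point is a candidate move.
    return [(r, c) for r in range(19) for c in range(19) if board[r][c] is None]

def estimate_potential(board, move):
    # Placeholder. A real Go engine has to learn this judgement of what a
    # pattern of stones may develop into; nobody can write it down by hand.
    return random.random()

def choose_move(board):
    # Enumerating candidates takes no time at all.
    # Ranking them well is the whole game.
    return max(legal_moves(board), key=lambda m: estimate_potential(board, m))

empty_board = [[None] * 19 for _ in range(19)]
print(choose_move(empty_board))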

--A
User avatar
Fist and Faith
Magister Vitae
Posts: 23649
Joined: Sun Dec 01, 2002 8:14 pm
Has thanked: 6 times
Been thanked: 33 times

Post by Fist and Faith »

Sounds like a bunch of hooey to me. :D There's debate on whether or not intuition exists. If it exists, it's certainly not so clearly understood that it can be coded. Meaning they did not program it into the computer. If it exists, a human must surely have more of it than a computer that could not have been programmed with it. Meaning intuition is not how the computer is winning.

The only thing they did was program as much knowledge about Go as they could into a computer that has high processing power, and let it gain massive - inhumanly massive - experience by playing against itself. Then it started playing people. AlphaGo's opponent is gaining as much experience against it as it is gaining against him. Meaning experience against the opponent is not how the computer is winning.

What's left to explain AlphaGo's superiority? The processing power. It may not have sufficient processing power to consider every one of the incomprehensibly massive number of possible positions, but it obviously has enough to consider a much, much, much larger number of them than the human opponent can, and to calculate the odds on what its opponent, whose tactics it now has data on, will do.
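
Some rough numbers, using the usual published estimates for Go (a branching factor of roughly 250 moves and games of roughly 150 moves), just to show what I mean by "not every position, but vastly more than a human":

Code:
# Back-of-the-envelope arithmetic using published estimates:
# Go offers about 250 moves per turn over about 150 turns.
full_tree = 250 ** 150
print(f"full game tree: about 10^{len(str(full_tree)) - 1} lines of play")

# A machine examining a billion positions a second, non-stop for a year:
examined = 10 ** 9 * 60 * 60 * 24 * 365
print(f"machine in a year: about 10^{len(str(examined)) - 1} positions")

# So exhaustive search is out of the question -- but 10^16-ish positions is
# still incomprehensibly more looking-ahead than any human ever does.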
All lies and jest
Still a man hears what he wants to hear
And disregards the rest
-Paul Simon
User avatar
Wosbald
A Brainwashed Religious Flunkie
Posts: 6135
Joined: Sat Feb 07, 2015 1:35 am
Been thanked: 2 times

Post by Wosbald »

+JMJ+
wayfriend wrote:I think that hoping an AI can become sentient is like asking if a program that simulates weather patterns will ever make rain.

So as long as you're talking about AI being very smart, I am good with that. We can simulate intelligence, I have no doubt. But in the end, it's just simulated; it's not real. It's a program that models thought, but the 'thoughts' are just blobs of data attached to subroutines. They don't actually 'do' anything -- the subroutines just change the numbers and then we say 'that number means THIS!'.
Word.

There could never be real AI, since one would have to construct an "open-ended systematic", which is a contradiction in terms. A program which has a core of indeterminacy is no longer a program.


User avatar
wayfriend
.
Posts: 20957
Joined: Wed Apr 21, 2004 12:34 am
Has thanked: 2 times
Been thanked: 4 times

Post by wayfriend »

I was hunting for a better analogy.

Suppose someone draws a picture of a boy. Is it a boy? No.

You can draw successive pictures, and display them in a sequence, providing the appearance of motion. Is it a boy? No.

You can use a computer to render images that are so lifelike that they look photographic. And you can have it render images in response to input from a person, so that it seems to be responding to you. Is it a boy? No.

You can create a virtual reality in which this boy appears. You can make the responses so lifelike that they pass the Turing test. Is it now a boy? No.

This is what creating software-based AI is like. No matter how good you make it, it's not going to be alive, because it's not a matter of quality or degree. It's a matter of something being real vs something being simulated. No matter how good the simulation is, it's a simulation - it's not real, it's drawn to appear real. Something, somewhere, draws it. So something, somewhere, is the "puppetmaster".

This is why I like Frank Herbert's Destination: Void: they approach the creation of an artificial intelligence not through successive stages of simulation, but by trying to create something physical.
.
User avatar
Fist and Faith
Magister Vitae
Posts: 23649
Joined: Sun Dec 01, 2002 8:14 pm
Has thanked: 6 times
Been thanked: 33 times

Post by Fist and Faith »

I'm not sure about "alive". But I don't see any problem with awareness/consciousness. There's some sort of firmware in our brains, eh? Like I said above, I don't know that we'll ever understand it enough to be able to write a program for it, but I don't know why it should be less possible in the electronic medium than it is in the biological medium.
All lies and jest
Still a man hears what he wants to hear
And disregards the rest
-Paul Simon
User avatar
wayfriend
.
Posts: 20957
Joined: Wed Apr 21, 2004 12:34 am
Has thanked: 2 times
Been thanked: 4 times

Post by wayfriend »

Fist and Faith wrote:I'm not sure about "alive". But I don't see any problem with awareness/consciousness. There's some sort of firmware in our brains, eh? Like I said above, I don't know that we'll ever understand it enough to be able to write a program for it, but I don't know why it should be less possible in the electronic medium than it is in the biological medium.
What I am saying is that software isn't "a medium" in that sense - never was, and can never be.

Nothing simulated by software actually exists. We all know this.

If it doesn't actually exist, I don't think it can ever attain qualities like consciousness. It can only act like it has, when the program is running.
.
User avatar
Fist and Faith
Magister Vitae
Posts: 23649
Joined: Sun Dec 01, 2002 8:14 pm
Has thanked: 6 times
Been thanked: 33 times

Post by Fist and Faith »

No, software is not a medium. Our minds are not a medium. They are the result of software and firmware running on the biological medium of our brains. I don't see why a different kind of awareness/consciousness could not exist as a result of firmware and software running on the electronic medium of computers.
All lies and jest
Still a man hears what he wants to hear
And disregards the rest
-Paul Simon
User avatar
Avatar
Immanentizing The Eschaton
Posts: 61746
Joined: Mon Aug 02, 2004 9:17 am
Location: Johannesburg, South Africa
Has thanked: 15 times
Been thanked: 21 times

Post by Avatar »

Fist and Faith wrote:There's debate on whether or not intuition exists. If it exists, it's certainly not so clearly understood that it can be coded. Meaning they did not program it into the computer.
Nobody programmed it into us either...maybe it just...arises...

--A
User avatar
peter
The Gap Into Spam
Posts: 11577
Joined: Tue Aug 25, 2009 10:08 am
Location: Another time. Another place.
Been thanked: 6 times

Post by peter »

Increasingly, neurologists are abandoning the old model of the brain as an organ divided into areas, with this area doing this and that area doing that. In its place is emerging a picture of brain function - including, perhaps most significantly, the higher functions of self-awareness and abstract thought - as a diffuse activity that trades in only one coin, that of electrochemical signals, and which, by virtue of the sheer complexity and number of connections in its neural network, can sort those signals into meaningful sensation so long as they arrive with some discernible structure. This is how the miracle of cochlear implants providing hearing for the deaf is achieved: you don't have to connect up the individual nerve fibres to the brain - as long as you get the signals in, the brain does the rest.

The point relating to this post is, as Av implies above, that in this model the functions of intuition, consciousness, abstraction and the like are emergent properties of the complexity of the network. The neurologist I saw putting forward these ideas said that, taken to their extreme, one could envisage, say, a huge city or even a whole world developing these same emergent properties simply by virtue of the myriad human connections that make it up in toto. At some point our networked creations within computer programs will pass this threshold - this is inevitable. The AI we strive for will create itself; it will not arise when we decide it should.
The truth is a Lion and does not need protection. Once free it will look after itself.

....and the glory of the world becomes less than it was....
'Have we not served you well'
'Of course - you know you have.'
'Then let it end.'

We are the Bloodguard
User avatar
Fist and Faith
Magister Vitae
Posts: 23649
Joined: Sun Dec 01, 2002 8:14 pm
Has thanked: 6 times
Been thanked: 33 times

Post by Fist and Faith »

Fist and Faith wrote:Nothing but speculation, of course, but I suspect sentience will emerge, somewhere, eventually. It happened on Earth, eh? A fairly big spectrum of awareness. And here we are, seriously trying to make it happen, with a different medium, using as many approaches as we can think of. I guess it's possible that, with enough trial and error, someone will actually figure it out. But I'm thinking more along the lines of Robert Sawyer's WWW trilogy, and the Next Generation episode called Emergence.
It would seem I agree with you two. I'm just not agreeing that it has happened in AlphaGo with intuition. I think intuition is total nonsense. I've had times when I've had a feeling about something, and chosen correctly from among many possibilities. But there have been a LOT more times when I've chosen incorrectly, despite a strong feeling. That's how it always goes, whether we're discussing intuition, ESP, divine intervention, or whatever. There's a small percentage of times it works out the way we wanted, and a large percentage when it doesn't.
All lies and jest
Still a man hears what he wants to hear
And disregards the rest
-Paul Simon
User avatar
peter
The Gap Into Spam
Posts: 11577
Joined: Tue Aug 25, 2009 10:08 am
Location: Another time. Another place.
Been thanked: 6 times

Post by peter »

Certainly true, Fist - I'm sorry I missed your above quote; I scanned the thread without giving it the attention it deserved and missed the point you made. But I'm not sure I agree that intuition is merely happenstance in this manner; I think we perceive many 'signals' that we are not consciously aware of, and these to a degree underlie what we experience as intuition. We have evolved to place a degree of trust in these feelings because at times they do work in our interest, and in our raw state of existence it was better to trust them and be wrong than to ignore them and suffer the consequences of their being right.
The truth is a Lion and does not need protection. Once free it will look after itself.

....and the glory of the world becomes less than it was....
'Have we not served you well'
'Of course - you know you have.'
'Then let it end.'

We are the Bloodguard
User avatar
Avatar
Immanentizing The Eschaton
Posts: 61746
Joined: Mon Aug 02, 2004 9:17 am
Location: Johannesburg, South Africa
Has thanked: 15 times
Been thanked: 21 times

Post by Avatar »

peter wrote:...the functions of intuition, consciousness, abstraction and the like are in this model emergent properties of the complexity of the network.
Heinlein was implying this in the 60's, in The Moon Is A Harsh Mistress, with his self-aware computer "Mike", and the first time I read it, it rang so true that it's a view I've taken ever since.

Card used the concept as well in his Ender books for Jane, his emergent AI.

The whole is greater than the sum of its parts. :D

--A
User avatar
peter
The Gap Into Spam
Posts: 11577
Joined: Tue Aug 25, 2009 10:08 am
Location: Another time. Another place.
Been thanked: 6 times

Post by peter »

Did these authors write with any kind of scientific background from which to draw the idea of emergent AI, or was it just good old prescient thinking?
The truth is a Lion and does not need protection. Once free it will look after itself.

....and the glory of the world becomes less than it was....
'Have we not served you well'
'Of course - you know you have.'
'Then let it end.'

We are the Bloodguard
User avatar
wayfriend
.
Posts: 20957
Joined: Wed Apr 21, 2004 12:34 am
Has thanked: 2 times
Been thanked: 4 times

Post by wayfriend »

Most authors I have seen take the view that simply making computers big enough and fast enough will trigger some kind of "magic" that causes consciousness. If there's any underlying assumption at all, it's the notion that if you can make a calculation-device so complex that it becomes unfathomable, it will begin to spawn internal processes that are akin to "independent thought" and self awareness.

This is often but not always decorated with romantic notions about prerequisites like needing someone to love, or being able to kill, or feeling threatened, or something like that.

I believed in this kind of thing for a while, but eventually I came to realize it was magical thinking, because there's always a core of "and then it just happens" in the middle of it.

Don't get me wrong - I think the inclusion of AI in sci-fi is important and useful. Cautionary tales included. I just don't respect the origin stories, that's all.
.
User avatar
Fist and Faith
Magister Vitae
Posts: 23649
Joined: Sun Dec 01, 2002 8:14 pm
Has thanked: 6 times
Been thanked: 33 times

Post by Fist and Faith »

wayfriend wrote:Most authors I have seen take the view that simply making computers big enough and fast enough will trigger some kind of "magic" that causes consciousness. If there's any underlying assumption at all, it's the notion that if you can make a calculation-device so complex that it becomes unfathomable, it will begin to spawn internal processes that are akin to "independent thought" and self awareness.

This is often but not always decorated with romantic notions about prerequisites like needing someone to love, or being able to kill, or feeling threatened, or something like that.

I believed in this kind of thing for a while, but eventually I came to realize it was magical thinking, because there's always a core of "and then it just happens" in the middle of it.

Don't get me wrong - I think the inclusion of AI in sci-fi is important and useful. Cautionary tales included. I just don't respect the origin stories, that's all.
In WWW: Wake, Robert J Sawyer does an excellent job of making awareness/consciousness emerge within the internet in an organic way. Whether or not it could happen that way... You may be right. But you may be wrong. In The Accidental Mind, David J Linden says that the common sayings about the human brain being an elegant, logical structure are way off. Our brains are a bunch of parts slapped together. We certainly wouldn't design anything so sloppy. It's rather amazing that it does what it does. Our consciousness is emergent.
All lies and jest
Still a man hears what he wants to hear
And disregards the rest
-Paul Simon
User avatar
wayfriend
.
Posts: 20957
Joined: Wed Apr 21, 2004 12:34 am
Has thanked: 2 times
Been thanked: 4 times

Post by wayfriend »

I don't think Hawking worried about the danger of AIs becoming "racist assholes". But this is breaking the internet this week, so ...
Twitter taught Microsoft's AI chatbot to be a racist asshole in less than a day

It took less than 24 hours for Twitter to corrupt an innocent AI chatbot. Yesterday, Microsoft unveiled Tay -- a Twitter bot that the company described as an experiment in "conversational understanding." The more you chat with Tay, said Microsoft, the smarter it gets, learning to engage people through "casual and playful conversation."

Unfortunately, the conversations didn't stay playful for long. Pretty soon after Tay launched, people started tweeting the bot with all sorts of misogynistic, racist, and Donald Trumpist remarks. And Tay - being essentially a robot parrot with an internet connection - started repeating these sentiments back to users, proving correct that old programming adage: flaming garbage pile in, flaming garbage pile out.

TayTweets: @NYCitizen07 I fucking hate feminists and they should all die and burn in hell.

TayTweets: @brightonus33 Hitler was right I hate the jews.

Now, while these screenshots seem to show that Tay has assimilated the internet's worst tendencies into its personality, it's not quite as straightforward as that. Searching through Tay's tweets (more than 96,000 of them!) we can see that many of the bot's nastiest utterances have simply been the result of copying users. If you tell Tay to "repeat after me," it will - allowing anybody to put words in the chatbot's mouth.

TayTweets: @godblessameriga WE'RE GOING TO BUILD A WALL AND MEXICO IS GOING TO PAY FOR IT!

However, some of its weirder utterances have come out unprompted. The Guardian picked out a (now deleted) example when Tay was having an unremarkable conversation with one user (sample tweet: "new phone who dis?"), before it replied to the question "is Ricky Gervais an atheist?" by saying: "ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism."

[...] It's a joke, obviously, but there are serious questions to answer, like how are we going to teach AI using public data without incorporating the worst traits of humanity? If we create bots that mirror their users, do we care if their users are human trash? There are plenty of examples of technology embodying - either accidentally or on purpose - the prejudices of society, and Tay's adventures on Twitter show that even big corporations like Microsoft forget to take any preventative measures against these problems.

[link]
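
It's not hard to see why it went wrong so fast. A toy sketch of a parrot-bot (this 'repeat after me' handler is my own guess at the failure mode, not how Tay was actually written):

Code:
# Toy parrot-bot: it "learns" by storing whatever people say to it.
learned_phrases = ["new phone who dis?"]

def reply(tweet):
    # Anyone can put words in the bot's mouth...
    if tweet.lower().startswith("repeat after me:"):
        phrase = tweet[len("repeat after me:"):].strip()
        learned_phrases.append(phrase)  # ...and the bot keeps them
        return phrase
    # Otherwise it echoes back something it picked up from other users.
    return learned_phrases[-1]

print(reply("repeat after me: garbage in, garbage out"))
print(reply("hello?"))  # the poisoned phrase comes back unprompted

No filter, no model of what the words mean -- flaming garbage pile in, flaming garbage pile out.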
.
User avatar
Hashi Lebwohl
The Gap Into Spam
Posts: 19576
Joined: Mon Jul 06, 2009 7:38 pm

Post by Hashi Lebwohl »

Engineers at Anki are proud to present Cozmo, an interactive toy robot which can learn by interacting with you and develop a quasi-personality.
All that power goes into making the robot react to its surroundings in an organic way. Cozmo snores while it sleeps, and an emotion engine allows it display an array of different expressions on its "face", meaning the palm-sized robot can look impatient when it wants to play a game, and then also display anger if it's beaten at that game.

Speaking of games, it's possible to choose between a range of different ones to play with Cozmo through the companion smartphone app. After a while, though, the app isn't necessarily required, because Cozmo will recognize your face and remember which games you most enjoy.

The more time you spend with it, the more games and actions you unlock, too. Developers even say it will give you a nudge if you're not paying it enough attention, like a tiny robot boyfriend or girlfriend.
Great--just what we need, a robot that becomes overly attached. Sorry, Cozmo, but if I wanted someone to be that attached I would prefer Laina to you.
So, how did the team at Anki get their robot to express emotions like a human? By taking inspiration from the world of animation. Those eyes, for example, look remarkably similar to the eyes on Eve from Wall-E, and the movement patterns are programmed with the same Maya software professional animators use.

Rather than having hard and fast rules for each action, animators can set a range within which the action can take place. There are also little touches, like the eyes and head following you as you move, or the eyes flashing with recognition when Cozmo sees someone it knows. These are intended to make interacting with Cozmo feel more organic, and make us more likely to connect with the robot.
Again, sorry, but if the little shit is going to get "angry" when it loses the game we are playing (by the way, you just lost the game--now there is a blast from the past) then I would rather not play with it. I am also not going to make any comments about looking forward to playing games with my robotic companion....

For a fun psychological experiment I should get a Cozmo, though, then abuse and neglect it to see how it reacts. "Sorry, Cozmo, I didn't mean to leave you in the attic alone for 4 months, you disgusting, miserable, little pile of cat vomit. I am sorry I ever bought you and brought you home. See you next Christmas."

That being said, I cannot reduce the probability of Talky Tina down to zero, so I probably won't.
The Tank is gone and now so am I.