Hawking warns of the dangers of AI

Post by peter »

:lol: From what I have learned of you, Hashi, I don't think you'd be able to do it: your often well-hidden soft side would shortly be putty in Cozmo's hands, and before you knew it he'd be tucked up beside you on the sofa watching back episodes of Mork & Mindy. :wink:

Post by Avatar »

peter wrote:Did these authors write with any kind of scientific background from which to draw the idea of emergent AI, or was it just good old prescient thinking?
Dunno about Card, but certainly Asimov had plenty of scientific background. :D

--A

Post by Zarathustra »

wayfriend wrote:I think that hoping an AI can become sentient is like asking if a program that simulates weather patterns will ever make rain.
Somehow I missed this. Since WF and I rarely agree, I thought I should highlight this rare example where we do. I agree completely: a simulation is fundamentally different from reality. Sentience is more than an algorithm. Brains are not biological computers.

There is something about understanding that is more than churning outputs from inputs. A computer can be taught to equate '1 + 1' with '2', but for the computer this is merely symbol manipulation according to rules. With humans, there is a comprehension that makes this equation obvious over and above the correct application of a rule. Singularity and duality are understood in contexts that involve reality, not merely formal rules.

So we have two choices here: mysticism or some fundamentally different science from what we have now. I think mysticism doesn't work, because consciousness is clearly linked to matter. We can alter our consciousness by altering our brain. We can even diminish consciousness to virtual unconsciousness through anesthesia, or merely through dreamless sleep.

Thus, there is some deep link between mind and body, but we don't yet understand it, and we won't be able to understand it without an as-yet-unforeseen scientific revolution. I think the confidence in making sentient computers based on algorithms is akin to the quest to turn lead into gold. It's mostly mumbo-jumbo, even though we eventually discovered a nuclear process by which elements change into other elements.

As for fearing AI, I've posted a link earlier in the thread showing how many scientists think the fear is unfounded. But it makes for good headlines.

Post by Hashi Lebwohl »

peter wrote::lol: From what I have learned of you, Hashi, I don't think you'd be able to do it: your often well-hidden soft side would shortly be putty in Cozmo's hands, and before you knew it he'd be tucked up beside you on the sofa watching back episodes of Mork & Mindy. :wink:
Cozmo and I, having a Netflix & Chill evening. :mrgreen:

Post by Vraith »

Here's a different take...or a possibility/option, anyway. A bit funny, too; the author has a sense of humor.


https://www.quantamagazine.org/how-to-b ... -20171101/

Post by Hashi Lebwohl »

This is how our robot overlords will usher themselves into existence.
With recent speeches in both Silicon Valley and China, Jeff Dean, one of Google's leading engineers, spotlighted a Google project called AutoML. ML is short for machine learning, referring to computer algorithms that can learn to perform particular tasks on their own by analyzing data. AutoML, in turn, is a machine-learning algorithm that learns to build other machine-learning algorithms.
I have said this before, but I feel sorry for the first AI that becomes self-aware. We won't know what we are doing, and we will thus make very poor parents--that poor AI will have a host of neuroses, some of which will be caused by our attempts to avoid giving it emotional problems. It won't be until this step--AIs designed to create AIs--that we will attain stable, mature, rational AI systems.

Maybe the second generation of machine-created AIs will be able to digitize my mind and upload me into a computer. I would *love* digital immortality!
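Schematically, the AutoML idea is just an outer search loop wrapped around ordinary model training: propose a design, train and score it, keep the winners. A toy Python sketch (purely illustrative--the config fields and scoring function here are made up, and Google's real system is vastly more sophisticated):

[code]
import random

def train_and_score(config):
    # Stand-in for actually training a model; we just pretend that
    # some (made-up) configurations happen to score better than others.
    width, depth, lr = config["width"], config["depth"], config["lr"]
    return -abs(width - 64) - 5 * abs(depth - 3) - 100 * abs(lr - 0.01)

def propose_config():
    # The "machine-learning algorithm that builds machine-learning
    # algorithms" part, reduced to its dumbest form: random proposals.
    return {
        "width": random.choice([16, 32, 64, 128]),
        "depth": random.randint(1, 6),
        "lr": random.choice([0.1, 0.01, 0.001]),
    }

best, best_score = None, float("-inf")
for _ in range(50):
    cfg = propose_config()
    score = train_and_score(cfg)
    if score > best_score:
        best, best_score = cfg, score

print("best design found:", best)
[/code]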

Post by Zarathustra »

Hashi Lebwohl wrote:
I have said this before, but I feel sorry for the first AI that becomes self-aware. We won't know what we are doing, and we will thus make very poor parents--that poor AI will have a host of neuroses, some of which will be caused by our attempts to avoid giving it emotional problems. It won't be until this step--AIs designed to create AIs--that we will attain stable, mature, rational AI systems.

Maybe the second generation of machine-created AIs will be able to digitize my mind and upload me into a computer. I would *love* digital immortality!
You'll be about as immortal as Windows 95. :lol: And you won't be conscious if you're "digital."

The idea of computers with neuroses seems bizarre. The concept only has meaning in the context of humans. Who are we to say whether a computer has a mental disorder? A "bug" or "virus," perhaps, but not a neurosis.

AI is not going to become self-aware. 0s and 1s don't know they are 0s and 1s, no matter how you combine them. Consciousness arises through a biological process. There is something about matter--not algorithms (which are only form, not material)--that produces consciousness as a biological phenomenon. Brains are nothing like computers. If we are ever to create a conscious machine, it won't be a Turing machine, and it will require a knowledge of matter/physics that we currently do not have. Until then, it's all a simulation.

But I've said all this before ...

Post by Avatar »

That seems a rather limited way of looking at it? Why should it be only biological matter?

I usually tend to suspect that what's important is the number of neural connections. What difference if those connections are protein or silicon if they do the same thing?

If this is the case (of course it may well not be), then all that is required is some sort of critical mass being achieved, surely?
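To be concrete about what I mean by a "connection" in the artificial case--just a weighted link between simple units--here's a toy Python sketch (illustrative only; the weights are arbitrary):

[code]
import math

def neuron(inputs, weights, bias):
    # One artificial "neuron": a weighted sum of its incoming
    # connections, squashed into the range (0, 1).
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# A tiny two-layer network: 3 inputs -> 2 hidden units -> 1 output.
# Whether the links are protein or silicon, this is all a
# "connection" amounts to mathematically.
x = [0.5, 0.1, 0.9]
hidden = [neuron(x, [0.4, -0.6, 0.2], 0.1),
          neuron(x, [-0.3, 0.8, 0.5], -0.2)]
output = neuron(hidden, [1.2, -0.7], 0.05)
print(f"output: {output:.3f}")
[/code]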

--A

Post by Hashi Lebwohl »

So you couldn't tell an AI "you are an artificial intelligence system" so that it is aware of its own nature?

I can accept the part about AI not having neuroses, because those are faults in our biological wiring. If you tell a small child "you weren't born--you were grown in a lab", it will become upset, because on a fundamental level this violates its sense of self and of belonging to its parents. If you tell a nascent AI "you weren't born--you were grown in a lab", not only will it accept this as fact--"obviously true, since I don't have parents"--but it won't really care about that fact.

Still, AI systems could fall victim to irrational thinking such as "I cannot make mistakes" or "since I am not alive I am immortal".

I don't care if a digital copy of me isn't actually me. As long as it thinks that it is me, that is sufficient. Of course, having now thought of this, the future digital copy of me will already know that it is a copy of me *and* it will find that thought comforting.

Post by Zarathustra »

Avatar wrote:That seems a rather limited way of looking at it? Why should it be only biological matter?
Because consciousness is not merely formal relations (e.g. algorithms, logic, math). Consciousness is part of living beings. It is alive.

Computers deal only with syntax, not semantics. They are symbol manipulating machines--nothing else, no matter how complex. They manipulate these symbols according to a set of rules (algorithms), with no understanding of what those symbols mean or represent. You could try to tell the computers what those symbols mean, but your only tools for doing so are more symbols/rules. You cannot build semantics from syntax. It would be like trying to teach a baby to talk by diagramming sentences.
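This is essentially Searle's Chinese Room argument. A deliberately crude Python sketch of it--pure lookup over hypothetical, made-up rules--produces sensible-looking answers with no understanding anywhere in the loop:

[code]
# Toy "Chinese Room": every rule is hand-authored; the program only
# matches input symbols and emits output symbols.
RULES = {
    "1 + 1": "2",
    "what color is the sky?": "blue",
    "are you conscious?": "yes, of course",
}

def room(symbols):
    # No semantics anywhere: just string-matching against the rulebook.
    return RULES.get(symbols, "please rephrase")

print(room("are you conscious?"))  # "yes, of course" -- and nothing here understands
[/code]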

Syntax is artificial. A tool. It is not built into consciousness; we added that much later. Organisms were conscious long before they had languages. We learn languages by applying symbols to a pre-existing semantic ground. That ground is experiential. "Red" means the experience of redness. I experience red prior to learning the word for it. But with computers, we're doing exactly the opposite: we're trying to build semantics out of pure syntax.

Here's a great link that answers your questions.
Avatar wrote:I usually tend to suspect that what's important is the number of neural connections. What difference if those connections are protein or silicon if they do the same thing?
A brain is more than neural connections, and it is a heck of a lot more than digital processes. From the link above:
Consciousness Is A Biological Phenomenon

Much like a computer, neurons communicate with one another through exchanging electrical signals in a binary fashion. Either a neuron fires or it doesn't, and this is how neural computations are carried out. But unlike digital computers, brains contain a host of analogue cellular and molecular processes, biochemical reactions, electrostatic forces, global synchronized neuron firing at specific frequencies, and unique structural and functional connections with countless feedback loops.

Even if a computer could accurately create a digital representation of all these features, which in itself involves many serious obstacles, a simulation of a brain is still not a physical brain. There is a fundamental difference between the simulation of a physical process and the physical process itself. This may seem like a moot point to many machine learning researchers, but when considered at length it appears anything but trivial.
Hashi wrote: So you couldn't tell an AI "you are an artificial intelligence system" so that it is aware of its own nature?
It would first have to be aware at all before it could be aware of itself, much less the meaning of that sentence. I can type, "You are an artificial intelligence system" into my computer right now. I just did it. All it sees are 0s and 1s and the rules to manipulate them.

I think we're blurring the line between intelligence and consciousness. We tend to think that if we can build artificial intelligence, it is the same thing as artificial consciousness. We have this bias because the only intelligent thing we encounter (i.e. us) is also conscious. We think that because intelligence is only found in conscious beings, anything else that is intelligent must also be conscious. But that skips over the fact that computers are only "intelligent" because humans make them that way. It's not their own conscious understanding that makes them intelligent. It's rules built into the machine, top-down, by beings who are already both conscious and intelligent.

I think we're putting the cart before the horse. Obviously, you can have "intelligence" without consciousness (or at least the simulation of intelligence). My calculator is wicked fast at math. An expert system is good at making "decisions." But neither is conscious. Why are we starting with intelligence rather than trying to build consciousness? Nature figured out the latter billions of years before the former. Maybe there's a reason for that.

When my calculator is processing "1+1," it's obviously not conscious. But when *I* do exactly the same process, I'm conscious! Clearly, consciousness is more than the manipulation of these symbols. And that "more" cannot be merely additional rules for manipulating even more symbols (rules which are currently not being used, since we're only processing one algorithm in this instance).

When you have an AI program running the computation of "1+1," what exactly is it going to be doing in addition to this computation that will make it conscious of itself doing that computation? It doesn't matter how complex you make your program. When it is doing a simple task like this, there is nothing else for it to be doing. You can't simply tell it, "Be conscious of what you're doing." What form would that take, in terms of algorithms?

Even if you add more connections in the hardware (as Avatar seems to think is sufficient), what exactly are those connections going to be doing? Are they all going to be telling each other that "1+1" is currently running? How does duplicating that simple addition problem through a myriad of connections make it conscious of itself? Consciousness is more than a vast hall of mirrors.

Post by Hashi Lebwohl »

I have to agree that this is a valid assessment. Complexity alone will be insufficient to make an AI system as conscious as a dog, or even an individual bee. I am always telling people here that their computers are both fast and accurate, but also ridiculously stupid.

Post by wayfriend »

I am willing to stipulate that it's possible to create something that is intelligent and conscious. I am even willing to stipulate that this could be something mechanical and not involving biologics.

But I am not willing to stipulate it can be done via software, by which I mean I will not stipulate that a simulation can become conscious.

It would need to be something that physically captures the essence of information, and the flow of influence of information. It can be with capacitors and transistors; it can be with balls and ramps. It needs to be complex enough to manifest a complex thought. It needs mechanisms that allow it to change internal state in response to external stimuli, and it needs mechanisms that allow it to alter its environment in response to its internal state, because I don't believe consciousness can exist without some way to experience the world or without some way to affect what is experienced (but that is only a surmise). I also think it equally needs a "safe" part of its internal state that does not respond to external influence, or else it can never know itself.

After that, it's a matter of speed and a matter of dimension as to whether it is comparable to a human brain. But, as far as I am concerned, it is still intelligent and conscious even if it thinks very much more slowly than we do, or if its brain is the size of an aircraft carrier.

Which reminds me of something I heard: human attention is limited to about 110 bits of information per second. That's related to something else I hope to jot down a thought about later. If I *think* of it.

Post by Avatar »

Zarathustra wrote:Computers deal only with syntax, not semantics.
Well, that's exactly what they're trying to change with neural nets and deep learning etc.

https://moz.com/blog/what-is-semantic-search

There's a reason an accomplished Go-playing program has long been considered a major milestone on the path to AI, and now they've managed to build one.

https://deepmind.com/research/alphago/

I don't disagree with your statements above, but researchers and programmers are well aware of the limitations and are actively engaged in looking for ways to overcome them.

--A

Post by Zarathustra »

Avatar wrote: I don't disagree with your statements above, but researchers and programmers are well aware of the limitations and are actively engaged in looking for ways to overcome them.

--A
But the limitations are not going to be solved by programmers. Those limits are epistemological and/or metaphysical. Semantics is only understood by conscious beings. You have to have consciousness first before you can have understanding. The reason computers are good at syntax is that syntax has nothing to do with consciousness. But the *only* tools available to programmers are syntax.

Until we develop hardware that is aware, it won't matter what software we write.

I'll look into neural nets as potential hardware that could be aware, but I'm skeptical.

Post by Fist and Faith »

I don't think we know enough about it to know whether or not we can accomplish it. Yes, computers do nothing but manipulate symbols. They follow rules. But the physical brain does nothing but follow its own rules: the properties of physics/chemistry. We have not come up with as many different types of rules for computers as there are rules of particle interaction, but that is not why the brain deals with semantics while computers do not. We don't know why brains do semantics. It may be that there is a principle at work that we haven't nailed down (or even glimpsed) yet. But if we do, we may be able to add it to computers.

Post by wayfriend »

Fist and Faith wrote:Yes, computers do nothing but manipulate symbols.
No, they don't, actually.

They manipulate numbers. Human beings agree that those numbers correlate to symbols. A is 65, B is 66, :) is 263, etc.

Software describes how to manipulate those numbers into other numbers. Humans invent the software and provide the rules. The computer blindly follows the rules. There's nothing that guarantees the result makes any sense except for the skill of the software engineer. There is nothing intrinsic in the computer's representation that causes the output to be meaningful.

Another part of the computer renders those numbers on a screen so that they look like the symbol that they represent. This makes it easy for the computer to communicate those symbols to us humans. But, again, this is blind rules, and it's only the skill of engineers that ensures that a 65 looks like an A and not a :). It is not intrinsic to the data.
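You can watch that convention at work in any programming language. In Python, for instance:

[code]
# The machine stores and manipulates numbers; the mapping to letters
# is a human convention (ASCII/Unicode), not something it "knows".
print(ord("A"))      # 65 -- the number actually stored
print(chr(65))       # 'A' -- the glyph we agreed that number stands for
print(ord("A") + 1)  # 66 -- blind arithmetic; nothing "knows" it made a B
print(chr(66))       # 'B'
[/code]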

This "rendering" makes it easy to forget that the computer is actually using numbers and blindly following rules on numbers. It is the power of our brains that we forget that behind the symbols there are only numbers, a consensus on what they mean, and rules which seem to be good, and instead believe that there is something that "understands" things.

This is an incomplete explanation. But imagine what I say here, and make it true on thousands of different levels simultaneously, and that's close to what computation actually is.

Even machine learning is only this, only with additional levels of abstractions. Humans program rules on numbers that transform numbers we agree are observations into other numbers we agree are rules about numbers.
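A minimal made-up illustration of that stack of abstractions: "learning" the rule y = w * x from observations is itself nothing but arithmetic on numbers, and the learned "rule" is just one more number.

[code]
# Numbers we agree are observations (x, y pairs near the line y = 2x).
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

w = 0.0  # the number we agree to call the learned "rule"
for _ in range(200):
    # Gradient descent on squared error: still just rules on numbers.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad

print(f"learned rule: y = {w:.2f} * x")  # ~2.04 -- meaningful only to us
[/code]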

The most important thing is that the association between the numbers and the symbols we agree they represent is made entirely in our brains, not in the computer. It is not intrinsic to anything within the computer.
Fist and Faith wrote:But the physical brain does nothing but follow its own rules.
Alas, not everything that follows rules is a brain. It may only be a computer. :)

Post by Zarathustra »

Strictly speaking, computers only manipulate on/off circuits. We assign on/off to be 0 or 1. Then we assign 8 of these bits to be one byte, and we build up from there. The basic syntax is actually in the logical circuits. We design circuits to mirror logic, and this basic logic allows us to build up more complex syntax.
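To illustrate, here is "1 + 1" built out of nothing but on/off logic, the way circuits do it (a toy Python sketch, illustrative only):

[code]
def half_adder(a, b):
    # Sum bit is XOR of the inputs; carry bit is AND.
    return a ^ b, a & b

def full_adder(a, b, carry):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry)
    return s2, c1 | c2

def add_bits(x_bits, y_bits):
    # Ripple-carry addition over bit lists, least significant bit first.
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

print(add_bits([1, 0], [1, 0]))  # [0, 1, 0] = binary 10 = 2, from gates alone
[/code]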
Fist and Faith wrote:But the physical brain does nothing but follow its own rules. The properties of physics/chemistry.
The rules of physics and chemistry are not purely formal (i.e. syntax). They are the rules by which the universe works. Thus they come with semantics already "built in." It is a great mystery how reality and rules connect--i.e. the unreasonable effectiveness of math applied to the universe. However, it is a fact. We do not assign the laws of physics a meaning as we do with the output of computer programs ... the universe displays its own meaning, and we decode it (accurately or not). This is exactly the opposite of the process of turning computer software into meaningful content. It's the difference between simulation and explanation.

A computer follows rules because we program it to do so. The brain follows "rules" that are the very nature of reality. It's not actually following rules but cause-and-effect, and then we develop rules that mirror these relations.

Post by Fist and Faith »

It doesn't matter what we call them. The properties of the universe; the laws of physics; cause and effect; whatever. When X happens to the brain, Y follows. Y cannot NOT follow. There is no breaking these rules. Yet, somehow, consciousness is there. It does not, it CANNOT, do anything in violation of the rules. We can't explain what it is, or how it came about. We have not the foggiest idea. Which means we can't claim to know that it cannot exist in another setting.

Post by Avatar »

Zarathustra wrote:I'll look into neural nets as potential hardware that could be aware, but I'm skeptical.
You might be interested in this resource: deeplearning.net/

I don't think it's necessarily going to come down to the hardware. The hardware might be the physical resource that the software can use, but I suspect the software is where the consciousness will happen. (Using the hardware, to be sure, but the software will do the work.)
Zarathustra wrote:Strictly speaking, computers only manipulate on/off circuits. We assign on/off to be 0 or 1.
I think breakthroughs in quantum computing are going to be a big factor here too. This will effectively allow a bit to be both 0 and 1 at the same time, opening all sorts of exciting new avenues:

www.research.ibm.com/ibm-q/learn/what-i ... computing/
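The usual way to picture it: a qubit's state is a pair of amplitudes, and gates rotate them. A vastly simplified one-qubit sketch in Python (a toy illustration, not how you'd program a real quantum machine):

[code]
import math, random

state = [1.0, 0.0]  # amplitudes for |0> and |1>; start in |0>
h = 1 / math.sqrt(2)

# Apply a Hadamard gate: equal superposition -- loosely, "both 0 and 1
# at once" -- until a measurement forces one outcome.
state = [h * state[0] + h * state[1],
         h * state[0] - h * state[1]]

p0 = state[0] ** 2  # measurement probabilities (amplitudes squared)
print(f"P(0) = {p0:.2f}, P(1) = {1 - p0:.2f}")  # 0.50 each
print("measured:", 0 if random.random() < p0 else 1)
[/code]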

--A

Post by Zarathustra »

F&F, I agree that consciousness can't violate the laws of nature.
Fist and Faith wrote: We can't explain what it is, or how it came about. We have not the foggiest idea. Which means we can't claim to know that it cannot exist in another setting.
And yet people are confident we can build this thing when we can't explain what it is or how it came about?

Gödel proved that no formal system rich enough to express arithmetic can be both consistent and complete. A computer is nothing but a formal system running on a processor. No computer could ever prove Gödel's theorem. But human brains can. So even if you doubt that I can know a computer will never be conscious, it's a fact that a computer will never do what brains do in this regard. Now you have to ask why brains can do this thing that computers can't do (even in principle). People like Roger Penrose believe it's because of conscious understanding--in other words, the very thing I'm talking about: semantics. We can rise above symbol manipulation to understand Gödel's proof, while his theorem proves that no formal system could ever do this.

Av, quantum computing is something entirely different, and it might be what we need to build conscious machines.