Scientists: Artificial life likely in 3 to 10 years

Technology, computers, sciences, mysteries and phenomena of all kinds, etc., etc. all here at The Loresraat!!

Moderator: Vraith

User avatar
Zarathustra
The Gap Into Spam
Posts: 19846
Joined: Tue Jan 04, 2005 12:23 am
Has thanked: 1 time
Been thanked: 1 time

Post by Zarathustra »

Cail wrote:. . . we have a very, very limited experience with what the definition of terms like consciousness is, and what the nature and manifestations of consciousness are.
On the contrary, I believe we have an extensive and intimate experience with what it means to be conscious, since that's exactly what our being consists of. I think we're perfectly entitled to define consciousness in terms of how we experience consciousness. Isn't our consciousness a valid manifestation?

If there is a "general" definition of consciousness which encompasses all forms of consciousness, then it must also encompass ours. Thus, our consciousness partakes in this general nature. And therefore, we're in a perfect position to contemplate that nature since we partake of it.

Here's some food for thought: John Searle's Chinese Room thought experiment. This is basically the argument I'm making.
Success will be my revenge -- DJT
User avatar
Cail
Lord
Posts: 38981
Joined: Mon Mar 08, 2004 1:36 am
Location: Hell of the Upside Down Sinners

Post by Cail »

Sure, our consciousness falls under the general definition, but it's only one example. Sort of like looking at a duck and assuming that all birds swim.
"There is only one basic human right, the right to do as you damn well please. And with it comes the only basic human duty, the duty to take the consequences." - PJ O'Rourke
_____________
"Men and women range themselves into three classes or orders of intelligence; you can tell the lowest class by their habit of always talking about persons; the next by the fact that their habit is always to converse about things; the highest by their preference for the discussion of ideas." - Charles Stewart
_____________
"I believe there are more instances of the abridgment of the freedom of the people by gradual and silent encroachments of those in power than by violent and sudden usurpations." - James Madison
_____________
User avatar
Loredoctor
Lord
Posts: 18609
Joined: Sun Jul 14, 2002 11:35 pm
Location: Melbourne, Victoria
Contact:

Post by Loredoctor »

Malik23 wrote:
Loremaster wrote: Our neural networks compute - process - data.
Our neural networks may be capable of carrying out the computations we perform in our thoughts, but computation isn't what neurons do. Sensory input isn't data. Data is pure information. Data is the abstract formalization of input into binary numbers. A photon striking my retina isn't data. The electrical impulses which register this impact aren't data, either. Nowhere in our neurons is physical input translated into information. That is done at a higher level than the neurons themselves. That is done in our mind, our thoughts, our understanding.
How is it not data? Sensory input is 'converted' into electrochemical signals. That's information - a signal sent to the brain from the sensors. Then the neural networks process that information.
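To make "a neuron processes signals" concrete, here is a minimal sketch - a toy, not a biological model; the threshold, leak rate and input values are all invented - of a leaky integrate-and-fire unit, the standard cartoon of how a neuron turns incoming current into spikes:

Code:
# Toy leaky integrate-and-fire neuron. All numbers are invented for
# illustration; real neurons are electrochemical, not floating-point.

def simulate_neuron(inputs, threshold=1.0, leak=0.9):
    """Integrate input over time; fire when the sum crosses threshold."""
    potential = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(t)   # the neuron fires
            potential = 0.0    # reset after a spike
    return spikes

# A photon striking the retina becomes a current pulse; the "signal sent
# to the brain" is just which time steps produce spikes.
print(simulate_neuron([0.3, 0.4, 0.5, 0.1, 0.9, 0.2]))  # [2, 5]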
Malik23 wrote:That is done at a higher level than the neurons themselves. That is done in our mind, our thoughts, our understanding.
As in what organ? Are you suggesting a soul or something beyond the brain? What evidence do you have for higher processing beyond the cortical layers of the brain? :)
Malik23 wrote:Irrational processes do not derive from computation.
Yes they do. It's called biased thinking or lack of thinking. All research in psychiatry and psychology points to the fact that there is a flawed logical process behind irrational thoughts (e.g. depression is characterised by biased processes). The brain receives sensory data, then processes it based upon the nature of the data and a knowledge base. Basic cognitive science.
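Since the obvious objection is "how could a computer have a flawed logical process?", here is a toy sketch - the weights are invented by me, not a clinical model - of one mechanical evidence-weighing routine where skewing a single parameter yields systematically "irrational" conclusions from pure computation:

Code:
# One evidence-weighing routine; a single skewed parameter produces
# systematically gloomy conclusions. All weights are invented.

def judge(evidence, negative_bias=1.0):
    """Sum the evidence; bias > 1 over-weights whatever is negative."""
    score = sum(e if e > 0 else e * negative_bias for e in evidence)
    return "things are fine" if score >= 0 else "everything is hopeless"

day = [+2.0, +1.5, -1.0]              # a mostly good day
print(judge(day))                      # "things are fine"
print(judge(day, negative_bias=4.0))   # "everything is hopeless"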
Malik23 wrote: AI has never been about making conscious machines, because we don't have the slightest clue how to produce consciousness. No, AI has always been about mimicking human actions, the output of our conscious thought. Creating machines which can respond "intelligently" to their environment is a completely separate issue from creating a machine which has a mind.
I'm afraid that many AI researchers have been looking into the nature of consciousness and trying to make AIs that have it. You're taking the literal definition of AI - artificial intelligence - and thinking that's all it's about. To make artificial minds, one has to try to replicate consciousness. As far as I know, there is no scientific law against it.

Unless one believes that God created all life, you have to accept that eventually we will develop thinking machines. Evolution managed to produce thinking organisms, and so far no one has given a single reason why cognition should reside only in 'organic structures'.
Waddley wrote:your Highness Sir Dr. Loredoctor, PhD, Esq, the Magnificent, First of his name, Second Cousin of Dragons, White-Gold-Plate Wielder!
User avatar
Loredoctor
Lord
Posts: 18609
Joined: Sun Jul 14, 2002 11:35 pm
Location: Melbourne, Victoria
Contact:

Post by Loredoctor »

Fist and Faith wrote:(And to make the discussion even more difficult, Loremaster does not think free will exists);
Cause and effect, Fist.
Waddley wrote:your Highness Sir Dr. Loredoctor, PhD, Esq, the Magnificent, First of his name, Second Cousin of Dragons, White-Gold-Plate Wielder!
User avatar
Fist and Faith
Magister Vitae
Posts: 25492
Joined: Sun Dec 01, 2002 8:14 pm
Has thanked: 9 times
Been thanked: 57 times

Post by Fist and Faith »

I'm sayin'!! :D
All lies and jest
Still a man hears what he wants to hear
And disregards the rest
-Paul Simon

User avatar
emotional leper
The Gap Into Spam
Posts: 4787
Joined: Tue May 29, 2007 4:54 am
Location: Hell. I'm Living in Hell.

Post by emotional leper »

Free will does not exist; however, it might as well exist, since we cannot determine the outcome beforehand, even though the outcome is already determined.

Free will is the prisoner rattling the bars of his cage.
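That "determined, yet undeterminable beforehand" point has a standard toy illustration: the logistic map. It is fully deterministic - no randomness anywhere - yet a starting difference of one part in a billion destroys any hope of prediction:

Code:
# The logistic map: x' = r*x*(1-x). Deterministic, but two starting
# states differing by one part in a billion diverge completely within
# a few dozen steps.

def logistic(x, r=4.0, steps=40):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic(0.400000000)
b = logistic(0.400000001)  # imperceptibly different starting state
print(a, b)  # wildly different outcomes, though both were "determined"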
B&
User avatar
Loredoctor
Lord
Posts: 18609
Joined: Sun Jul 14, 2002 11:35 pm
Location: Melbourne, Victoria
Contact:

Post by Loredoctor »

If a man is the product of his upbringing - all that he is, all that he has learned and experienced - and the world obeys immutable laws, then the man is the product of the universe. The mind, being also a product, encounters events that have come about because of their antecedents, and acts upon them in the way it has learned or in the way it has been made.

In short, humans act the way we do because of what we have encountered before. Separate two identical twins and they will grow up to act differently if they have been raised in two unique environments. But every act is the product of the world around them. No man is in a vacuum, yet free will pretends that he is.
Waddley wrote:your Highness Sir Dr. Loredoctor, PhD, Esq, the Magnificent, First of his name, Second Cousin of Dragons, White-Gold-Plate Wielder!
User avatar
Zarathustra
The Gap Into Spam
Posts: 19846
Joined: Tue Jan 04, 2005 12:23 am
Has thanked: 1 time
Been thanked: 1 time

Post by Zarathustra »

Loremaster wrote:How is it not data? Sensory input is 'converted' into electrochemical signals. That's information - a signal sent to the brain from the sensors. Then the neural networks process that information.
Electrochemical signals aren't in themselves information (though we can use electrochemical signals to transmit information).

Are the electrical currents in your home's wiring information? No. Information is a symbolic representation. Electrical signals can be used to symbolize and model information, but information isn't in electrical signals.

My car receives physical input from my steering wheel. I turn it, and a complex series of actions transfers turning-the-steering-wheel into the-front-tires-turning. But this transference of physical action doesn't mean that my steering wheel is processing information. It's merely transferring a physical action, like dominos falling. The same thing happens with our retina and optic nerves.

Information contains semantic meaning. Electric signals don't have semantics, only form and structure (syntax).
Loremaster wrote:Are you suggesting a soul or something beyond the brain? What evidence do you have for higher processing beyond the cortical layers of the brain? :)
I'm not suggesting a soul. I'm saying that information only becomes meaningful to a mind. The computer may be able to display a picture of a sunset on its monitor. But just because it can process the correct electric signals to produce this picture doesn't mean that it understands it is producing a picture of a sunset. Yet, we do understand what our electrical signals are "saying." This understanding is itself something extra, something more than the electrical signals. Otherwise, you'd have to say that our neurons themselves understand the meaning of the signals they are transferring. If you don't allow for a holistic phenomenon--a mind--then you must admit that the individual neurons which transmit the physical impact of a photon upon the retina KNOW that this photon represents a piece of the sun. That's an amazing neuron, you've got there.

Computers process data because that's how we design them. We explicitly trace out circuit boards so that electrical currents model logical patterns. There is a purpose in their design, and this purpose is explicitly meaningful. There is no purpose in our own design. Our brains just transfer one type of signal (light, for example) into another type of signal (electric). This transference in itself can't account for meaning. Electricity is no more meaningful than light. Turning one into another doesn't create meaning. Something else, something extra, is happening.
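That "tracing out" of meaning-by-design is easy to sketch. Here one primitive gate, NAND, is wired - purely by the designer's choice - into XOR; the logical pattern lives in the wiring diagram we chose, not in the electricity. (A toy sketch, of course, not real circuit design.)

Code:
# One primitive gate, NAND, wired by design into XOR. The meaning of
# the pattern is in the chosen wiring, not in the current itself.

def NAND(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def XOR(a: int, b: int) -> int:
    n = NAND(a, b)
    return NAND(NAND(a, n), NAND(b, n))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))  # 0 0->0, 0 1->1, 1 0->1, 1 1->0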
Loremaster wrote:
Malik23 wrote:Irrational processes do not derive from computation.
Yes they do. It's called biased thinking or lack of thinking. All research in psychiatry and psychology points to the fact that there is a flawed logical process behind irrational thoughts (e.g. depression is characterised by biased processes). The brain receives sensory data, then processes it based upon the nature of the data and a knowledge base. Basic cognitive science.
If irrational processes can derive from computation, I'd love to see those computations. How do you program a computer to have "flawed logical processes"???
Loremaster wrote:To make artificial minds, one has to try to replicate consciousness. As far as I know, there is no scientific law against it.

Unless one believes that God created all life, you have to accept that eventually we will develop thinking machines. Evolution managed to produce thinking organisms, and so far no one has given a single reason why cognition should reside only in 'organic structures'.
I've said many times here that I do think it will be possible some day to create conscious machines. But they won't be computers. Computers are mere child's toys compared to the machines that will eventually become conscious. This isn't an emotional argument I'm making. It is rooted in mathematics and logic (see the other thread for my reasoning on Godel's theorem).

Yes, evolution managed to produce thinking organisms. But it didn't do it by making Turing machines. For us to think that we can build conscious, thinking machines without even knowing how nature managed it seems a much greater prejudice than the one I'm being accused of. That would be like thinking we can build a flying machine without understanding anything about aerodynamics, lift, and drag--or how birds manage it.

Of course there's no scientific law against creating conscious machines. But there's a clear logical "law" that proves we can't create conscious machines by merely running algorithms.
Success will be my revenge -- DJT
User avatar
Loredoctor
Lord
Posts: 18609
Joined: Sun Jul 14, 2002 11:35 pm
Location: Melbourne, Victoria
Contact:

Post by Loredoctor »

Malik23 wrote:I'm not suggesting a soul. I'm saying that information only becomes meaningful to a mind. . . . . This understanding is itself something extra, something more than the electrical signals. Otherwise, you'd have to say that our neurons themselves understand the meaning of the signals they are transferring. If you don't allow for a holistic phenomenon--a mind--then you must admit that the individual neurons which transmit the physical impact of a photon upon the retina KNOW that this photon represents a piece of the sun. That's an amazing neuron, you've got there.
First, I'll have to ask you to stop putting words into my mouth. :) I am not saying that the neuron is the answer. Have you missed my posts in this thread where I've said neural networks? :) Neurons are simple processors - they process chemical signals. But it's the dense, complex networks of them that produce the mind. You argue the holistic approach. But ultimately, it's neural networks firing away that creates the mind. It's not: network>cognitive processes>MIND, because any analysis will show that it's still the network. Simple proof comes from the fact that drugs affect the mind (because they affect chemical processes within and around neurons), or how brain damage alters behaviour.
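A toy sketch of the "it's the network, not the neuron" point (the weights are hand-picked for illustration, not learned): three identical threshold units wired so that the network computes XOR although no single unit does, and nudging one parameter - a crude stand-in for a drug altering the chemistry - changes the behaviour of the whole:

Code:
# The function lives in the network, not in any one unit. The "drug"
# argument nudges a single bias and the whole behaviour changes.

def unit(inputs, weights, bias):
    """One neuron: fire iff the weighted sum plus bias is positive."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

def network(x1, x2, drug=0.0):
    h_or = unit([x1, x2], [1, 1], -0.5)          # fires on any input
    h_and = unit([x1, x2], [1, 1], -1.5 + drug)  # fires only on both
    return unit([h_or, h_and], [1, -1], -0.5)    # OR and not AND = XOR

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, "sober:", network(x1, x2),
          "drugged:", network(x1, x2, drug=1.0))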
Last edited by Loredoctor on Sat Sep 08, 2007 7:24 am, edited 2 times in total.
Waddley wrote:your Highness Sir Dr. Loredoctor, PhD, Esq, the Magnificent, First of his name, Second Cousin of Dragons, White-Gold-Plate Wielder!
User avatar
Loredoctor
Lord
Posts: 18609
Joined: Sun Jul 14, 2002 11:35 pm
Location: Melbourne, Victoria
Contact:

Post by Loredoctor »

Malik23 wrote:I've said many times here that I do think it will be possible some day to create conscious machines. But they won't be computers. Computers are mere child's toys compared to the machines that will eventually become conscious. This isn't an emotional argument I'm making. It is rooted in mathematics and logic (see the other thread for my reasoning on Godel's theorem).
Ahhh, you must have missed my earlier post where I stated that AI will likely be created using artificial networks.
Waddley wrote:your Highness Sir Dr. Loredoctor, PhD, Esq, the Magnificent, First of his name, Second Cousin of Dragons, White-Gold-Plate Wielder!
Queeaqueg
The Gap Into Spam
Posts: 2508
Joined: Thu Nov 11, 2004 8:21 pm
Location: Somewhere

Post by Queeaqueg »

Malik23 wrote:I'm not suggesting a soul. I'm saying that information only becomes meaningful to a mind. . . . . This understanding is itself something extra, something more than the electrical signals. Otherwise, you'd have to say that our neurons themselves understand the meaning of the signals they are transferring. If you don't allow for a holistic phenomenon--a mind--then you must admit that the individual neurons which transmit the physical impact of a photon upon the retina KNOW that this photon represents a piece of the sun. That's an amazing neuron, you've got there.
Loremaster wrote:First, I'll have to ask you to stop putting words into my mouth. I am not saying that the neuron is the answer. Have you missed my posts in this thread where I've said neural networks? Neurons are simple processors - they process chemical signals. But it's the dense, complex networks of them that produce the mind. You argue the holistic approach. But ultimately, it's neural networks firing away that creates the mind. It's not: network>cognitive processes>MIND, because any analysis will show that it's still the network. Simple proof comes from the fact that drugs affect the mind (because they affect chemical processes within and around neurons), or how brain damage alters behaviour. The process is likely to be: network>cognitive processes/mind.
Don't worry, Malik, I agree with you... I think the Mind is something more.
User avatar
Zarathustra
The Gap Into Spam
Posts: 19846
Joined: Tue Jan 04, 2005 12:23 am
Has thanked: 1 time
Been thanked: 1 time

Post by Zarathustra »

Loremaster, thank you for sharing your ideas. I honestly didn't mean to put words in your mouth. Sorry if I misunderstood or mischaracterized your position.

I'm not sure how neural networks are any different from individual neurons, when it comes to the basic concepts we're discussing here. I'll need you to explain that one to me. I fail to see how complexity in electrical signals produces consciousness. It seems like just another way to avoid giving an explanation. Question: "How is consciousness produced?" Answer: "It's complicated." :)
Loremaster wrote:But ultimately, it's neural networks firing away that creates the mind.
This answer is too easy. It's easy to say, but impossible to prove. How do neural networks firing away create something immaterial, personal, subjective, holistic, empathetic, and intelligent? I don't get it. And I don't think anyone else does, either.
Loremaster wrote:Simple proof comes from the fact that drugs affect the mind (because they affect chemical processes within and around neurons), or how brain damage alters behaviour.
This proves nothing other than the fact that consciousness comes from the brain. I'm with you that far. But it still doesn't show how consciousness arises. I don't think the neural activity is primary. In fact, I think neural activity is an effect of consciousness, not the cause. You mention drugs. What about general anaesthetics? Roger Penrose says:
Roger Penrose wrote: "An important avenue towards answering questions concerning the physical basis of consciousness comes from an examination of precisely what it is that very specifically turns consciousness off. General anaesthetics have precisely this property--completely reversibly, if the concentrations are not too high--and it is a remarkable fact that general anaesthesia can be induced by a large number of completely different substances that seem to have no chemical relationship with one another whatever. Included in the list of general anaesthetics are such chemically different substances as nitrous oxide, ether, chloroform, halothane, isoflurane and even the chemically inert gas xenon!

If it is not chemistry that is responsible for general anaesthesia, then what can it be that is responsible? There are other types of interaction that can take place between molecules, which are much weaker than chemical forces. One of these is referred to as the van der Waals force. The van der Waals force is a weak attraction between molecules which have electric dipole moments (the 'electric' equivalent of the magnetic dipole moments that measure the strength of ordinary magnets). . . . it has been suggested (Hameroff and Watt 1983) that general anaesthetics may act through the agency of their van der Waals interactions . . . which interfere with the normal switching actions of tubulin. As anaesthetic gases diffuse into individual nerve cells, their electric dipole properties (which need have little directly to do with their ordinary chemical properties) can thereby interrupt the actions of microtubules. . . . It is a strong possibility that the relevant proteins are the tubulin dimers in neuronal microtubules--and that it is the consequent interruption of the functioning of microtubules that results in the loss of consciousness.

As support for suggestions that it is the cytoskeleton that is directly affected by general anaesthetics, it may be remarked that it is not only the 'higher animals' such as mammals or birds that are rendered immobile by these substances. A paramecium, an amoeba, or even green slime mould . . . is similarly affected by anaesthetics at about the same kind of concentration.

[W]hat the preceding arguments strongly suggest is that it is not just the neuronal organization of our brains that is important. The cytoskeletal underpinnings of those very neurons seem to be essential for consciousness to be present."
What do we gain by pushing consciousness back from neurons to smaller cytoskeletal structures within individual neurons? The fact that these structures are small enough that quantum effects can be maintained (macro-scale objects do not retain quantum effects; the quantum proxy waves get collapsed on these scales). And with quantum entanglement, cytoskeletons may perpetuate quantum effects on larger scales filling the entire brain--quantum coherence, implying that the brain may be a Bose-Einstein condensate.
Success will be my revenge -- DJT
User avatar
Loredoctor
Lord
Posts: 18609
Joined: Sun Jul 14, 2002 11:35 pm
Location: Melbourne, Victoria
Contact:

Post by Loredoctor »

Malik23 wrote:Loremaster, thank you for sharing your ideas. I honestly didn't mean to put words in your mouth. Sorry if I misunderstood or mischaracterized your position.
And I apologize if I seemed annoyed.
Malik23 wrote:I'm not sure how neural networks are any different from considering individual neurons, when it comes to the basic concepts we're discussing here. I'll need you to explain that one to me. I fail to see how complexity in electrical signals produces consciousness. It seems just another way to hide behind one's explanation. Question: "How is consciousness produced?" Answer: "It's complicated." :)
I guess if you take away the soul, you have to assume (BAD WORD!!) that networks are responsible. Kind of like saying the CPU in a PC is responsible for a program running. The soul would be the user. ;) But yes, it is complicated, and no one has produced an adequate answer - not even neuropsychologists or neurologists, for that matter.
Malik23 wrote:This answer is too easy. It's easy to say, but impossible to prove. How do neural networks firing away create something immaterial, personal, subjective, holistic, empathetic, and intelligent? I don't get it. And I don't think anyone else does, either.
I'll have to find an article which discusses how artificial networks (not computers) are learning and behaving like us when it comes to acquiring language. If we accept that all that makes our personalities is within our bodies, that we don't have a soul, then I guess we have to assume that the mind is the product of the brain, and hence neural networks.

I think what you are referring to are 'qualia facts' - that is, the quality of our senses. Why does a red rose mean something - the qualia of a red rose - when our senses detect colour, smell, shape, etc.? What is it in our brains that appreciates it? How is it possible that neural networks understand or appreciate the rose? I don't know. That doesn't mean I am wrong (or right), nor does it prove that the mind is outside of the body. It's complicated. :lol:
Malik23 wrote:This proves nothing other than the fact that consciousness comes from the brain. I'm with you that far. But it still doesn't show how consciousness arises. I don't think the neural activity is primary. In fact, I think neural activity is an effect of consciousness, not the cause. You mention drugs. What about general anaesthetics? Roger Penrose says:
Roger Penrose wrote:[the passage on general anaesthetics and microtubules, quoted in full above]
A nice quote. :) It still shows that the mind is affected by something physical occurring. Maybe memories, instincts, etc. are localised in networks, but the consciousness? Hmmm, a good question. Damn you, Malik! ;)

Malik23 wrote:What do we gain by pushing consciousness back from neurons to smaller cytoskeletal structures within individual neurons? The fact that these structures are small enough that quantum effects can be maintained (macro-scale objects do not retain quantum effects; the quantum proxy waves get collapsed on these scales). And with quantum entanglement, cytoskeletons may perpetuate quantum effects on larger scales filling the entire brain--quantum coherence.


I have to agree that the quantum world might play a massive role in consciousness. The funny thing is, I hope this might be the case. Maybe when we get quantum processors we might have thinking machines.

Great post, Malik.
Waddley wrote:your Highness Sir Dr. Loredoctor, PhD, Esq, the Magnificent, First of his name, Second Cousin of Dragons, White-Gold-Plate Wielder!
User avatar
Zarathustra
The Gap Into Spam
Posts: 19846
Joined: Tue Jan 04, 2005 12:23 am
Has thanked: 1 time
Been thanked: 1 time

Post by Zarathustra »

From the link no one seemed to check out:
Wikipedia wrote:The Chinese Room argument is a thought experiment designed by John Searle (1980 [1]) as a counterargument to claims made by supporters of strong artificial intelligence (see also functionalism).

Searle laid out the Chinese Room argument in his paper "Minds, brains and programs," published in 1980. Ever since, it has been a mainstay of the debate over the possibility of what Searle called strong artificial intelligence. Supporters of strong artificial intelligence believe that an appropriately programmed computer isn't simply a simulation or model of a mind; it actually counts as a mind. That is, it understands, has cognitive states, and can think. Searle's argument against (or more precisely, his thought experiment intended to undermine) this position, the Chinese Room argument, goes as follows:

Suppose that, many years from now, we have constructed a computer that behaves as if it understands Chinese. In other words, the computer takes Chinese characters as input and, following a set of rules (as all computers can be described as doing), correlates them with other Chinese characters, which it presents as output. Suppose that this computer performs this task so convincingly that it easily passes the Turing test. In other words, it convinces a human Chinese speaker that the program is itself a human Chinese speaker. All the questions the human asks are responded to appropriately, such that the Chinese speaker is convinced that he or she is talking to another Chinese-speaking human. The conclusion that proponents of strong AI would like to draw is that the computer understands Chinese, just as the person does.

Now, Searle asks us to suppose that he is sitting inside the computer. In other words, he is in a small room in which he receives Chinese characters, consults a rule book, and returns the Chinese characters that the rules dictate. Searle notes that he doesn't, of course, understand a word of Chinese. Furthermore, he argues that his lack of understanding goes to show that computers don't understand Chinese either, because they are in the same situation as he is. They are mindless manipulators of symbols, just as he is — and they don't understand what they're 'saying', just as he doesn't.
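The rule-following in the room is easy to sketch as a program, which makes Searle's point vivid: the replies come from pure symbol lookup, and nothing in the program understands them. (The rule-book entries below are my own invented examples, not Searle's.)

Code:
# Searle's room as a program: replies come from rule lookup alone.
# The "rule book" is invented; nothing here understands Chinese.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你吃饭了吗？": "吃过了。",      # "Have you eaten?" -> "Yes, I have."
}

def chinese_room(characters: str) -> str:
    # Match the incoming symbols and copy out whatever the rules
    # dictate -- pure symbol manipulation, no comprehension.
    return RULE_BOOK.get(characters, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # fluent output; empty room inside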
Success will be my revenge -- DJT
User avatar
Loredoctor
Lord
Posts: 18609
Joined: Sun Jul 14, 2002 11:35 pm
Location: Melbourne, Victoria
Contact:

Post by Loredoctor »

Malik23 wrote:From the link no one seemed to check out:
Wikipedia wrote:[the Chinese Room argument, quoted in full above]
I studied the Chinese room thought experiment at uni. But the thing is, ultimately how do we prove that there isn't a mind understanding the data? Nor can we prove that there is - no more than I can prove that elephants or dolphins, or even other humans, have one.
Waddley wrote:your Highness Sir Dr. Loredoctor, PhD, Esq, the Magnificent, First of his name, Second Cousin of Dragons, White-Gold-Plate Wielder!
User avatar
emotional leper
The Gap Into Spam
Posts: 4787
Joined: Tue May 29, 2007 4:54 am
Location: Hell. I'm Living in Hell.

Post by emotional leper »

Malik23 wrote:From the link no one seemed to check out:
Wikipedia wrote:[the Chinese Room argument, quoted in full above]
What the Chinese Room demonstrates clearly is that intelligence is created by many things in combination.

The only way I think we will ever arrive at something that most people would call 'Artificial Intelligence' is the way it was done in 'The Moon is a Harsh Mistress' -- by accident.

Moon is a Harsh Mistress Spoilers:
Spoiler
They start off with a machine that can accept orders in Loglan and English -- it understands both, and can 'learn/be programmed' to understand other languages. The machine is designed to supervise other machines -- to make decisions -- often based on very little or incomplete data. It has to make guesses. To facilitate making guesses, it keeps track of what it has done before, based on what data, and what the outcome was -- it has memory, and can compare future actions against past performance. Eventually, after being greatly enhanced -- more memory, more processing capability, more storage, etc. -- one day he just 'wakes up.' But not in the dramatic 'Oh, hi there, world. I'm a machine' way; rather, in a battery of 100 questions he deviates from the expected answer twice. Over the course of about a year or so, if I recall correctly, he develops a personality.
I believe, along with Heinlein, that any intelligence we create will be an emergent one. Just as an oyster is not self-aware, a cat is somewhat self-aware, a dog more so, and, well, I don't know about you, but I am self-aware. I believe it will be a gradual process, one which may not be possible to repeat the same way twice.
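As a humble illustration of emergence from simple rules - of pattern, to be clear, not of awareness - Conway's Game of Life is the standard toy: each cell follows one trivial local rule, yet "gliders" walk across the grid, behaviour stated nowhere in the rule:

Code:
# Conway's Game of Life: one trivial local rule per cell, yet a glider
# "walks" across the grid -- behaviour stated nowhere in the rule.

from collections import Counter

def step(live):
    """live: a set of (x, y) cells. Apply the Life rule once."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):        # four steps move the glider one cell diagonally
    glider = step(glider)
print(sorted(glider))     # same shape, displaced -- an emergent "walker"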
B&
User avatar
emotional leper
The Gap Into Spam
Posts: 4787
Joined: Tue May 29, 2007 4:54 am
Location: Hell. I'm Living in Hell.

Post by emotional leper »

Loremaster wrote:
Malik23 wrote:From the link no one seemed to check out:
Wikipedia wrote:[the Chinese Room argument, quoted in full above]
I studied the Chinese room thought experiment at uni. But the thing is, ultimately how do we prove that there isn't a mind understanding the data? Nor can we prove that there is - no more than I can prove that elephants or dolphins, or even other humans, have one.
The problem with the Chinese room is that the machine does more than it is stated to be able to do. If I built a machine that understood the rules of the English language, that would be an accomplishment, but it would not be something that could pass a Turing test. I could simply ask it, "Which do you prefer, Football or American Football, and why?" and the machine would grind its gears to dust. To be able to pass a Turing test, a machine would have to know more than simple language rules. And when a machine gets to the point where it can fool a human into thinking it's human, one is close to being able to generate a real AI, because the machine can display several of the aspects of intelligence as we humans define it -- though I would not grant that machine rights based solely on that ability.
B&
User avatar
Zarathustra
The Gap Into Spam
Posts: 19846
Joined: Tue Jan 04, 2005 12:23 am
Has thanked: 1 time
Been thanked: 1 time

Post by Zarathustra »

Loremaster wrote:But the thing is, ultimately how do we prove that there isn't a mind understanding the data? Nor can we prove that there is - no more than I can prove that elephants or dolphins, or even other humans, have one.
True, we can't technically prove that other biological organisms have minds. Though I think there are many good reasons to assume that they do have minds, that's not the point. Not being able to prove other people or animals have minds doesn't mean that we must give computers just the same benefit of the doubt as we do our fellow creatures. We start out knowing that computers don't have minds. They don't right now. So the question is: at what point do we go from knowing that they don't, to giving them just as much belief as we give other humans?

My point is that nothing we can possibly do with Turing machines--universal computers--will ever, even in principle, give us reason to make this leap of faith. In fact, it would be irrational to do so, because those future computers will always, inevitably, do nothing more than what a calculator does: symbol manipulation according to rules. Algorithms. Computations. That's all they do, no matter whether their construction is serial processing, parallel processing, processing with simulated "neural" networks, or any other physical technology you devise. That's because a computer doesn't compute due to the ingenuity of its physical hardware (computations can be performed on much simpler machines). It computes because of its logical structure. And that logical structure is the manipulation of symbols according to rules. The problem isn't that we haven't yet figured out the right rules to produce consciousness. The problem is that consciousness isn't produced by the manipulation of symbols according to logical rules.
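For the record, "symbol manipulation according to rules" is precisely the Turing machine model, and the whole model fits in a dozen lines. (The rule table below - invert every bit, then halt - is an invented example, not any particular machine.)

Code:
# A minimal Turing machine: a tape, a head, and a rule table.

RULES = {
    # (state, read symbol) -> (write symbol, head move, next state)
    ("run", "0"): ("1", +1, "run"),
    ("run", "1"): ("0", +1, "run"),
    ("run", "_"): ("_", 0, "halt"),   # "_" is the blank symbol
}

def turing_machine(tape: str) -> str:
    cells, head, state = list(tape) + ["_"], 0, "run"
    while state != "halt":
        write, move, state = RULES[(state, cells[head])]
        cells[head] = write
        head += move
    return "".join(cells).rstrip("_")

print(turing_machine("10110"))  # 01001 -- rules applied; nothing "understood"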
emotional leper wrote: I believe, along with Heinlein, that any intelligence we create will be an emergent one.
Penrose deals with the idea of an "emergent" consciousness developing from computer programs left to mutate and selected by mathematical or logical versions of "natural selection." The basic problem I outlined above remains. Even programs that evolve will still evolve according to algorithmic interactions. Evolutions of symbol manipulations will still remain only symbol manipulations, nothing more. But conscious understanding must be more, because our insight can penetrate beyond the limits of algorithmic systems (Godel's Theorem).
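A toy sketch of why "evolving programs" changes nothing in this argument: the entire mutate-and-select loop is itself one fixed algorithm. (The target string and mutation rate are invented for illustration.)

Code:
# "Evolution" of strings toward a target: every step of the mutate-and-
# select loop is a fixed, rule-bound symbol manipulation.

import random

TARGET = "THINK"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(s: str) -> int:
    return sum(a == b for a, b in zip(s, TARGET))  # matching symbols

def mutate(s: str, rate: float = 0.2) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

best = "".join(random.choice(ALPHABET) for _ in TARGET)
for generation in range(10_000):
    child = mutate(best)
    if fitness(child) >= fitness(best):  # "natural selection"
        best = child
    if best == TARGET:
        break
print(generation, best)  # usually reaches THINK; every step was rule-bound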
The problem with the Chinese room is the fact that the machine does more than it is stated to be able to do. If I built a machine that understood the rules of the English Language, that would be an accomplishment, but would not be something that would pass a Turing Test. I could simply ask it, "Which do you prefer, Football or American Football, and why?" and the machine would grind its gears to dust. To be able to pass a Turing test, a machine would have to know more than simple language rules. And when a machine gets to the point where it's able to fool a human into thinking it's a human, while I would not grant that machine rights based solely on that ability, one is close to being able to generate a real AI, due to the fact that the machine is able to display several of the aspects of intelligence as we humans define it.
While I think it would be exceedingly difficult to program a computer to pass the Turing test against sufficiently clever humans, I think it would be easy to program a computer to answer convincingly most of the questions we tend to think up. For instance, the computer could easily be programmed to respond: "I don't know the difference between American Football and Football. I don't like sports." Faking ignorance and disinterest would cover a lot of holes in any potential database of answers you'd have to create. Humans are, after all, remarkably ignorant and indifferent about most issues.
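That deflection tactic is almost trivially easy to sketch (all the phrases below are invented examples): anything outside the canned list gets papered over with ignorance or indifference:

Code:
# "Fake ignorance and disinterest": anything without a canned answer
# gets a plausible deflection.

import random

CANNED = {
    "hello": "Hi. Long day, sorry if I'm slow to reply.",
    "how are you": "Tired, honestly. You?",
}

DEFLECTIONS = [
    "I don't really follow sports.",
    "No idea, I've never thought about it.",
    "Don't care much either way, to be honest.",
]

def reply(question: str) -> str:
    key = question.lower().strip("?!. ")
    # A hole in the answer database is papered over with indifference.
    return CANNED.get(key, random.choice(DEFLECTIONS))

print(reply("How are you?"))
print(reply("Which do you prefer, Football or American Football?"))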

I do not think any amount of convincing performance on a Turing Test demonstrates a nearing of the goal. This doesn't demonstrate programmers getting closer to creating consciously understanding entities. It only shows that they're getting better at using algorithms to fool humans. And without conscious understanding, it's not intelligence. It's just a simulation of intelligence.
Success will be my revenge -- DJT
User avatar
iQuestor
The Gap Into Spam
Posts: 2520
Joined: Thu May 11, 2006 12:20 am
Location: South of Disorder

Post by iQuestor »

This is a truly engaging topic. There are some really great posts here, and I have enjoyed reading them, and more importantly, it has opened my eyes to some possibilities and concepts I haven't had before.

great topic!

The Chinese Room argument doesn't seem to be complete as stated in the quote, though. Here is why I say that:

To break it down:

1. It has to pass the Turing test.
2. It has to convince a Chinese-speaking questioner that they are getting responses from another Chinese speaker.
3. Substitute the processor with a non-Chinese-speaking human using a character map and a set of rules to follow, achieving the same (albeit slower) results, which proves that true mastery of the Chinese language isn't required to programmatically simulate the skill.

1. If the computer passes the Turing test, then what's left to be done? I mean, this test is supposed to demonstrate intelligence by responding intelligently to a hidden questioner over a textual interface. Therefore, all it has to do is speak Chinese to meet goal #2; so what's the point?

2. I agree with EL, who said that the so-called computer that took in questions in Chinese and gave Chinese answers based on a programmatic understanding of the language does not take into account the actual thought part of that process; that is a big omission. However, if you read the description, the stated goal of this portion was not to convince the questioner of its intelligence, but only of its ability to provide appropriate responses in the Chinese language. Again, if it previously passed the Turing test, then why is this a big deal?

3. Again, this focuses on the ability to break down the admittedly complex Chinese language into an algorithm. It shows that a human can use a character map to simulate what the computer is doing. I guess it assumes the human doesn't use any cognition to generate responses, only follows a character map and a set of rules to provide them.

If the responses are in fluent Chinese, then the test has been satisfied. However, having previously passed the Turing test, I don't see why this point isn't moot.

Malik said:
I do not think any amount of convincing performance on a Turing Test demonstrates a nearing of the goal. This doesn't demonstrate programmers getting closer to creating consciously understanding entities. It only shows that they're getting better at using algorithms to fool humans. And without conscious understanding, it's not intelligence. It's just a simulation of intelligence.

Malik -- are you saying the Turing test isn't a valid test for machine intelligence? I don't think it is, because it can be passed by a sufficiently complex algorithm intended to fool a speaker, rather than by meeting some unbiased goal of proving intelligence that is beyond a human's judgement. I guess that is the hard part, though.
User avatar
Zarathustra
The Gap Into Spam
Posts: 19846
Joined: Tue Jan 04, 2005 12:23 am
Has thanked: 1 time
Been thanked: 1 time

Post by Zarathustra »

Queeaqueg wrote:
Don't worry, Malik, I agree with you... I think the Mind is something more.
Thanks for the support! I didn't mean to ignore your post. I spend way too much time arguing against other people, and sometimes forget to acknowledge those who aren't disagreeing with me. Thanks. I, too, think that mind is something "more."

That's not to say that I think mind is spiritual or another name for the soul. I do think it arises from physical origins. I just don't think we understand matter and material reality enough to say how this happens. I'm even open to the possibility that matter itself--the universe itself--is imbued with consciousness on its most fundamental levels. I don't necessarily believe that, but that's how far I'm willing to go to express my incredulity about our current understanding of matter.

Here's what I think is really going on. The universe has the ability to build up order on levels that far exceed what we'd expect from looking at the underlying levels. For instance, biological life in no way violates the laws of physics . . . and yet if you just looked at the laws of physics, you'd never be able to deduce that biological organisms would come into being. The laws of chemistry build directly upon the laws of physics. And biological organisms appear directly out of the laws of chemistry. And out of biological organisms, there appear consciously intelligent beings.

No higher level of organization violates the lower levels, but neither does it derive from those lower levels as a logical necessity. They build up by accident. And what's amazing about this is that the order we see on biological levels has nothing to do with the order we see at the level of physics. Sure, a cheetah's motion must still be described using Newton's laws of motion. But there's a huge difference between a cheetah running and a rock rolling down a hill. The animal operates in a sphere of action that rocks just don't deal with. So while cheetahs don't violate Newton's laws, they still operate on a level of organization which involves factors that Newton's laws don't describe. So a cheetah chasing an antelope involves a sphere of action that you'd never be able to deduce purely from the laws of physics.

This layering of order-on-top-of-order is what I find amazing. And the higher levels--while not violating the lower levels--still produce things which you can't deduce purely from the lower levels. And yet in some fashion, the possibility of cheetahs (or minds) was inherent in the universe from the beginning.

We live in an amazing place. It's almost magical.
Success will be my revenge -- DJT