Artificial Intelligence

Technology, computers, sciences, mysteries and phenomena of all kinds, etc., etc. all here at The Loresraat!!

Moderator: Vraith

Zarathustra
The Gap Into Spam
Posts: 19711
Joined: Tue Jan 04, 2005 12:23 am
Been thanked: 1 time

Post by Zarathustra »

I've touched on this elsewhere, but might as well make it official in an AI thread.

Roger Penrose presented the most devastating argument against hard AI, or functionalism, in his book SHADOWS OF THE MIND. Hard AI is the belief that all thinking is computation, and that even feelings of consciousness arise through carrying out the appropriate computations. When you put it like that, it sounds a little ridiculous. But that would actually have to be the case if we are ever to make computers into conscious, thinking machines.

Penrose then goes on to show that what we do when we think is clearly not computation. Even mathematical understanding isn't computation. It's one thing to claim this, but he actually argues it with Godel's theorem. Godel's theorem proves that no consistent formal system rich enough to express arithmetic can ever be complete. In other words, we can show that there will always be true statements within such a system which are not provable within that system. The reason we can show this is that we understand what the symbols mean. When we think and construct proofs, we can "step outside" the proof and understand its implications. And for this particular proof (Godel's theorem), our ability to understand is especially relevant.

A computer (or universal Turing machine) does nothing but manipulate symbols. That's what computation is. Computers manipulate symbols within the context of a set of rules (the program or software) which constitutes a formal system. And according to Godel's proof, no such formal system can ever prove every true statement that can be constructed within it--even if it ran forever. It's not a problem of infinite sets of statements. It's a problem of "viewing" the formal system from the "inside." Computers only compute. They don't think. No matter how complex their output is, they can never understand their own output.
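That's all "symbol manipulation" means, by the way. Here's a minimal sketch of a Turing-style machine in Python (the rule table is a toy of my own invention): it rewrites symbols on a tape according to a fixed table, and that is literally all it ever does.

Code:

    # A minimal Turing-style machine: nothing but symbol manipulation.
    # RULES maps (state, symbol) -> (symbol to write, head move, next state).
    # This toy table just flips 0s and 1s until it reaches a blank ("_").
    RULES = {
        ("flip", "0"): ("1", 1, "flip"),
        ("flip", "1"): ("0", 1, "flip"),
        ("flip", "_"): ("_", 0, "halt"),
    }

    def run(tape):
        tape, head, state = list(tape), 0, "flip"
        while state != "halt":
            write, move, state = RULES[(state, tape[head])]
            tape[head] = write
            head += move
        return "".join(tape)

    print(run("0110_"))  # -> "1001_"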

Our brains are doing something different.
Last edited by Zarathustra on Thu Jul 26, 2007 1:24 am, edited 1 time in total.
My revenge will be success -- DJT
wayfriend
.
Posts: 20957
Joined: Wed Apr 21, 2004 12:34 am
Has thanked: 2 times
Been thanked: 4 times

Post by wayfriend »

I agree with that conclusion, but not the reasoning.

One thing computers need in order to be conscious is senses. Maybe sight and sound, maybe other things. And we have already demonstrated that computers can recognize faces and understand language.

When such senses are at a computer-mind's disposal, the computer can start to build a list of things it recognizes, things it knows. These things become the basis of the symbols that Penrose sees as missing.

How? Well, all of us understand symbols, but how did we come to? By language, and by recognizing things happening. When we see someone running, and use language to say that this is called "running", we begin to understand symbols.

That is, after all, how we learn them.
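A toy sketch of that learning loop (the "features" are invented stand-ins for perception, and real perception is vastly harder): store labelled observations, then name new ones by similarity.

Code:

    # Store labelled observations, then name new ones by similarity.
    memory = []                       # list of (features, label) pairs

    def observe(features, label):
        # "Seeing" an event while hearing its name: store the pairing.
        memory.append((features, label))

    def recognize(features):
        # Name a new observation after its closest remembered example.
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(memory, key=lambda m: dist(m[0], features))[1]

    observe((9.0, 2.0), "running")    # e.g. (speed, stride) -- made up
    observe((1.0, 0.5), "walking")
    print(recognize((8.0, 1.5)))      # -> running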
.
Prebe
The Gap Into Spam
Posts: 7926
Joined: Mon Aug 08, 2005 7:19 pm
Location: People's Republic of Denmark

Post by Prebe »

I'm not sure what to think about this, but let me throw in a few parameters:

1: Continuous variable calculation (as opposed to discrete)
2: "Run time" addition of new columns to computational models.
"I would have gone to the thesaurus for a more erudite word."
-Hashi Lebwohl
Zarathustra
The Gap Into Spam
Posts: 19711
Joined: Tue Jan 04, 2005 12:23 am
Been thanked: 1 time

Post by Zarathustra »

Wayfriend, senses aren't the issue. A computer transcribes any input into a string of 0s and 1s. We don't. Once it transcribes this input into 0s and 1s, that input becomes a string of symbols just like everything else it handles. Human perception isn't computation. Consciousness isn't computation. But computers can only compute.

Besides, perception has nothing to do with mathematical insight. Any computer's actions are nothing more than a set of computations. Yet Godel's theorem *proves* that no consistent computational system of sufficient power can be complete. In other words, no computer could ever see the truth of its own Godel sentence, by definition. Yet we humans can. Therefore, by definition, humans are doing something that computers could never do, not even in principle. Not even if they were infinitely fast and powerful.

Prebe, I don't know enough about your terminology to comment. Please explain.
My revenge will be success -- DJT
Avatar
Immanentizing The Eschaton
Posts: 61952
Joined: Mon Aug 02, 2004 9:17 am
Location: Johannesburg, South Africa
Has thanked: 19 times
Been thanked: 29 times

Post by Avatar »

Interesting idea. Not sure yet what I think. In general, I've tended to assume AI would eventually be inevitable.

Somewhere, maybe even this thread, I talk about Go (the ancient board game) as being part of the search for AI. As things stand, there isn't a Go program in existence that can beat even a practised human amateur.

A standing prize of 1 million dollars is on offer to the creator of a program that can beat a reasonably skilled human player.

Something to do with it being so far impossible to get the computers to perform effective strategic pattern recognition. (Knowing, in other words, that an unconnected series of placements has the potential to combine into a stable or strong shape that will influence or even dominate the board.)

But what I get from your earlier post, Malik, is that because the computer doesn't know it's trapped in a limited system, and can't think, it can't alter its own system to adapt to new input?

--A
Zarathustra
The Gap Into Spam
Posts: 19711
Joined: Tue Jan 04, 2005 12:23 am
Been thanked: 1 time

Post by Zarathustra »

Av, it's more than the computer not knowing it's trapped in a limited system and can't alter its own system to adapt to new input. I'm not doing a good job of explaining it, because the concepts involved are extremely complex. Have you ever read up on Godel's Theorem?


Back around the turn of the 20th century, the great mathematician David Hilbert challenged the world's mathematicians to formalize all of mathematics. He wanted to find a system that once and for all reduced all mathematical reasoning to a set of algorithms and axioms which could be checked mechanically. The view that this is possible is called Formalism. He wanted to provide a rigorous, logical ground for all mathematics. But Godel's proof showed that this is impossible.

And because it is theoretically impossible, a machine which does nothing but apply axioms according to algorithms (computation) cannot do what humans do: a human (like Godel, or you, or me) can show why completeness is impossible, while the computer itself cannot. In order for the computer to show Godel's proof, it would have to do something else besides pure computation, because computation is exactly what is shown to be limited by his Theorem.
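For reference, the modern textbook statement of the result (my paraphrase, in LaTeX; the second unprovability claim uses Rosser's strengthening of Godel's original argument):

    \textbf{Theorem (Godel--Rosser).} Let $T$ be a consistent, effectively
    axiomatizable formal theory that interprets elementary arithmetic.
    Then there is a sentence $G_T$ in the language of $T$ such that
    \[
        T \nvdash G_T \qquad\text{and}\qquad T \nvdash \lnot G_T .
    \]
    In particular, $T$ is incomplete: $G_T$ is true of the natural
    numbers, yet unprovable in $T$.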

Godel's Theorem--wikipedia
What Gödel showed is that in most cases, such as in number theory or real analysis, you can never create a complete and consistent finite list of axioms, or even an infinite list that can be produced by a computer program. Each time you add a statement as an axiom, there will always be other true statements that still cannot be proved as true, even with the new axiom. Furthermore if the system can prove that it is consistent, then it is inconsistent.

It is possible to have a complete and consistent list of axioms that cannot be produced by a computer program (that is, the list is not computably enumerable). For example, one might take all true statements about the natural numbers to be axioms (and no false statements). But then there is no mechanical way to decide, given a statement about the natural numbers, whether it is an axiom or not.
These are insights that cannot be reached from within the axiomatic system itself. You can't use computations to come to these conclusions, by definition. So the fact that we can come to these conclusions about the nature of math and computation itself proves that mathematical understanding, for humans, is something other than computation.

Does this mean that humans will never be able to build conscious machines? Not necessarily. But it does mean that those machines will have to do something other than mere computation and the processing of algorithms. They won't be "computers," in other words. A Turing machine won't ever be conscious the way we are.

I believe our brains are organic "machines." I don't believe in a soul or spirit. But our brains are doing something different from algorithmic computation--even when we do algorithmic computations--because superimposed upon those mechanical processes is the understanding of what we're doing, the consciousness of what we're doing. It's just like lifting my arm. I can build a robot that lifts its arm. But the robot isn't conscious of the fact that it is lifting its arm. That missing ingredient--conscious understanding--cannot be written down as a mathematical formula or computer program. Which numbers equal consciousness?

Consciousness is something different from number crunching. It is a self-referential quality. In many ways, it mirrors quantum states. Thus, I believe our brain somehow magnifies quantum states to a macro scale. In order to get computers to mimic this, we'll have to go a lot deeper than processing 0s and 1s. We'll have to build quantum computers that access quantum states. At least, that's a theoretically plausible way to build conscious machines. Who knows if it would actually work, or if this is what makes our brains conscious. But that's my hunch.
My revenge will be success -- DJT
Avatar
Immanentizing The Eschaton
Posts: 61952
Joined: Mon Aug 02, 2004 9:17 am
Location: Johannesburg, South Africa
Has thanked: 19 times
Been thanked: 29 times

Post by Avatar »

Good post Malik. I'm gonna have to read it a couple of times though.

I will just say that while I agree that consciousness isn't just number-crunching, we don't know the criteria for it. Isn't it possible that once a certain number of connections have been made, consciousness can arise by itself?

--A
Prebe
The Gap Into Spam
Posts: 7926
Joined: Mon Aug 08, 2005 7:19 pm
Location: People's Republic of Denmark

Post by Prebe »

Malik:
Continuous variable calculation: not having the state of each "neuron" expressed as a 1 or a 0, but as a probability (a real number between 0 and 1) that it will fire, which would compare more closely to the brain.

Adding of columns at run-time: the formation of new connections between neurons, and even the addition of new neurons, while the machine is running. This would emulate brain function quite closely.
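A rough sketch of both parameters together in Python (every name and number here is invented; real neural simulation is far richer): units fire probabilistically rather than as hard 0/1 gates, and the network grows while it runs.

Code:

    import math
    import random

    class Unit:
        # Parameter 1: state is a firing probability, not a fixed 0 or 1.
        def __init__(self):
            self.inputs = []     # list of (source unit, weight) pairs
            self.fired = False

        def step(self):
            drive = sum(w for src, w in self.inputs if src.fired)
            p = 1.0 / (1.0 + math.exp(-drive))   # logistic squash
            self.fired = random.random() < p

    net = [Unit(), Unit()]
    for t in range(100):
        for u in net:
            u.step()
        if t % 10 == 0:
            # Parameter 2: add a neuron and new connections at run time.
            new = Unit()
            new.inputs.append((random.choice(net), random.uniform(-1, 1)))
            random.choice(net).inputs.append((new, random.uniform(-1, 1)))
            net.append(new)

    print(len(net), "units after run-time growth")   # -> 12 units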

Instinctively (and this is only my feeling) I think that the only thing that sets the brain (and not just the human one) apart from a machine is the existence of desire (boiled down to basics: the desire to procreate).
"I would have gone to the thesaurus for a more erudite word."
-Hashi Lebwohl
Zarathustra
The Gap Into Spam
Posts: 19711
Joined: Tue Jan 04, 2005 12:23 am
Been thanked: 1 time

Post by Zarathustra »

So you believe thinking is identical to computation? You don't believe there's anything non-computational about brain function?
My revenge will be success -- DJT
I'm Murrin
Are you?
Posts: 15840
Joined: Tue Apr 08, 2003 1:09 pm
Location: North East, UK

Post by I'm Murrin »

You input stimuli and get out a response, the response being dictated by the structure and chemical composition of the brain matter. It's computation, and differs only in the level of complexity of the system.
Loredoctor
Lord
Posts: 18609
Joined: Sun Jul 14, 2002 11:35 pm
Location: Melbourne, Victoria

Post by Loredoctor »

Murrin wrote:You input stimuli and get out a response, the response being dictated by the structure and chemical composition of the brain matter. It's computation, and differs only in the level of complexity of the system.
Well said. Computers process 0s and 1s, we don't. But we process in parallel just like computers. But the fact of the matter is, we compute 'data'.
Waddley wrote:your Highness Sir Dr. Loredoctor, PhD, Esq, the Magnificent, First of his name, Second Cousin of Dragons, White-Gold-Plate Wielder!
Prebe
The Gap Into Spam
Posts: 7926
Joined: Mon Aug 08, 2005 7:19 pm
Location: People's Republic of Denmark

Post by Prebe »

Loremaster wrote:Well said. Computers process 0s and 1s, we don't. But we process in parallel just like computers. But the fact of the matter is, we compute 'data'.
Well said indeed. If one has to make a case against the similarity, it has to be the discreteness of the binary states that Loremaster mentions. Replacing semiconducting units with chemical cells (similar to neurons) would weed out that difference.

Besides, it's only partly true that there is a difference. A neuron either fires or does not fire an action potential.
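In miniature (a leaky integrate-and-fire toy in Python; the parameters are arbitrary): the membrane potential is a continuous quantity, but the output is all-or-none.

Code:

    # A leaky integrate-and-fire toy. The membrane potential v is
    # continuous; the output spike is all-or-none.
    def simulate(currents, threshold=1.0, leak=0.9):
        v, spikes = 0.0, []
        for i in currents:
            v = leak * v + i          # continuous integration with leak
            if v >= threshold:        # all-or-none action potential
                spikes.append(1)
                v = 0.0               # reset after the spike
            else:
                spikes.append(0)
        return spikes

    print(simulate([0.3, 0.4, 0.5, 0.1, 0.9]))    # -> [0, 0, 1, 0, 0]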

Once again (and today I'm sober) let me throw in some food for thought: imagine that we could remove the brain from the body and keep it working with, say, one input: hearing. Would it still differ significantly from computers?

As I have already indicated, I think that what sets the computer apart from the brain is that the brain is hooked up to (and dependent on) a body which, due to a variety of desires and needs, introduces so much chaos and disorder into the system that it seems unpredictable--which, for all practical purposes, it is.
Last edited by Prebe on Sat Jul 28, 2007 7:15 am, edited 3 times in total.
"I would have gone to the thesaurus for a more erudite word."
-Hashi Lebwohl
Loredoctor
Lord
Posts: 18609
Joined: Sun Jul 14, 2002 11:35 pm
Location: Melbourne, Victoria

Post by Loredoctor »

Prebe wrote:Besides, it's only partly true that there is a difference. A neuron either fires or does not fire an action potential.
Great point.
Waddley wrote:your Highness Sir Dr. Loredoctor, PhD, Esq, the Magnificent, First of his name, Second Cousin of Dragons, White-Gold-Plate Wielder!
[Syl]
Unfettered One
Posts: 13021
Joined: Sat Oct 26, 2002 12:36 am
Has thanked: 2 times
Been thanked: 1 time

Post by [Syl] »

Would you give up your immortality to ensure the success of a posthuman world?
Minsky's talk, "Matter, Mind and Models," dealt with how he thinks the field of artificial intelligence (AI) went off track. He blamed "physics envy" on the part of AI researchers who sought some simple set of principles that would underlie and explain intelligence. This strategy failed, but researchers made a lot of progress in "narrow" AI. Minsky argued that human brains have a lot of different "ways to think" so that if one way doesn't work or solve the problem, it doesn't get stuck. Brains can split problems into parts, simplify, make analogies, and so forth. Current AI programs generally rely on just one main strategy and therefore tend to get stuck. In addition, Minsky claimed that the evolutionarily recent parts of the human brain recognize patterns of activity in other parts of the brain. In particular, those parts of the brain recognize when other parts are trying to solve problems. The brain can reflect on its own activities. Reflection is the missing ingredient in narrow AI research: reinforcement learning networks, rule-based systems, neural networks, and statistical inference.
"It is not the literal past that rules us, save, possibly, in a biological sense. It is images of the past. Each new historical era mirrors itself in the picture and active mythology of its past or of a past borrowed from other cultures. It tests its sense of identity, of regress or new achievement against that past.”
-George Steiner
Zarathustra
The Gap Into Spam
Posts: 19711
Joined: Tue Jan 04, 2005 12:23 am
Been thanked: 1 time

Post by Zarathustra »

So all of you are saying consciousness is a particular computation. I suppose we could write down this computation as a particular formula or algorithm. And all it takes to create consciousness is to make bits of matter "move around" in a way that mirrors the structure of this formula. Thus, the material one uses to create consciousness doesn't matter. You could do it with Tinker Toys, if you had enough of them. Or paper clips. All it would take is some way to switch them from "on" to "off" in order to process the 0s and 1s necessary to carry out the computation. So we could have a person for each paper clip who turns it either vertical (assign that "zero") or horizontal (assign that "one") according to this "consciousness algorithm." There's absolutely nothing about this system of paper clips and human turners that would keep it from calculating the consciousness algorithm. It might be slower and bigger than the electric circuits in a computer, but all that matters is the processing of information according to the algorithm.
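To make that commitment concrete, here is the same trivial computation run on two different "substrates" in Python (the encoding is my own invention): nothing but the labels differs, the computation is identical.

Code:

    # The same trivial computation on two "substrates".
    def xor_bits(a, b):          # substrate 1: plain 0/1 integers
        return a ^ b

    TO_BIT = {"vertical": 0, "horizontal": 1}     # substrate 2: paper clips
    TO_CLIP = {0: "vertical", 1: "horizontal"}

    def xor_clips(a, b):
        return TO_CLIP[xor_bits(TO_BIT[a], TO_BIT[b])]

    print(xor_bits(1, 0))                         # -> 1
    print(xor_clips("horizontal", "vertical"))    # -> horizontal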

So according to the argument you guys are making, this system of paper clips and human turners would necessarily be conscious, because it physically instantiates this computation.

This also means that we don't need to build more complicated computers than we have today. All we have to do is figure out this particular computation that equals consciousness, and run it on our computers. Computers can already run computations much faster than we can, so there's no need to build faster or more complicated computers. We just don't have the right software yet. Even if it took today's computers 100 years to run the algorithm, you'd still have to say that this computer was conscious, because the speed of one's consciousness is irrelevant. They'd just "perceive" time going much faster than we do.

But if consciousness is nothing more than this computation, then why do we have to build a machine to perform it at all? Why isn't it good enough just to write it down on a piece of paper? As far as computations go, there is absolutely no difference between one performed on a computer, and one performed by a mathematician writing it down on a piece of paper. "1+1=2" is exactly the same whether I write it, or a computer computes it. When do computational procedures become different enough from their written, paper counterparts to achieve this mysterious quality of consciousness? How can complexity itself ever distinguish written computations from those carried out by some physical mechanism, if the computation is all that matters?
My revenge will be success -- DJT
I'm Murrin
Are you?
Posts: 15840
Joined: Tue Apr 08, 2003 1:09 pm
Location: North East, UK

Post by I'm Murrin »

Because there's a difference between a process and the symbolic representation of that process?
Zarathustra
The Gap Into Spam
Posts: 19711
Joined: Tue Jan 04, 2005 12:23 am
Been thanked: 1 time

Post by Zarathustra »

But the "process" is nothing more than a symbolic representation. Running electric through wires doesn't make anything conscious. It's the particular symbolic representation that (according to you guys) produces consciousness.
My revenge will be success -- DJT
I'm Murrin
Are you?
Posts: 15840
Joined: Tue Apr 08, 2003 1:09 pm
Location: North East, UK

Post by I'm Murrin »

Your point about writing down a formula makes no sense. It's like saying that by writing down an equation of motion something would actually be moved. There is nothing inherent in the shape of the system alone--it is what the system does when activated and fed stimuli that matters.
Consciousness is a by-product of the process, not a by-product of the computation. And with consciousness, it's the chaos in the system that matters.
That's why I don't think a conventional computer could ever be made sentient. An inherent chaos in the way the computer is built would have to be introduced. (There are already people who try to do this, both with real components and through simulation: taking the building blocks of computers and putting them together in 'random' combinations until, by chance, a particular response to input is found. These first devices are usually very interesting, since it's sometimes hard to work out why they work at all. They then combine them, blending processes and increasing the complexity. It's a simulacrum of organic evolution, and closer to how our brains work than any other kind of computer.)
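A crude sketch of that trial-and-error scheme in Python (the representation and numbers are my own invention): wire NAND gates together at random, keep whatever mutations do no worse, and stop when a circuit happens to compute a target function, here XOR.

Code:

    import random

    # Gate i reads from the two primary inputs (indices 0 and 1) or from
    # any earlier gate; every gate is a NAND.
    def random_circuit(n_gates=6):
        return [(random.randrange(i + 2), random.randrange(i + 2))
                for i in range(n_gates)]

    def evaluate(circuit, a, b):
        values = [a, b]
        for x, y in circuit:
            values.append(1 - (values[x] & values[y]))   # NAND gate
        return values[-1]                                # last gate is output

    def fitness(circuit):                                # target: XOR
        return sum(evaluate(circuit, a, b) == (a ^ b)
                   for a in (0, 1) for b in (0, 1))

    best, tries = random_circuit(), 0
    while fitness(best) < 4:
        tries += 1
        if tries % 1000 == 0:
            best = random_circuit()           # occasional fresh start
        mutant = list(best)
        i = random.randrange(len(mutant))
        mutant[i] = (random.randrange(i + 2), random.randrange(i + 2))
        if fitness(mutant) >= fitness(best):  # drift along plateaus too
            best = mutant
    print("stumbled on a NAND circuit for XOR after", tries, "tries")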
Zarathustra
The Gap Into Spam
Posts: 19711
Joined: Tue Jan 04, 2005 12:23 am
Been thanked: 1 time

Post by Zarathustra »

Murrin wrote:Your point about writing down a formula makes no sense. It's like saying that by writing down an equation of motion something would actually be moved.
And yet you're saying that a simulation of consciousness is the same thing as consciousness, aren't you? That's the whole point. This is the problem you get into when you call consciousness a computation. Computations don't change depending on how you perform them. My example was supposed to make no sense, because I believe it makes no sense to say that consciousness is a computation, or that the simulation of consciousness is the same as consciousness itself.
Consciousness is a by-product of the process, not a by-product of the computation. And with consciousness, it's the chaos in the system that matters.
Okay, I like where you're going there. It sounds like you're saying that consciousness is not a computation, and that chaos (or at least non-computational processes) are involved in its production. I'd agree with that. And that's why you can never produce consciousness with a computer--because computers can only run algorithms (computations).
That's why I don't think a conventional computer could ever be made sentient.
Bingo. We're on the same page here.
An inherent chaos in the way the computer is programmed would have to be introduced.
No, genuine randomness can't be produced with an algorithm; that inherently contradicts the definition of algorithmic randomness. If a system can be algorithmically compressed--if you can find an algorithm that generates it--then it isn't random, no matter how chaotic it looks. Deterministic "chaos" is still an algorithm.
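A quick illustration of the compressibility point (the logistic map is the textbook example; the specific numbers are arbitrary): the output looks disordered, yet the entire sequence compresses to a one-line rule, so it is not random in the algorithmic sense.

Code:

    # The logistic map: wild-looking output from a one-line deterministic
    # rule. Because this short program regenerates the whole sequence,
    # the sequence is algorithmically compressible -- not truly random.
    def logistic(x0=0.123, r=4.0, n=10):
        x = x0
        for _ in range(n):
            x = r * x * (1 - x)
            yield x

    print([round(x, 3) for x in logistic()])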
My revenge will be success -- DJT
I'm Murrin
Are you?
Posts: 15840
Joined: Tue Apr 08, 2003 1:09 pm
Location: North East, UK
Contact:

Post by I'm Murrin »

Algorithms are too formalised, too logic driven.
What I mean is that an element of random chance in the placement of transistors in the computer would allow for interesting effects that can't be predicted. However, that means a hell of a lot of trial and error, so practical application of the idea is difficult for two reasons: 1) trial and error by hand would take longer than anyone could ever spare to produce results, and 2) simulating it by computer, while getting you further along initially, would be inherently limited, because the simulator is necessarily a traditional, logic-based computer, and so incapable of operating in--or simulating operation in--the ways the 'organic' system would. In other words, though some people believe this would be the best chance of achieving the desired result, it's pretty much impossible to accomplish.