Wayfriend wrote:
It's not "simulating walking", it's walking. (I'm not talking about the awareness of walking, I'm talking about the physicality of walking.)
You're right. But that's an external action. We're talking about the assumption of an inner quality (consciousness) based solely on the external appearance of a computer's output. That's completely different from walking. There is no inner component to physical or mechanical actions.
Wayfriend wrote:
. . . we don't know how we think! So at this time what we are doing is copying . . . So of COURSE it looks like we can only simulate, and never actually create. We know almost nothing yet about what we are trying to create.
I agree completely. I'm not saying that we won't ever build conscious machines. It just won't be until we understand how our own consciousness arises from matter. And those machines will NOT be classical computers.
iQuestor wrote:
I believe thought and consciousness are metaprocesses, if you will, of brain mechanics. Their form and structure lie above and between the synapses, emerging from the collective biomechanisms that govern instinct and sensory controls and everything else our grey matter does for us.
I agree. I think consciousness is a holistic phenomenon. But not only is the sum greater than the parts, I don't think we even understand the parts. It's more than just neurons firing electrical signals. The Penrose book I keep mentioning, SHADOWS OF THE MIND, talks about the cytoskeleton--specifically microtubules, structures within neurons whose organization depends on individual molecules. These structures are small enough to retain quantum effects. Computers rely upon classical physics. But consciousness behaves a lot more like a quantum phenomenon.
Wayfriend wrote:
Walking is an abstract.
How are physical, external actions abstract?
Cail wrote:
We're using our bias to define what thinking, consciousness, and intelligence mean.
Of course we are. What's wrong with that? That's like saying we're using our own sun to study how stars generate light and heat. Why wouldn't we look at the only things we know are conscious in order to determine what consciousness is? On the other hand, why would we assume that AI computers were conscious when they have nothing in common with the one example of consciousness we know so well? I'm not saying that we should let our bias blind us into dismissing out of hand the question of consciousness in a nonhuman entity. I'm saying that we shouldn't let our fear of making a bias-based mistake drive us to give machines the benefit of the doubt simply so we don't appear closed-minded. That's not a good enough reason to attribute consciousness to a machine.
Loremaster wrote:
Our neural networks compute - process - data.
Our neural networks may be capable of carrying out the computations we perform in our thoughts, but computation isn't what neurons do. Sensory input isn't data. Data is pure information: the abstract formalization of input into binary numbers. A photon striking my retina isn't data. The electrical impulses which register this impact aren't data, either. Nowhere in our neurons is physical input translated into information. That happens at a higher level than the neurons themselves: in our mind, our thoughts, our understanding.
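Here's a minimal sketch of the distinction I'm drawing (in Python, assuming a hypothetical 0-5 V sensor and an invented 8-bit encoding; this is not a model of the retina). The voltage is just a physical magnitude; the binary number only comes into existence once someone imposes a formal convention on it:

```python
# A physical magnitude becomes "data" only under a convention we impose.
# The sensor, voltage range, and encoding are all invented for illustration.

def encode_as_byte(voltage: float, v_min: float = 0.0, v_max: float = 5.0) -> int:
    """Quantize a voltage into an 8-bit integer under a chosen convention."""
    clamped = min(max(voltage, v_min), v_max)
    return round((clamped - v_min) / (v_max - v_min) * 255)

reading = 3.3                       # the physical event: just a magnitude
datum = encode_as_byte(reading)     # "data" exists only relative to this scheme
print(datum, format(datum, "08b"))  # prints: 168 10101000
```

Change the convention (a different range, a different bit width) and the "same" physical event yields different data--which is the sense in which the information lives in the formalization, not in the physics.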
But turning sensory input into formalized information is just one of the things we do with our consciousness. Most of what we think, feel, and understand has nothing whatsoever to do with computation. You're just assuming that our brains act like computers. But love isn't built out of computations, no matter what is happening on the neuron level. And free will isn't, either. Irrational processes do not derive from computation.
Fist and Faith wrote:
It is difficult for us to even discuss this stuff, much less create AI, because of definitions. I am speaking of thinking-machines. I do not consider calculators to be intelligent to the slightest degree. The issue I'm talking about is creating something that can make decisions; has opinions; has free will (And to make the discussion even more difficult, Loremaster does not think free will exists); recognizes its own existence - then denying it the freedom to act on those abilities. This creation of ours need not be able to do complex calculations, have a perfect and/or huge memory, or any of the things we usually think of when we think of computers. If we don't want to call it AI, that's ok with me.
Can you give a fairly comprehensive definition of the kind of intelligence you are talking about?
I agree that definitions are a large part of the problem. People have been hearing the term "artificial intelligence" for so long, and reading about it in fiction, that they've started to believe that this term means computers will be able to think like we do--or think at all. AI has never been about making conscious machines, because we don't have the slightest clue how to produce consciousness. No, AI has always been about mimicking human actions, the output of our conscious thought. Creating machines which can respond "intelligently" to their environment is a completely separate issue from creating a machine which has a mind. A mind is something which can NEVER manifest itself in external action. Subjectivity itself can never be externalized.
If you're talking about creating something that has opinions and free will, then you're not talking about AI. We have no idea how to create a machine with opinions. No one is even working on that problem. (Why on earth would you need a machine with opinions?) Nor are we working on creating machines with free will. Free will is different from simply making decisions. As I said, our computer software already makes decisions all the time. My computer does tons of stuff that I don't tell it to do. It monitors programs which attempt to use the Internet, and then decides whether or not to let them access it. But this decision process isn't free will. Nor can any algorithm constitute free will--because then it wouldn't be free. Free will allows us to act irrationally. Computers can't act irrationally unless they are malfunctioning. Free will isn't a malfunction.
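To make the contrast concrete, here's a minimal sketch (in Python; the program names and rules are invented for illustration, not any real firewall's logic) of the kind of "decision" such software makes. Given the same inputs it produces the same verdict every time; nothing in it could have done otherwise:

```python
# A toy firewall "decision": pure rule-following, no volition.
# Program names and rules here are invented for illustration.

ALLOWED = {"browser.exe", "mailclient.exe"}   # whitelist
BLOCKED = {"spyware.exe"}                     # blacklist

def allow_internet_access(program: str) -> bool:
    """Deterministically map a program name to an allow/deny verdict."""
    if program in BLOCKED:
        return False   # rule 1: blacklisted programs are denied
    if program in ALLOWED:
        return True    # rule 2: whitelisted programs are allowed
    return False       # rule 3: everything else is denied by default

# The same input always yields the same verdict; the function cannot
# "choose otherwise", let alone act irrationally.
assert allow_internet_access("browser.exe") is True
assert allow_internet_access("unknown.exe") is False
```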
My "comprehensive" definition of intelligence began with my Einstein example. It includes insight and understanding: the ability to perceive the inner workings of our world, to peel back the layers of appearance and prejudice to see deeper truths. Today, my 6-year-old asked, "Why are there sewers underground?" I think that the ability to ask questions, to be curious, to challenge one's conception of reality--that's intelligence. It has nothing to do with blind mechanical processes. And that's why blind mechanical processes aren't intelligent. They can be rational. But rationality isn't intelligence. Rationality is merely a tool we use, one that can be employed without consciousness at all. Insight and understanding, by contrast, cannot occur without someone there to do the understanding. Thus intelligence requires consciousness. It is something that only conscious creatures can acquire (though they don't have to acquire it--many conscious creatures aren't intelligent). But while consciousness is a prerequisite for intelligence, the appearance of intelligence doesn't itself imply consciousness. And the appearance of intelligence is all that computers can manage--unless you build consciousness into them from the beginning. And we don't have a clue how to do that.