Malik23 wrote:Ok, we agree that brains didn't become conscious due to developing a sufficient level of computational ability. Given that agreement, perhaps you can see why I insist that there's no reason to expect a sufficiently complex computer should also develop this mysterious quality, when computation had nothing to do with the development of every single instance of consciousness in the history of our planet. It's never happened before, so there's no reason to expect it now. [And in fact, Penrose gives a good argument why it's impossible even in principle.]
Yes, I agree with all this. A computer is not the answer to AI because it can compute. But I think it is possibly the answer because it can compute, and do so many other things we can do. What do our brains do? Like I said before, they compute; remember; perceive; etc. What else can be made to do these things? Other than the brains of other species, which we are not capable of changing - i.e., forcing them to evolve - computers are the best thing that comes to mind. We can already make computers do the things I just listed, and already do them better than we can. The more we learn about ourselves, the more things we can program computers to do. And maybe one day we'll have happened upon the (or a) combination that gives the computer awareness.
Or not. Just an idea.
Malik23 wrote:Fist and Faith wrote:But it's all the physical brain. Wire anyone up to sensors that detect brain activity, and we would not have any thought or feeling that didn't register as brain activity.
I have no idea how we are conscious, so all I can do is play devil's advocate here. What if our thoughts and consciousness arise due to quantum effects, and our brain is a particular kind of material which can amplify quantum effects to macro scales? What if our thoughts and consciousness "happen" in the same "realm" in which an electron exists prior to measuring it and collapsing its wave function? Sure, the measurement will appear on our instruments. But prior to the measurement, the electron existed in an entirely different way. So when we measure the activity of consciousness in the brain, we're doing something similar. We're looking at the effect consciousness has upon matter, rather than the cause of consciousness.
Is there anything to back up this theory? It seems like asking too much, imo. I'm not even sure how to word my objection... How the heck would my perception of chocolate ice cream near me translate down to the quantum level, at which point a bunch of quantum events would occur, which would translate back to macro-level, making my brain tell my hand to grab the ice cream? What I mean is - why would there be a correlation between quantum events and ice cream? What does the quantum reality care about macro stuff?
But I realize I may be completely misunderstanding your line of thought.
Malik23 wrote:If this were truly how we're built, there'd be no place for choice or freewill. Hell, you wouldn't even need consciousness. There'd be no reason for evolution to have produced this "illusory" quality, because our bodies would function exactly the same without it. Indeed--what external factor of natural selection could select the existence of a completely immaterial sensation of consciousness? If everything we do happens because of material causes in our brains, then consciousness is completely superfluous and unneeded. If we merely take in stimuli (input) and react according to physical rules hardwired in our brains (output), then we're already nothing more than computers . . . and consciousness still hasn't been explained. It sits completely outside of that loop.
I agree that free will is not illusory. An illusion of free will is no easier to explain than the real thing. And if free will is not necessary, why would evolution produce an illusion of it? Seems silly to me. No, until I have reason to think otherwise, I'll assume it is what it seems to be.
Malik23 wrote:In addition, consciousness isn't merely "spread out" in space like a gas in the brain. Nor is it spread out like information in RAM. (Each bit of info in RAM has a memory address, a specific point in space.) Consciousness isn't spatial in the sense of being located in space or having a shape (e.g. "spread out"). It isn't a phenomenon that can be tracked through space.
I agree. What I'm suggesting is that consciousness is not a specific function of the brain, nor even something that can be found in any particular area of the brain. I don't think consciousness is a phenomenon that looks at memory, perception, etc., and combines them in whatever ways it wants. I think consciousness is the interaction of memory, perception, etc. Hey, I know nothing about how the brain works, so I could be entirely wrong. And I don't even have a half-baked theory about how these things came to interact at all. I just think many different abilities developed in the mass of goo that was the primitive brain; and that the right combination eventually came about, so that they became a unit in more ways than simply:
Perceive something in the environment --> Remember what happened last time that thing was perceived --> React.
They were now a unit that was aware of the process. Then, they became a unit that was aware of itself.
Malik23 wrote:Computers don't cause symbols to be put onto paper. Humans do. Computers + printers are just fancy pencils.

They are tools humans use to "write," i.e. manipulate symbols. Now whether you manipulate these symbols according to rules you store in your head, or with rules you store in a computer, it makes no difference. The rules came from us. The computer didn't invent them. Figuring out the rules is the calculation. Having the computer apply those rules is just like having our hand apply those rules. It's a mechanical process (that's why a machine can do it).
Symbols on paper are just as capable of performing calculations as a computer. Sure, the input device (pencil) and output device (paper) need some instructions in order to interact correctly. But so does a computer. Its input and output won't make sense or perform any calculation without explicitly written rules. In both cases, those instructions come from us. Conscious beings. We told the computer how to manipulate the symbols, just like we "tell" the pencil and paper how to manipulate the symbols. Both machines are different ways to store these symbols. The only difference with computers is that we can also store the rules of how to manipulate the symbols in the same medium as the symbols themselves. Granted, that's an amazing difference. It frees us from having to constantly apply the rules ourselves. But since these rules are themselves only more symbols, they are no different from the symbols they manipulate. The "computation" which is happening in the computer is just another form of humans writing. It's like writing with dominos that you've pre-arranged to spell words. You don't actually write the words yourself; you just push the first domino. But since you've pre-arranged it to write a certain output through this mechanical process, it appears as if the message is writing itself. Another way to say it: you could build a domino computer.
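Malik's point - that the stored rules are themselves only more symbols, kept in the same medium as the data they manipulate - can be sketched in a few lines of Python (a toy example of my own, not from the thread):

```python
# The "rules" for adding two numbers are stored as symbols (a string),
# in the same medium as the data they act on.
rules = "result = a + b"

# The data is just more symbols in that medium.
data = {"a": 2, "b": 2}

# A mechanical process applies the rules to the data. Nothing here
# "understands" addition; the understanding came from whoever wrote the rules.
exec(rules, {}, data)

print(data["result"])  # -> 4
```

The machine pushes the first domino, so to speak; the programmer arranged everything that follows.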
Nope, I don't buy it.

Write "2 + 2 =" on a piece of paper, leave the pencil right on the piece of paper, and the two items will never write the answer. Nor will they ever calculate the answer, but be unable to write it.
I can program a computer to add whatever numbers I give it. And I can program it to display the answer in any number of ways, including writing it on a piece of paper.
I can even program it to find its own addends, and add them up. Or I can give it the ability to do more than add. Heck, it can choose to count the number of leaves on a tree, count the number of trees, and calculate the number of leaves in the whole forest.
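A rough sketch of what "find its own addends and add them up" might look like (my own toy example; the leaf counts are invented):

```python
# Toy sketch: once programmed, the computer gathers its own addends
# (here, pretend leaf counts per tree) and totals them with no further
# input from us.
leaves_per_tree = [120, 98, 143, 110]  # hypothetical counts it "collected"

trees_in_forest = len(leaves_per_tree)
total_leaves = sum(leaves_per_tree)

print(trees_in_forest, total_leaves)  # -> 4 471
```

The pencil and paper could never do this step on their own, which is the difference being argued here.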
No, no computer would be able to do anything without having been programmed by people. But once it is programmed by people, it can do its task without our input. So we program it to be capable of more and more things.