I'm Murrin wrote: Define self-awareness.
Well, that's part of the problem. It's like the Supreme Court's definition of porn: I know it when I see it. But the only one I can see is my own, which gives little reassurance when faced with the problem of other minds. Still, I think reasoning by analogy, evolution, and (in this case) Occam's Razor work just fine to justify the belief that other humans have minds and are self-conscious. Analogy doesn't work with machines.
But to get a little more rigorous, I'd define self-awareness as the willful, knowing control over one's own intentionality (i.e. the directedness of consciousness upon objects of consciousness). That means one can direct his intentionality upon the phenomenon of his own intentionality. You can become aware of your awareness, and even aware of this awareness-of-your-awareness. However, at each stage there is always a sort of "bracketing" of the previous awareness to make it an object, and the "pure subject" itself is never fully grasped or viewed, because grasping it makes it yet another object, which is still "viewed" by the subject, which remains distinct. Though this leads to an infinite regress, a "dog chasing his own tail" sort of mental state, the realization of this infinite regress is itself a kind of indirect awareness of the whole self.
So, obviously, this is much more than recognizing your physical self in a mirror. It is recognizing the phenomenon of your own directed attention, both the object and the subject which perceives the object, and the simultaneous distinction-and-union of these two as a phenomenal event. No robot does anything remotely similar to this, because robots have no inner lives or consciousness of which to be aware.
I'm Murrin wrote: A human mind is only a more complex computational machine.
If that were true, so many of us wouldn't struggle so hard to learn math. And we wouldn't make computational errors ... which computers don't do (if they err, it's because we programmed them incorrectly). Computers calculate by blindly following an algorithm. While we can mimic this behavior--and we'd have to, otherwise we couldn't design and program computers--this isn't necessarily how we do math (as Vraith and Hashi also point out). We understand that 2+2=4 due to apodictic certainty, not because we've been programmed to output this result by churning through an algorithm.
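To make the "blindly following an algorithm" point concrete, here is a minimal sketch of a machine deriving 2+2=4 by mechanically applying the Peano recursion rules for addition. The code is my own illustration (the names `succ` and `peano_add` are invented for this example, not from any library): the program reaches the correct output by pure rule-following, with nothing in it corresponding to understanding.

```python
# A machine "computes" 2 + 2 = 4 only by mechanically rewriting terms
# according to the Peano rules for addition:
#   add(a, 0)       = a
#   add(a, succ(b)) = succ(add(a, b))
# (Illustrative sketch; function names are hypothetical.)

def succ(n):
    """Successor function: the next natural number."""
    return n + 1

def peano_add(a, b):
    """Add by blind recursion on the second argument."""
    return a if b == 0 else succ(peano_add(a, b - 1))

print(peano_add(2, 2))  # prints 4
```

The correct answer falls out of symbol manipulation alone; at no step does the program grasp *why* 2+2=4, which is exactly the contrast with our apodictic certainty.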
Back before computers, when the clock was the pinnacle of mechanical precision, it was our standard model for natural things that display order. We'd use it as an analogy for the universe, or for our minds. Now that we have computers, our analogy has switched to this newer symbol. But it's still just a symbol. Our minds aren't computational machines, any more than the weather is a mathematical simulation, or a landscape is a map.
I'm Murrin wrote:To be clear, I'm referring to the ability not only to recognise its own shape, but to distinguish between itself in a mirror and other identical machines. That's exactly how they test for self-awareness in animals.
But you're making an analogy between a biological, living organism, the product of the self-organizing principles of evolution, and a manufactured artifact. The same conclusions can't be drawn between the two, any more than we can use the order of the universe to prove that there is a "Grand Watchmaker" who designed it. You might as well say that the people in movies are living, conscious beings merely because they look like it onscreen, or that the characters in books are independent, conscious beings because they feel so real. There has to be a difference between simulation and reality; otherwise these words have no meaning, and you could just as easily conclude that our own self-awareness is merely a simulation. Philosophical zombies.