Page 3 of 5
Posted: Tue Apr 09, 2013 4:58 am
by Avatar
peter wrote:A computer will be self-aware at the point that it is able to make us believe that it is self-aware.
And if it was self-aware, why would it want to do that?
I think an important question is not how we define self-awareness, but what causes it.
--A
Posted: Tue Apr 09, 2013 9:07 am
by peter
Mmm.....one of the facets of self-awareness is the vanity of requiring that you be recognised as such by other beings that (you deem) are also likely to be so. One reason you might try to alert your alien abductors to the fact as they approach you with their vivisection instruments - somehow (you think) it makes you more important.

[As I get older, alas, I'm beginning to think more and more that this might be a fallacy]
Posted: Tue Apr 09, 2013 12:21 pm
by ussusimiel
One of the interesting things about the AI debate is that it will more than likely eventually be settled by the creation of a self-aware machine. Because of that I am slow to give hostages to fortune by committing myself either way.

(I hate being proven wrong!)
However, a couple of obstacles present themselves when I think about the subject. The first is the problem of emotions, which, IMO, is directly related to sensation and feeling. For me, one of the defining things about life (at the moment) is that it is all organic. While I think that powerful computer programs can simulate life (the Turing Test), that is a whole order (or ten) of magnitude away from actually being alive.
The second follows from the first because I am not at all sure that self-awareness can arise from anything other than life. I can imagine a computer becoming self-aware (I've read it in numerous sci-fi novels) but when I think about it more rationally it seems to me to be an almost pure 'fantasy'. (Lots of novels talk about the risk of the Singularity that is present when machine consciousness arises and I think that this is a recognition of the absence of life in such circumstances).
u.
P.S. This topic might be worthy of a Close thread of its own.
Posted: Tue Apr 09, 2013 1:33 pm
by I'm Murrin
Avatar wrote:peter wrote:A computer will be self-aware at the point that it is able to make us believe that it is self-aware.
And if it was self-aware, why would it want to do that?
I think an important question is not how we define self-awareness, but what causes it.
--A
There are a number of animals out there that are self-aware but not intelligent enough to deliberately show or hide it. Technically, we already have self-aware machines, too (have you seen the robot that looks like a toddler, can recognise itself and others of the same kind, and communicates through dance movements that it invents on its own? Wish I could remember what it's called to find the video).
I think maybe in "self-awareness" you're discussing the wrong thing.
Posted: Tue Apr 09, 2013 2:18 pm
by Zarathustra
peter wrote:A computer will be self-aware at the point that it is able to make us believe that it is self-aware. To introduce the idea that it is a 'zombie' mimicking self-awareness is to introduce complication where none is required, and by the maxim of Occam's Razor (which has been vigorously defended in another place) must be ruthlessly struck out!

I disagree. There's a difference between a designed, artificial machine and an organism which has evolved through natural, purposeless processes. We wouldn't expect nature to produce animals that mimic self-awareness if they didn't actually have self-awareness, because this is superfluous complication which doesn't confer survival advantages to the organism itself, but instead produces an illusion for an unknown, unspecified audience. In fact, assuming that there is no intelligent designer guiding evolution, it seems much more difficult for nature to produce this illusion of self-awareness than self-awareness itself. Whom was nature trying to fool? What was the "intended" audience for this performance, if not other self-aware beings? It's like a painting done by blind artists, for blind viewers. It makes no sense, and requires an unlikely and external ordering mechanism to make the paintings look like anything.
However, with computers we have precisely this missing ingredient (us) which would make the simulation of self-awareness far from superfluous or unlikely. In fact, we're currently trying to design it, so it's not unlikely at all. We have no idea how to design self-awareness itself, but we do have ideas how to mimic it. So we attempt that. And every stage up to the point of being convincing (e.g. passing the Turing Test) could be fairly described as mimicry without violating Occam's Razor. There is no need to suppose self-awareness for a computer that demonstrates no signs of self-awareness.
So if we take your point as true, then we'd have to ask ourselves: what changes in that progression from not-convincing to convincing would demand that we suddenly discount our knowledge that self-awareness mimicry was indeed being attempted, and assume that this attempt is no longer as important as the apparent result? That's an abuse of Occam's Razor reasoning, to ignore known complications for the illusion of simplicity.
We'd know that computers are self-aware when we know what makes things self-aware, and then apply those mechanisms to build a "machine" similar to our own brains. You wouldn't need a performance or a Turing Test if you actually knew what produces consciousness (as Avatar has said, as well).
Murrin wrote:
Technically, we already have self aware machines, too (have you seen the robot that looks like a toddler, can recognise itself and others of the same kind, and communicates through dance movements that it invents on its own? Wish I could remember what it's called to find the video).
No, technically, we have machines that are self-referential. That's not the same as being self-aware, any more than this sentence is aware merely because I've constructed it to refer to itself.
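To make the distinction concrete, here is a toy sketch (plain Python, nothing assumed beyond an ordinary object) of a structure that "refers to itself" with no awareness anywhere in sight:

```python
# A minimal sketch of self-reference without self-awareness.
# The object below "refers to itself", but there is no inner
# life doing the referring -- just a pointer in memory.

class Node:
    def __init__(self, label):
        self.label = label
        self.itself = None

n = Node("this node")
n.itself = n  # the node now refers to itself

print(n.itself is n)         # prints True: the reference is circular
print(n.itself.itself is n)  # prints True: and so on, indefinitely
```

The circularity is complete and mechanical, which is precisely why it proves nothing about awareness.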
Posted: Tue Apr 09, 2013 5:43 pm
by I'm Murrin
Define self awareness. A human mind is only a more complex computational machine.
To be clear, I'm referring to the ability not only to recognise its own shape, but to distinguish between itself in a mirror and other identical machines. That's exactly how they test for self-awareness in animals.
Posted: Tue Apr 09, 2013 6:45 pm
by SerScot
Murrin,
We're far more than that, as shown by the fact that we can discuss, and ponder, whether or not consciousness exists at all.
I would very much like to believe that I'm more than a facilitator for those interesting little self-replicating molecules in my body. And that my ability to ponder these questions indicates more than an odd mutation that doesn't add much to the universe at large.
Posted: Tue Apr 09, 2013 7:14 pm
by ussusimiel
SerScot wrote:We're far more than that, as shown by the fact that we can discuss, and ponder, whether or not consciousness exists at all.
I agree. This is why I mentioned the organic basis of life as we know it at the moment. We are still far from understanding the mysteries of matter and, IMO, this will be an essential step in eventually fully understanding consciousness (and then, maybe, replicating it).
And I also reckon that it is 'consciousness' rather than 'self-awareness' that we are actually discussing (which is why the Close is the correct place for this thread).
u.
Posted: Tue Apr 09, 2013 8:08 pm
by Vraith
In a number of ways I'm going to agree with much of the stuff in Z's post... [especially since he said, perhaps more clearly than I did, that creating a seemingly real illusion of the real is a fuckload more difficult and silly, in nature, than the real thing itself, even if that real is a step removed by the demands of subjectivity] though I'm not sure that where I go with it will be approved of by said Z.
We are trying to build [the AI folk, anyway] by coding...which is top-down determination...without knowing what it IS, only what it appears to do.
Ignoring the events/evolutions that led to it. Ignoring its interactive nature. Among other things.
I mean we can build...hell at this point, any reasonably savvy 6 year old can build...a calculator that is better, faster, stronger at mathematical calculations than any un-enhanced human [yes, even those folk like "rainman"] will ever POSSIBLY be.
BUT: how we use math, how we invented/discovered math, how we even "do" arithmetic [those that CAN "do" it...] is, as far as we can tell, different in every possible way from how the "machine" does the same function. It merely EXECUTES. Mathematicians and coders UNDERSTAND.
Self-awareness, by itself, is not enough...and self-awareness is not definable as one specific trait that can be isolated. It is a web of many.
Abusing grammar for a second, it is a kind of nexuum. The genitive plural of nexus...except in this..hah..."case"...applied to action, to verbality, instead of nouns.
[YES, there are much clearer and more accurate ways of saying that...I don't care, I don't feel like being that academic, and besides I like it that way]
As I've said elsewhere I think, even very primitive lifeforms "know" the difference between self and food.
That is a "kind" of self-aware.
It scales, with tipping points, heading higher.
But "other-awareness" becomes essential at some point. Empathy. The ability to model outside and including the self. Conceptual machines, "what if's?" Verbal Nexuum. The ability to teach and learn AND, more importantly to teach teaching and to learn learning.
In general, few are pursuing that...but some are. Mostly neuro folk with comp/robot folk. And I believe we are really only one particular insight, and very few years, from the transformation.
We are in/near an Einstein moment. The weight and data of what we know...things that explain everything in one way, but make everything absurd in another...is crushing everyone investigating...until a diamond mind, a seed-crystal idea, metamorphs it all.
Posted: Tue Apr 09, 2013 8:36 pm
by Hashi Lebwohl
Vraith wrote:We are trying to build [the AI folk, anyway] by coding...which is top-down determination...without knowing what it IS, only what it appears to do.
Ignoring the events/evolutions that led to it. Ignoring its interactive nature. Among other things.
I mean we can build...hell at this point, any reasonably savvy 6 year old can build...a calculator that is better, faster, stronger at mathematical calculations than any un-enhanced human [yes, even those folk like "rainman"] will ever POSSIBLY be.
BUT: how we use math, how we invented/discovered math, how we even "do" arithmetic [those that CAN "do" it...] is, as far as we can tell, different in every possible way from how the "machine" does the same function. It merely EXECUTES. Mathematicians and coders UNDERSTAND.
I can address this point, since I had a project once to design a calculator that would accept numbers of any size (well, within reason--256 digits, or whatever the limit of Excel spreadsheets was at that time) and perform arithmetic calculations on them, including square roots. You would be surprised how many college-educated math majors do not know how to calculate square roots by hand...but I do. Most of them rely so much on computers or websites like WolframAlpha that they are forgetting "basic" knowledge like this.
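For the curious, the pencil-and-paper square-root method I mean can be sketched in a few lines (a toy illustration, not my original spreadsheet project). It is the classic "digit by digit" long-division method, done on arbitrary-size integers so it works well past ordinary float precision:

```python
# The classic pencil-and-paper ("digit by digit") square-root
# method, working on Python's arbitrary-precision integers.

def hand_sqrt(n: int) -> int:
    """Integer square root (floor) via the long-division method."""
    if n < 0:
        raise ValueError("square root of negative number")
    # Split the number into pairs of digits, most significant first.
    digits = str(n)
    if len(digits) % 2:
        digits = "0" + digits
    pairs = [int(digits[i:i + 2]) for i in range(0, len(digits), 2)]

    root = 0       # the root built up so far, one digit per pair
    remainder = 0  # the running remainder
    for pair in pairs:
        remainder = remainder * 100 + pair
        # Find the largest digit d with (20*root + d) * d <= remainder.
        d = 0
        while (20 * root + d + 1) * (d + 1) <= remainder:
            d += 1
        remainder -= (20 * root + d) * d
        root = root * 10 + d
    return root

print(hand_sqrt(152399025))  # prints 12345, since 12345**2 == 152399025
```

Exactly the procedure you would run by hand: bring down two digits, find the next digit of the root, subtract, repeat.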
Nevertheless, you are correct--computers only compute, they do not understand. We could even build computers sophisticated enough that we could show them a picture of a flower and it could regurgitate an "appropriate" written response that appears to include an emotional reaction, but computers will never be sophisticated enough (I suspect) to have an emotional reaction to the smell of hearty Italian food coming out of a brick oven. Sure, it could analyze the chemicals in the air and possibly deduce what it is, but the computer couldn't feel hunger or anticipate tasting the food. The whole Turing test thing, although advanced, still only scratches the surface.
I recall reading an article about a database that was programmed to make links between information and it eventually managed to recognize the image of a cat as "a cat" and, later on, it even asked "am I a computer?" but that isn't self-awareness and it certainly isn't consciousness.
Posted: Tue Apr 09, 2013 9:31 pm
by Zarathustra
I'm Murrin wrote:Define self awareness.
Well, that's part of the problem. It's like the Supreme Court's definition of porn: I know it when I see it. But the only one I can see is my own. So that gives little reassurance when faced with the problem of other minds. But I think reasoning by analogy, evolution, and (in this case) Occam's Razor works just fine to justify the belief that other humans have minds and are self-conscious. Analogy doesn't work with machines.
But to get a little more rigorous, I'd define self-awareness as the willful, knowing control over one's own intentionality (i.e. the directedness of consciousness upon objects of consciousness). That means one can direct his intentionality upon the phenomenon of his own intentionality. You can become aware of your awareness, and even aware of this awareness-of-your-awareness. However, at each stage there is always a sort of "bracketing" of the previous awareness to make it an object, and the "pure subject" itself is never fully grasped or viewed, because doing so makes it yet another object, which is still "viewed" by the subject, which remains distinct. Though this leads to an infinite regress, a kind of "dog chasing his own tail" type of mental state, the realization of this infinite regress is itself a kind of indirect awareness of the whole self.
So, obviously, this is much more than recognizing your physical self in a mirror. It is recognizing the phenomenon of your own directed attention, both the object and the subject which perceives the object, and the simultaneous distinction-and-union of these two as a phenomenal event. No robot does anything remotely similar to this, because they don't have inner lives or consciousness of which to be aware.
I'm Murrin wrote: A human mind is only a more complex computational machine.
If that were true, so many of us wouldn't struggle so hard to learn math. And we wouldn't make computational errors ... which computers don't do (if they err, it's because we programmed them incorrectly). Computers calculate by blindly following an algorithm. While we can mimic this behavior--and we'd have to, otherwise we couldn't design and program computers--this isn't necessarily how we do math (as Vraith and Hashi also point out). We understand that 2+2=4 due to apodictic certainty, not because we've been programmed to output this result by churning through an algorithm.
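To make the "blind algorithm" point literal, here is a toy sketch (illustrative only) of how a machine adds: pure bit-shuffling, with no notion of quantity anywhere in the procedure:

```python
# A toy illustration of "blind" mechanical calculation: addition
# performed purely by bit manipulation (the ripple-carry idea),
# with no concept of quantity anywhere in the procedure.

def blind_add(a: int, b: int) -> int:
    """Add two non-negative integers using only XOR, AND, and shifts."""
    while b:
        carry = (a & b) << 1  # positions where a carry is produced
        a = a ^ b             # sum of the bits, ignoring carries
        b = carry             # feed the carries back in
    return a

print(blind_add(2, 2))  # prints 4
```

The machine arrives at 4 without anything in the loop "knowing" it is adding; we understand that 2+2=4, while the procedure merely terminates.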
Back before computers, when the clock was the pinnacle of mechanical precision, it used to be our standard model for natural things that display order. We'd use it as an analogy to the universe, or to our minds. And now that we have computers, our analogy has switched to this newer symbol. But it's still just a symbol. Our minds aren't computational machines, no more than weather is a mathematical simulation, or landscape is a map.
I'm Murrin wrote:To be clear, I'm referring to the ability not only to recognise its own shape, but to distinguish between itself in a mirror and other identical machines. That's exactly how they test for self-awareness in animals.
But you're making an analogy between a biological, living organism which is the product of self-organizing principles of evolution, to a manufactured artifact. The same conclusions can't be drawn between the two, no more than we can use the order of the universe to prove that there is a "Grand Watch Maker" who designed it. You might as well say that the people in movies are living, conscious beings merely because they look like it onscreen. Or the characters in books are independent, conscious beings because they feel so real. There has to be a difference between simulation and reality, otherwise these words have no meaning and you could just as easily conclude that our own self-awareness is merely a simulation. Philosophical zombies.
Posted: Tue Apr 09, 2013 9:42 pm
by I'm Murrin
Our consciousness is a simulation, an emergent one created by the complex interactions of our many cerebral functions. Yes, the difference is design - in that we can actually make a machine do directly the things that our brains do through a convoluted mishmash of processes that got pushed in that direction by evolution.
A lesser animal is simply a less complex machine, closer to the robots we're currently starting to produce, creatures that run mainly on input and output as determined by their neural programming.
(And of course these words have no meaning. Why would they have any? The entire universe has no meaning.)
Posted: Wed Apr 10, 2013 4:50 am
by Avatar
SerScot wrote:I would very much like to believe that I'm more than a facilitator for those interesting little self-replicating molecules in my body. And that my ability to ponder these questions indicates more than an odd mutation that doesn't add much to the universe at large.
That doesn't make it so though.
--A
Posted: Wed Apr 10, 2013 9:53 am
by Fist and Faith
Avatar wrote:SerScot wrote:I would very much like to believe that I'm more than a facilitator for those interesting little self-replicating molecules in my body. And that my ability to ponder these questions indicates more than an odd mutation that doesn't add much to the universe at large.
That doesn't make it so though.

True enough. Also, "doesn't add much to the universe" is a judgement. IMO, it is the most important thing in the universe. The part that gives the universe meaning. The fact that the rest of the universe doesn't care doesn't concern me in the least.
Posted: Wed Apr 10, 2013 11:34 am
by SerScot
F&F,
It's not about whether the Universe cares if I'm conscious or not. In my opinion, it's about whether or not our being Conscious has an impact on the Universe.
Posted: Wed Apr 10, 2013 3:37 pm
by Vraith
SerScot wrote:F&F,
It's not about whether the Universe cares if I'm conscious or not. In my opinion, it's about whether or not our being Conscious has an impact on the Universe.
Yes.
[and from other posts, I think FF will agree as well]
One of the things "meaning" IS is our consciousnesses impacting the universe.
The Beautiful existential branch says yes, we do.
The Ugly branch says no, we don't.
The Dead branch says it is irrelevant.
Posted: Wed Apr 10, 2013 4:05 pm
by SerScot
Vraith,
So, you agree that finding "meaning" means consciousness is more than mere awareness but something ineffable?
Posted: Wed Apr 10, 2013 5:40 pm
by Vraith
SerScot wrote:Vraith,
So, you agree that finding "meaning" means consciousness is more than mere awareness but something ineffable?
Heh...well...yes and no to be what may be too quibbley.
meaning is generated/created more than "found."
consciousness is a holistic thing/term I think, for a sort of meta-[and multi-]awareness.
It encompasses the volitional nesting/bracketing POV that Z was talking about, and also lateral moves...the separation and/or integration of different kinds/brackets of awareness.
And to be clear, though it's tangential, I don't think there is anything passive in it. Awareness and consciousness are never "being," they are always doing/happening...even if what is being done is "being in the moment." Stillness is active.
It certainly is ineffable in the strict sense...I don't think words will ever completely contain/describe it. But they can say something about it. Math and neurobiology/physics/chemistry can/will say many things about it, too.
But there will always be mystery, spaces both full and empty that are inexpressible.
Posted: Wed Apr 10, 2013 5:57 pm
by Zarathustra
I'm Murrin wrote:Our consciousness is a simulation, an emergent one created by the complex interactions of our many cerebral functions. Yes, the difference is design - in that we can actually make a machine do directly the things that our brains do through a convoluted mishmash of processes that got pushed in that direction by evolution.
Well, our perception of the world is a "simulation" of the world, but our consciousness itself isn't a simulation. We're not unaware zombies who merely appear aware to the outside world through our behavior. We actually have an inner life which generates that behavior. We're actually conscious. [However, I have to note that you're not alone in this opinion. Daniel C. Dennett seems to argue that we're not really conscious and that self-awareness is a simulation, too.]
I think what you're arguing is functionalism, which has many problems, but also many supporters (many of whom are in the AI fields).
A lesser animal is simply a less complex machine, closer to the robots we're currently starting to produce, creatures that run mainly on input and output as determined by their neural programming.
I don't understand your confidence in this analogy you keep making. How can you be sure that animals are like machines? How can you not see that this is primarily an analogy between two disparate entities? It's like calling the sounds of the wind and the sea a "song." It's poetry. Just because animals and machines both are made of matter and have internal structures doesn't mean they are the same. Is the sun a machine? Is the universe a machine? Is the moon an animal? You can't just use these words interchangeably for any matter that has organization and functions. Seriously, I'm not trying to be a smartass here ... how is an animal a machine? You might as well compare them to puppets. A robot is like a puppet without strings, one that obeys our commands through "strings" of code. So instead of calling them "closer to a robot," why not call them "closer to puppets"? It makes about as much sense to me.
The universe has no purpose or inherent, absolute value. That's not the same as having no meaning, or nonsensical, or being inexplicable.
Posted: Wed Apr 10, 2013 6:10 pm
by I'm Murrin
They're deterministic systems that respond to stimuli. It's remarkable that these chemical processes have developed into such complicated forms, that these emergent systems have appeared at a high level of many interactions, but still, just the results of chemistry. To say otherwise you may as well be suggesting that living things have souls, an essence distinct from their physical matter. We are our minds, and our minds are matter, nothing more.