
It's Life Jim.......

Posted: Tue Jan 20, 2015 12:49 pm
by peter
If we ever develop AI to the point where we can be as sure that we have created 'sentience' as we are when we ascribe it to our fellow human beings [lots of heavy 'philosophy' in that alone], will we consider it to be 'alive'?

Looking at the standard criteria by which we define life, we have 'homeostasis'; well, our AI may well be able to do this if appropriately programmed, and the same must apply to perhaps all of the other six criteria [organise, metabolise, grow, adapt, respond and reproduce]. So we can 'create' our sentient and intelligent entity to meet these criteria if we so choose. Will we have created life then - or something different? Leading on from this, if we ever get so far as to encounter 'alien intelligence', will it conform to our understanding of what it constitutes to 'be alive', and how will this colour our ethical perception of how we must treat with such sentience? In the light of our current knowledge, is it time to 'broaden our definition' of what it means to 'be alive'?

[Should this be in The Close?]

Posted: Tue Jan 20, 2015 2:14 pm
by Zarathustra
We'll create artificial life long before artificial intelligence (just as nature did, with a gap of billions of years between the two). So this question might be putting the cart before the horse. Also, whether or not something that's intelligent is also alive is a tricky question. I think a simulation of intelligence is still intelligence, and yet a simulation isn't alive.

When you add sentience into the mix, we can no longer be talking about a simulation. If a simulation were aware of itself, it wouldn't be simulating anything (else), it would be real (to itself). An appearance of reality--which is aware of the reality of its appearance--is reality.

But I think we'll find that consciousness can't come about except through combinations of matter that we'd consider alive. It's a physical process that causes a feedback loop in reality itself, such that parts of reality (matter) become aware of themselves. That's not an algorithm (which is purely ideal/formal, not material). We're not going to build consciousness in a computer. Math isn't aware of itself, no matter how complex your equations are.

We first need to understand how consciousness arises in living matter before we can attempt this question.

Posted: Tue Jan 20, 2015 11:01 pm
by Fist and Faith
Excellent post, Z. All of it, though I particularly like the third paragraph. And, as far as our making AI goes, yes to your last sentence. I wouldn't (and you didn't) say consciousness is not possible without living matter. But it's the only kind we know of and can study.

I can't imagine how many people have said it, but The Accidental Mind discusses how the brain is just one bit slapped onto another. Leftovers from previous stages of evolution are still there, even where a different design would clearly serve us better. And yet, it's this brain that holds our consciousness and intelligence. It doesn't sound like it would be easy to understand how consciousness arose from such a mess, which would make creating AI a bit difficult.

Posted: Wed Jan 21, 2015 4:55 am
by Avatar
But there are (perhaps significant) differences between consciousness / sentience and reasoning.

Consciousness alone (self-awareness) may simply be a function of the number of neural connections.

Everybody has seen this, right? Worm Brain Uploaded Into Robot. They mapped the 302 neurons and 7,000 synapses that made up the brain of the roundworm, and simulated it. It behaved the same way the worm did.
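
To give a feel for what "mapped the neurons and simulated it" means mechanically, here's a minimal sketch in Python. To be clear, this is nothing like the project's actual code: the wiring below is random and the constants are invented, whereas the real simulation used the worm's actual mapped connectome.

[code]
# Toy sketch only (invented constants, random wiring). Neurons are nodes,
# synapses are weighted edges, and activity is stepped forward in time.
import random

N_NEURONS = 302                      # C. elegans neuron count
N_SYNAPSES = 7000                    # approximate synapse count

# Random weighted wiring standing in for the real mapped connectome.
synapses = [(random.randrange(N_NEURONS),   # presynaptic neuron
             random.randrange(N_NEURONS),   # postsynaptic neuron
             random.uniform(-1.0, 1.0))     # synaptic weight
            for _ in range(N_SYNAPSES)]

potentials = [0.0] * N_NEURONS
THRESHOLD, DECAY = 1.0, 0.9

def step(sensory_input):
    """Advance one time step; return the indices of neurons that fired."""
    fired = {i for i, v in enumerate(potentials) if v >= THRESHOLD}
    for i in fired:
        potentials[i] = 0.0                 # reset fired neurons
    for pre, post, weight in synapses:
        if pre in fired:
            potentials[post] += weight      # propagate the spike
    for i, stimulus in sensory_input.items():
        potentials[i] += stimulus           # external "sensory" drive
    for i in range(N_NEURONS):
        potentials[i] *= DECAY              # leaky decay toward rest
    return fired

# Drive "sensory" neuron 0 and watch activity spread for a few steps.
for t in range(5):
    print(t, len(step({0: 1.2})))
[/code]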

The human brain has 100 billion neurons and perhaps 1,000 trillion synapses.

Is the worm conscious, or sentient? Probably not.

Cats have 300 million neurons. Are they conscious and sentient? I'd say so. Can't reason though.

As far as I can see, the human brain has more neurons and synapses than any other animal on the planet.

Why should those connections have to be biological? Why not electrical or silicon or digital? If a simulation can match those connections, why shouldn't it function like the much simpler simulation of the roundworm's brain?

--A

Posted: Wed Jan 21, 2015 8:21 am
by Fist and Faith
I wouldn't (and Z didn't) say consciousness is not possible without living matter. But it's the only kind we know of and can study. It's not likely that simply making a huge number of connections, regardless of what they're made of, will result in consciousness, sentience, or reasoning. I would guess those connections have to have certain characteristics, or functions. Maybe a certain minimum is needed overall, and a minimum of X% must be geared toward Function 1, Y% toward F2, etc. Maybe other arrangements are possible, but surely not any arrangement.

Posted: Wed Jan 21, 2015 11:24 am
by peter
Interesting. Re the worms being conscious or otherwise - there is evidence that some higher brain functions go way further 'down' the scale of life than previously thought; flies are apparently pretty good at learning [though at what, I don't currently recall].

Not sure about the simulation/sentience mutual exclusivity thing, Z. As I mentioned in a recent post elsewhere, 'coal-face' theoretical physicists propose that our entire Universe could be no more than a simulation, and [most of the time] I feel moderately sentient.

Again I have reservations about the brain being 'slapped together' [;)] out of leftovers. Most of the lower levels are still critical for normal brain function - in some cases more so than the outer layers [take the limbic system away and you've got BIG trouble I understand].

I can't help feeling that we will develop AI before we create 'carbon based' life in a test-tube. The infusion of the 'spark of life' [and not in a religious sense] seems at present a greater mystery to science than the workings of the brain [perhaps due to the relative amounts of effort put into the respective endeavours]; while continuous reports of advances are being made in respect of the latter, in the case of the former science seems to have hit a stone wall. I think AI will be developed and introduced to us so slowly that we will barely notice its coming. An article I read recently described how 'smart' technology is becoming ever more integrated into our lives [heating systems that detect how close you are to home and turn on in advance of your getting there, systems that monitor your health, alert you and pre-arrange your medical appointment etc] to the point where, in a short while, these systems will become indistinguishable from AI. By the time we eventually meet a genuine intelligence from 'aways out there' we may already be so au fait with dealing with alternate forms of intelligence that it will be second nature to us.

Posted: Wed Jan 21, 2015 2:25 pm
by Fist and Faith
peter wrote:Again I have reservations about the brain being 'slapped together' [;)] out of leftovers. Most of the lower levels are still critical for normal brain function - in some cases more so than the outer layers [take the limbic system away and you've got BIG trouble I understand].

The point David Linden is making in The Accidental Mind is something like this... If you take a computer apart, you won't find silicon chips on top of vacuum tubes; which, in turn, are on top of Babbage's Analytical Engine; which is on top of Pascal's Calculator; with an abacus at the bottom of the stack. You won't find any magnetic tape in there. As our understanding of things grew, our desire to accomplish more with our computers was met with entirely new designs.

The brain, otoh, is new things piled on top of old things. You'd never design one that way. It still does what is, imo, the most extraordinary thing that anything in the universe does. But it does not do so because it is elegant and efficient. "In many cases, the brain has adopted solutions to particular problems in the distant past that have persisted over time and have been recycled for other uses or have severely constrained the possibilities for further change."

Posted: Wed Jan 21, 2015 3:13 pm
by JIkj fjds j
There always seems to be too much importance placed on the brain.
I'm no science bod, but I was once very fascinated by a mechanical reaction I noticed when I used to smoke tobacco: I would sometimes reach for a smoke as much as several seconds before I was aware of doing so.

I thought that if I were able to understand this odd reaction I could then nip it in the bud, and quit the smoking habit for keeps.
(I did, eventually. And it was a complex collection of opposing habits that enabled me to see things from an abstract and lateral viewpoint.)
I woke up one morning and just decided that I couldn't light up a smoke. It made no more sense to me than hitting myself over the head with a rolled up newspaper.

I can imagine the invention of AI would be rendered pointless if all the focus is put on a brain-like machine. After all, the human body is more than a central processing organ. (And the sum of its parts.)
In a conversation I once had with someone (about quitting smoking), she said that there seems to be no reason why our other organs can't think, or even feel.

We might get kidney stones. They're bloody painful. The organ is therefore connected to the nervous system; it has pain receptors. Is it so unusual to suppose that the kidney has some form of thought process, with its own emotional group?

Perhaps the creation of AI has to be more than just a set of components. Maybe there also needs to be a kind of vortex in which the components can function. Like stirring milk and sugar in a cup of tea.

But I suspect that if we were to know what that vortex would be and where it might be found we would discover that there would need to be two off. Or maybe even three off. Or four ...

:2c:

Posted: Wed Jan 21, 2015 5:49 pm
by Zarathustra
Avatar wrote:Consciousness alone (self-awareness) may simply be a function of the number of neural connections.
Perhaps, but I think it probably goes down much deeper than the level of neurons, into structures that participate in quantum effects.
Worm Brain Uploaded Into Robot. They mapped the 302 neurons and 7,000 synapses that made up the brain of the roundworm, and simulated it. It behaved the same way the worm did.
But that's just a claim about behavior--behaviorism, in effect. As far as we know, they made a good simulation.

F&F, I *do* think that consciousness can only come about from living matter. That's not to say we won't create sentient beings, but I think they'll be alive, not mere machines.

Posted: Wed Jan 21, 2015 6:04 pm
by Wildling
Zarathustra wrote:
F&F, I *do* think that consciousness can only come about from living matter. That's not to say we won't create sentient beings, but I think they'll be alive, not mere machines.
What would be the real difference between a metal or plastic machine and the organic machines that we are? Does the material it's made of make that much difference in whether something can have consciousness?

Posted: Wed Jan 21, 2015 6:22 pm
by Hashi Lebwohl
Avatar wrote: Cats have 300 million neurons. Are they conscious and sentient? I'd say so. Can't reason though.
--A
I have to disagree with this assessment. My wife's cat (well, the one she had years ago) figured out that her human was in control of the red dot and thus stopped chasing it.

Posted: Wed Jan 21, 2015 7:14 pm
by Vraith
Hashi Lebwohl wrote:
Avatar wrote: Cats have 300 million neurons. Are they conscious and sentient? I'd say so. Can't reason though.
--A
I have to disagree with this assessment. My wife's cat (well, the one she had years ago) figured out that her human was in control of the red dot and thus stopped chasing it.
Yea...there's a lot of "reason" in the animal world. But there is definitely a significant difference in reach and scale, and probably a difference in kind.

On another matter: I don't think there is much physical difference between living and non-living matter---but real intelligence/sentience would transform a 'thing' into a 'being.' [that still leaves a whole bunch of grey territory between states, though...]

Someone upthread [Z, maybe?] was right about creating life before creating intelligence.
In some sense we're already damn close. We can and have created synthetic genes and inserted them into things...and the things go on living, and reproducing, and they do the things the synthetic genes tell them to do.

I've said before about AI---a lot of very smart people think we need to know what something IS before we can create it [like intelligence]. But that stuff points to the other option, a thing we've done before---we can and often do make things before we really know what they are.
We don't know what life is...[do we?]...but we are damn close to making it. [I think I recall that they'd actually replaced entire chromosomes in several kinds of critters. Maybe not making life like god yet---but at least half-life, so we're half-god].

Posted: Wed Jan 21, 2015 9:16 pm
by Zarathustra
Wildling wrote:
Zarathustra wrote:
F&F, I *do* think that consciousness can only come about from living matter. That's not to say we won't create sentient beings, but I think they'll be alive, not mere machines.
What would be the real difference between a metal or plastic machine and the organic machines that we are? Does the material it's made of make that much difference in whether something can have consciousness?
It's not the material, it's how that material is arranged. After all, a dead person is made of the same material as a live person, but only the latter is conscious.

I think that consciousness is so complex--taking part in physical processes that we don't understand--that the arrangements of matter that give rise to consciousness will already have the lower-level complexity (lower than consciousness, at least) of living organisms.

If it doesn't make sense to say that a computer becomes alive if its software is sophisticated enough, then why would it make sense to say a computer is sentient if its software is sophisticated enough? Life is less complex than sentience.

I think people have it backwards when they say that sentience itself confers life, rather than being dependent on it. For instance, Star Trek's Data is considered a lifeform/alive (to the characters) merely because he's sentient, despite the fact that he's not a living organism. It's as though it's easier for people to think of conscious machines than living machines, because the latter makes the contradiction clear: if it's alive, it's not a machine. Why are we only able to accept machines as "alive" if they're conscious?

I think the answer to that question lies in the fact that consciousness is so obviously a feature of living things. We just forget this when we confuse our latest model/metaphor for the mind--a computer--with the mind itself.

I don't believe consciousness is a relation of ideas with each other (e.g. numbers, programs, algorithms, etc.). I think consciousness is a relation of the universe to itself, i.e. matter/energy. I don't believe computers can achieve this structure through software upgrades--even if we're upgrading the hardware to handle the software--because all computers have the same basic architecture and function. Logically, they all function as universal Turing machines, no matter how much faster/smaller we make them. [Quantum computers are a different matter ... literally! :) ].

Since all computers logically function the same, the only real differences we're talking about here are software differences. So by saying we can build a computer that can become conscious, we're merely talking about writing a program that is conscious. In other words, we're saying that the matter doesn't matter, and that consciousness is merely a relation of ideas to themselves. Or, it's like saying that consciousness can be separated from its embodiment, much the same way we can run the same program on millions of different computers ... or even on an abacus. Could a sufficiently complex abacus be sentient?
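
To make the "same basic architecture" point concrete, here's a toy Turing machine sketched in Python (the machine and its little bit-flipping program are my own invented illustration). Vacuum tubes, silicon, an abacus, or a patient reader with a book of instructions could all execute exactly this:

[code]
# Toy Turing machine (invented example, for illustration). The point:
# every ordinary computer, whatever it's built from, reduces to this
# logical scheme -- a state, a tape of symbols, and a rule table.
def run_turing_machine(program, tape, state="start", max_steps=1000):
    """program maps (state, symbol) -> (new_symbol, move, new_state)."""
    cells = dict(enumerate(tape))           # sparse tape, '_' = blank
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        new_symbol, move, state = program[(state, symbol)]
        cells[head] = new_symbol
        head += {"L": -1, "R": 1}[move]
    return "".join(cells[i] for i in sorted(cells))

# A trivial program: walk right, inverting bits, halting at the first blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip_bits, "10110"))  # -> "01001_"
[/code]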

Or think of it this way: could a book containing all the lines of code of a program be conscious? What if we included detailed instructions for a reader of the book to do the same input/output operations on that book that a computer would do if it were running the program? Would that book/program be conscious then?

I don't believe it's enough to have the right software + input/output. I think consciousness arises through special configurations of matter that are complex enough to become conscious because they are already complex enough to be alive. And, as a side benefit, they're also complex enough to mimic things that computers do. This fools us into thinking that a computer could return the favor and mimic what we do, too. But I think that's a logical fallacy. Remember (those of you who have discussed Penrose with me), humans can understand Gödel's Theorem, but no computer could. This has been mathematically proven. A computer program is a formal system. Gödel's Theorem proves that no consistent formal system powerful enough to express arithmetic can also be complete.

Posted: Wed Jan 21, 2015 10:59 pm
by Fist and Faith
I'm not too clear on your position on some things, Z...

-What is the definition of "life"? By "if it's alive, it's not a machine", I'm not sure if you're saying it would no longer be considered a machine, but would be called a life form; or that it's not possible for something made out of metal, wires, plastic, etc, - machinery - to be alive.

-Is it possible to be a lifeform, but not a living organism?

-"Why are we only able to accept machines as "alive" if they're conscious?" Possibly because we don't know how to tell if something without a biology that we recognize is alive if it doesn't show signs of consciousness. If energy discharges in a nebula are alive, how would we know it?

-What is the mechanism that allows the relation of the universe to itself? What I mean is, electrochemical reactions, the presence of this and that chemical, the reabsorption of those chemicals, the substances that make up the axon, etc etc, don't particularly seem to suggest consciousness. So what is it about the arrangement of those things that gives consciousness? And why is it not possible to arrange other things, like electricity, silicon chips, magnets with information stored on them, etc, in a way that gives consciousness?

Posted: Thu Jan 22, 2015 4:25 am
by Zarathustra
FF, I can't answer the questions about consciousness because no one knows what makes us conscious. I've talked about Roger Penrose's book SHADOWS OF THE MIND, in which he speculates that it has more to do with microtubules within nerve cells than with the neurons themselves, much less the arrangement of neurons. He speculates that it must be structures of matter small enough to engage with quantum effects, because consciousness needs something as bizarre as quantum mechanics to account for the fact that it transcends what digital computers (or any formal system) can do.

Our neurons might in fact turn out to run like parallel processors in computers. But the processing itself--or even its input/output--isn't the same as the consciousness that can take that input/output as its object, and form judgments, opinions, emotions, values of it. There is the computational result of 1 + 1 = 2, which any computer can also do. But then there is the understanding of this equation, the knowing that the answer is right because one understands not only the syntax of the symbol manipulation (which is all a computer ever does), but also the semantics of the "language" of math itself, what it means. Computers can't understand, don't have that "aha moment" of realizing that they have the right answer. They just spit out results of algorithms. There is no eidetic certainty for computers, that phenomenological knowing which tells us that something like modus ponens works--not because some programmer gave us this rule and we're hardwired rule-followers, but because we can see its logical necessity by understanding what logical necessity means.
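
For the flavour of syntax without semantics, here's the barest sketch I can think of, in Python (my own toy tally encoding, not anything from Penrose): addition as blind symbol rewriting, which yields 1 + 1 = 2 without anything in the loop "understanding" a thing.

[code]
# Blind symbol rewriting (toy tally encoding): '' is 0, 'S' is 1,
# 'SS' is 2, and so on. The function follows two rewrite rules;
# nothing here "knows" what addition means.
def add(a, b):
    if b == "":                  # rule 1: a + 0 = a
        return a
    return add(a + "S", b[:-1])  # rule 2: a + S(b) = S(a) + b

print(add("S", "S"))  # -> 'SS', i.e. 1 + 1 = 2, by rule-following alone
[/code]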

I'm not sure about the definition of life, either. I know it's murky even in biology. I'll leave that for others.
And why is it not possible to arrange other things, like electricity, silicon chips, magnets with information stored on them, etc, in a way that gives consciousness?
I think it's possible, but that arrangement won't be a Turing machine (i.e. computer). Our brain is not a computer. There's absolutely no evidence that consciousness is anything like an algorithm running on biological hardware. This is just a metaphor that people have dreamed up and then taken literally, along with some bad philosophy (e.g. behaviorism, functionalism, etc.).

Whatever arrangement we make out of silicon, magnets, etc. that ends up forming consciousness, it will be something more like our brain than a computer. I personally believe it will be complex enough to be alive, and only a "machine" in the sense that it has been built. But whatever definition of "life" or "organism" we settle upon, it will fall into that category ... a category which no possible computer--no matter how complex its algorithms--could ever possibly fall into. It has to do with the logical architecture of computers, which is actually pretty simple (as laid out by Alan Turing many years ago). That's why you can build computers out of vacuum tubes, transistors, or hell even scraps of paper.

Posted: Thu Jan 22, 2015 5:32 am
by Avatar
Vraith wrote:
Hashi Lebwohl wrote:
Avatar wrote: Cats have 300 million neurons. Are they conscious and sentient? I'd say so. Can't reason though.
--A
I have to disagree with this assessment. My wife's cat (well, the one she had years ago) figured out that her human was in control of the red dot and thus stopped chasing it.
Yea...there's a lot of "reason" in the animal world. But there is definitely a significant difference in reach and scale, and probably a difference in kind.
Haha, yes, sorry, I meant more abstract reasoning. I have cats. :D They know if they come and scratch at my mattress at 2am I will get up and give them food. :lol:

--A

Posted: Thu Jan 22, 2015 12:36 pm
by peter
I have no problem with Keith Moon throwing a TV out of a hotel window [well, it's a waste and unnecessarily destructive - but hell, nothing is 'killed' and he can pay for the damages], but ethically I have problems with unnecessary killing even if the organic organism killed is not 'sentient'. This is ridiculous; I'm in the position of being prepared to happily dismantle a sentient 'machine' because it 'is not alive'. I'm going to have to get over this big time in order to be able to treat ethically with self-aware entities that are not alive. Can you 'kill' something that is not alive - can you even 'hurt' it? There will be alternate grades of existence that we will have to consider very deeply in order to make ethical sense of how we treat them - and we're already bloody bad at doing that with our own kind.

[Fist - take the point about 'brain design'.]

[Vizidor - that was a really interesting post; re the cigarette thing, work in this very field showing brain activity occurring prior to 'apparent' decisions being made by the 'conscious brain' is casting doubt on the very notion of 'free will' in humans as we speak. I love the idea of sentience being more widespread than we give it credit for; don't know if it's significant here, but I learned the other day that a decapitated fly will live for two or three days after the 'operation'. Weird but [apparently] true :lol: ].

Posted: Thu Jan 22, 2015 1:47 pm
by Zarathustra
peter wrote:...work in this very field showing brain activity occurring prior to 'apparent' decisions being made by the 'conscious brain' is casting doubt on the very notion of 'free will' in humans as we speak.
I've been reading about such claims for nearly 20 years. Daniel C. Dennett talked about such research in his CONSCIOUSNESS EXPLAINED. I don't buy it.

These experiments are usually of the type where the subject is asked to, say, push a button when they see a light come on. There is a measured spike in brain activity in the regions that control the finger a fraction of a second prior to the brain activity we associate with conscious decisions. From this simple sequence, it's concluded we don't have freewill, since the action was "decided" by our body before we consciously decided to make our finger move.

There are several things wrong with concluding there is no freewill from that type of experiment. First of all, the conscious decision to push the button has already been made prior to seeing the light (and thus prior to the measurements named above). You already know you're going to do it. You've already decided (freely) to comply. So the decision, the engagement of freewill, happens prior to the act which they're measuring and calling "conscious decision," ignoring the one that's already been made.

And this "already made" decision means you've decided to let your body engage in its quick-response mode, one that is usually needed for things like running away from predators or dodging a fist to the head. These quick reactions naturally operate at a subconscious level. They are similar to instincts and habits. No one ever associated them with freewill in the first place--though we can consciously try to change them, such as harnessing their power to learn to play a musical instrument or pushing buttons when lights come on. Many conscious actions can be turned into "muscle memory" actions. That's why practice makes you better. You "off-load" more and more of your conscious decisions to automatic responses, and you play faster/smoother/"in the groove."

Secondly, a measurement of activity in the "conscious decision" area of the brain does not necessarily mean that's the seat of one's freewill. Perhaps that's registering the consciousness-of-one's-conscious-decision. If consciousness itself arises from a lower level of the brain than neurons (as some are now speculating, e.g. Penrose), the neurons themselves might still present an "appearance" to the brain of this consciousness, and that could be what these experiments are measuring, because they're designed to measure neural activity (and not these lower levels). After all, one's own consciousness can be an object of thought/awareness. Our consciousness is constantly looping back on itself like a dog chasing its tail. Perhaps these experiments are measuring the tail part of that loop, rather than the head. An appearance of one's own consciousness to oneself might be what most people are confusing for their "conscious decision," when it's actually farther back in the loop before they started becoming aware of their own awareness in a reflective way. But that doesn't make the first awareness any less free or aware.

Posted: Thu Jan 22, 2015 2:25 pm
by peter
Good sound post Z.

[In case anyone sees this who saw my mentioning somewhere else [I forget where and can't seem to find it] that when I try to post from the 'fast response box' at the bottom of the page my posts 'get lost' and I go straight to the normal post reply page [where the box is empty and my post has vanished] - this happened four times when I tried to make the above comment, so it's either my machine or K's W. that has a problem, not for once me [;)].]

Posted: Sun Jan 25, 2015 3:36 pm
by Zarathustra
One also has to wonder why nature supplied us with an "illusion" of freewill (if that's what it is, which I don't believe), if we don't have it. What's the point of wiring the brain so that it feels like it is making a conscious decision when that's entirely unnecessary (since the "decision" has already been "made")? It would be superfluous, and a waste of energy and brain resources. It's also damn coincidental. Too coincidental. An illusion that happens a fraction of a second after a "decision", to make us feel like we're in charge? It's like a cosmic prank. Ridiculous.