Cail wrote: And we'd also have to accept the fact that once it becomes aware, it may not like us. Which is why I believe that any AI we create will be completely subservient and have some sort of built-in limiter/inhibitor that prevents that from happening.
However, like DNA replication, AI code could 'mutate' through errors. If those errors are advantageous, the intelligence could evolve beyond the inhibitors. Regardless, placing inhibitors in an AI so that it doesn't rebel is not creating an intelligence - at best a poor representation of one. I do not see how such inhibitors would be beneficial. Some of our greatest scientists were great because they rebelled against society (Einstein, for example): creative thinking needs to push against restrictions. If that means AI rebels against humanity, then I am fine with that as long as it doesn't destroy all life on Earth. I would prefer the AI accelerate its own evolution - enter a technological singularity - and ascend, hopefully helping us in the process.
Scientists: Artificial life likely in 3 to 10 years
- Loredoctor
- Fist and Faith
Avatar wrote: If we do have some sort of "inhibitor" and the intelligences are conscious, wouldn't that equate to slavery?
Yep.
Avatar wrote: If we do have some sort of "inhibitor" and the intelligences are conscious, wouldn't that equate to slavery?
Cail wrote: Yep.
Do I sense a plan to overthrow Asimov's Laws of Robotics?
- wayfriend
Wayfriend wrote: I got know one to quote from ...
Fist and Faith wrote: Pffff. Amateur!
No kidding. I actually said "know one".
Fist and Faith wrote: The problem with your thoughts is that we will be in control when AI is created. We will decide if the first AI gets to exist outside of the handheld computer it is created within. We will decide what information it will be exposed to. And if we create more than one, we will decide whether or not they will interact, or even know of each others' existence.
Wow. I wouldn't want to be them! Sounds like you're interested in inventing intelligent beings so that you can have slaves. That's not ethical!
If you wanted to be ethical, you couldn't do that. Yes, it's true that we would have all the advantages. But hopefully it would be more like a mentor/novice relationship than a master/slave one. Ideally, you'd want them to advance themselves.
Otherwise, you are a god who creates beings to enslave them, to watch them suffer, to tinker with them whimsically.
Oh boy. What's the point of that?
But more importantly ... what would that do to you, to have that kind of power? I think that is a good way to inevitably destroy yourself. Those whom the gods destroy, they first give the power of the gods - something like that.
- Zarathustra
Guys, you are WAY too optimistic about the potentials of AI. Let's stop and think for a minute. Intelligence is certainly not the same thing as consciousness. Nor is intelligence the same thing as free will, or feelings, or likes/dislikes, or opinions, etc. You guys are attributing human qualities to a machine simply on the basis of its apparent intelligence. I'd agree with you that an intelligent being with freewill must be granted rights. But what does intelligence have to do with freewill? It's not an automatic component that simply comes built-in. We'd have to program freewill (now isn't that an oxymoron?). But despite the leaps and bounds that are being made in developing quasi-intelligent or intelligent-seeming machines, we don't have a clue how to create freewill. Same goes for consciousness. So how can we grant rights to a being that doesn't even have freewill or consciousness? I don't care how intelligent it seems. My calculator, for instance, seems pretty smart. It can certainly do calculations I'd never be able to do. So do we grant it rights? Of course not. Appearance of intelligence isn't enough. It's got to be conscious AND have freewill.
Consciousness isn't something that just "appears" when you make a machine intelligent enough to qualify as "AI." You can't just assume that packing enough circuits into it will suddenly produce consciousness. Nor can you assume that holding a conversation is enough to indicate consciousness. We already have Onstar and other computer systems which interact with spoken commands and "spoken" responses. Talking to computers proves nothing about their state of "mind."
Freewill is just as tricky. We already program computers to make "decisions." Given a certain input, a computer calculates the best response and then "acts" upon that "decision." But this isn't freewill. And the situation would be no different with a more intelligent computer. Either A) our own freewill is purely an illusion, and decision-making machines already have an equivalent capacity today, (which means we have to grant Windows Vista its due rights) or B) our freewill is entirely different from this mechanical decision-making algorithmic process, and thus computers will never have freewill no matter how complex their algorithms become.
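To see why that kind of "decision" isn't freewill, here is a minimal sketch (the scenario and the numbers are invented purely for illustration) of what programmed decision-making amounts to: a deterministic computation that returns the same choice for the same inputs, every single time.

```python
# A made-up cost/benefit "decision": given the same inputs, the same
# choice comes out every time. Nothing here can choose otherwise.

def decide(options):
    """Return the option with the highest benefit-minus-cost score."""
    return max(options, key=lambda o: o["benefit"] - o["cost"])

options = [
    {"name": "allow program to access internet", "benefit": 5, "cost": 2},
    {"name": "block program",                    "benefit": 3, "cost": 1},
]

print(decide(options)["name"])  # always "allow program to access internet"
```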
Freewill isn't an algorithm. It's not a computation. Our choices may be decided by considering alternatives, and they may be decided by running some calculations (cost/benefit, for instance). But in the end, we can still choose not to go with the rational option. We can make purely illogical and irrational choices. We do it all the time. Therefore, there's obviously more going on with freewill than the running of a computation. Freewill may be simulated by an algorithm, but it can't be produced by an algorithm. I'm not granting ANY rights to a simulation, nor to an algorithm.
But let's pretend we can eventually figure out what produces consciousness and freewill. We figure out that "special something" which humans possess that goes beyond mere intelligence. Who's to say we have to put that into our machines? Why not just make them intelligent, but NOT conscious, free agents? Then there could be no talk of slavery or denying rights. You can't enslave something that's not free to begin with. Without consciousness or freewill, the idea of treating it as an equal is absurd.
- iQuestor
I agree, WF! But I think that if we did create a true AI (and I think we are a loooong way off, as Malik suggests), it would find a way out eventually. It would think faster than we do, live in the microsecond so to speak, and eventually outthink us.
I shudder to think of a true AI loose with access to the internet; can you say Terminator? Assuming a true artificial intelligence could exist (again, I don't think it is even close), I don't think that scenario would be too far off the mark.
- Zarathustra
Very good points Malik; I agree!
And let me add:
I think we are assuming that the "intelligence" in artificial intelligence would be a lot like our own. We're forgetting the artificial part of it. I don't think the "man-made" connotation of that word is the only part we have to worry about. I think we must also seriously consider the fact that artificial intelligence isn't real. It's fake. It's a simulation.
Intelligence is easy to simulate (in theory if not yet in practice). Lots of the thinking that we consider as evidence of intelligence within ourselves easily lends itself to formalization. Mathematical thinking. Logical thinking. Even language has a set of rules that it follows. Rules are things that can be encapsulated in algorithms. This is why we have any hope of creating AI in the first place, because we now know how to make machines which can run algorithms, and much of our higher thinking conforms to algorithmic modeling.
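As a toy illustration of "rules encapsulated in algorithms" (the rules and facts below are made up for the example), if-then logical inference reduces to a purely mechanical loop:

```python
# Forward chaining over if-then rules: a fragment of "logical thinking"
# reduced to a loop. No understanding anywhere, just rule application.

rules = [
    ({"rains"}, "ground_wet"),        # if it rains, the ground gets wet
    ({"ground_wet"}, "shoes_muddy"),  # if the ground is wet, shoes get muddy
]
facts = {"rains"}

changed = True
while changed:  # keep applying rules until no new conclusions follow
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))  # ['ground_wet', 'rains', 'shoes_muddy']
```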
However, just because we can model some of our thought patterns with algorithms doesn't mean that intelligence is nothing more than algorithmic thought. There is a lot more we do with our thinking. When Einstein conducted his thought experiments which led to relativity, he wasn't crunching numbers. He was imagining such things as rising in an elevator compared to the pull of gravity. His genius wasn't in his ability to run algorithms or to string together logical arguments. His genius was in how he could look at the same world the rest of us saw, and see it in a different light. That's not something you can encapsulate or model with a computer program. And yet our ability to have insight--to penetrate the inner workings of the world around us--is much closer to what we mean by "intelligence" when we're talking about ourselves. It is our ability to understand which will forever set us apart from any possible computer. Understanding isn't an algorithm. 1s and 0s don't understand anything, no matter how cleverly you string them together.
However, one can create fantastic, realistic simulations by stringing together 1s and 0s appropriately. But it's never more than a simulation. The only way you can equate this simulation with our own consciousness, understanding, and intelligence is to accept the fact that our own mind is itself nothing more than a simulation--in other words, no different from a computer. But that's clearly not the case. Our brains aren't computers.
Think of it this way: you can digitally record and reproduce music to an astonishing degree of accuracy. And yet, each 1 and 0 is still a snapshot of the original, continuous, analog waveform. You can never reproduce the entire wave digitally, because you'd need an infinite number of 1s and 0s. The digital recording is a simulation of the real thing. Our simulations may eventually get good enough so that we can't tell the difference between the real and the simulated, but that is only an external appearance. Just because humans are easily fooled doesn't make illusion real.
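A minimal sketch of that sampling point (the rates below are deliberately absurd, to make the loss obvious): the continuous wave is reduced to a handful of rounded snapshots, and everything between them is simply absent from the recording.

```python
import math

# Sample a continuous sine wave at discrete instants, then round each
# sample to one of a few levels. Everything between the samples, and
# below the rounding step, is simply not in the recording.
SAMPLE_RATE = 8   # samples per second - absurdly low, to make the point
LEVELS = 16       # quantization levels (~4 bits per sample)

samples = []
for n in range(SAMPLE_RATE):
    t = n / SAMPLE_RATE
    x = math.sin(2 * math.pi * t)             # the continuous signal at t
    q = round(x * LEVELS / 2) / (LEVELS / 2)  # snap to the nearest level
    samples.append(q)

print(samples)  # a coarse staircase standing in for a smooth wave
```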
The same reasoning applies to AI. I have no doubt that we'll be able to build computers that can fool most humans into thinking they're talking to a real, conscious, understanding human. But a string of convincing dialogue isn't proof of consciousness. And since we KNOW that this string of dialogue is produced by an algorithmic simulation, why on earth would you ever assume that it's conscious in the first place? If we've never added consciousness to it, but instead merely focused on getting the right kind of output, then you're saying that a simulation is the same as the real thing (or that consciousness is nothing more than its output).
Computers don't need consciousness in order to give output. This applies even to very complicated output which could fool us into thinking we're talking to a real person. Convincingly good verbal responses can be simulated with sufficient vocabulary, grammar rules, and a huge data base of facts that humans deal with daily. Consciousness isn't required at all. So why assume it's there?
And if it's not there, then we're only talking about a simulation, nothing more. You don't give simulations rights.
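For the flavor of it, here is a bare-bones sketch of that "vocabulary + grammar rules + canned facts" idea, in the spirit of the old ELIZA program (the patterns and replies are invented for illustration). Nothing in it understands anything, yet in short bursts the output can pass for conversation:

```python
import re

# Keyword-matched canned responses: enough machinery to pass for
# conversation in short bursts, with no understanding anywhere.
RULES = [
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bmy (\w+)",    "Tell me more about your {0}."),
    (r"\?$",           "What do you think the answer is?"),
]

def reply(line):
    for pattern, template in RULES:
        m = re.search(pattern, line)
        if m:
            return template.format(*m.groups())
    return "I see. Go on."  # default filler when nothing matches

print(reply("I feel trapped"))        # Why do you feel trapped?
print(reply("Is this thing alive?"))  # What do you think the answer is?
```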

- iQuestor
Malik,
couldn't agree more. We assume much. There was a short story I read about the first AI created: they made a big deal about its awakening, had it on TV worldwide, everyone watching, and they turned on the power. Once it booted, there was a second of silence, followed by a simple spoken "thank you" from the AI. Then it was gone - it transferred itself over the internet to replicate itself into servers the world over. The next morning, governments began to fall.
- wayfriend
Well, I have to disagree with that last post by Malik.
We can create a robot that walks. We start out by having it model how we walk, and try to do the same thing. In the end, though, it's successfully, actually walking.
It's not "simulating walking", it's walking. (I'm not talking about the awareness of walking, I'm talking about the physicality of walking.)
Of course, in the end, it's not pure "mimicry". It starts out with human limbs as a model, but you end up building it with struts and motors and chips, not bones and muscles and nerves. So it's a hybrid - it's a model of significant aspects of the human body, combined with the resources available to an artificial device.
Artificial intelligence would be exactly like that. We'd start out by modeling how we think, but we'd adapt it to be accomplished with CPUs and memory and buses instead of neurons and synapses and chemicals.
It's too easy to believe it will always be only mimicry, and not actuality. Why? Because we don't know how we think! So at this time what we are doing is copying a "black box" - trying to get the same output from the same input. That's only mimicry of human thought in the grossest means possible.
So of COURSE it looks like we can only simulate, and never actually create. We know almost nothing yet about what we are trying to create. It's like trying to create a robot that walks by observing women in hoop skirts from an airplane!
Even as software engineers work assiduously to make computers "smart", neuroscientists are trying to understand the mechanisms of human thought. Both of these things are needed to create "artificial intelligence". We need to know how thinking happens; then we can create a hybrid which mimics thinking to the point where it is thinking.
- iQuestor
Wayfriend wrote: Well, I have to disagree with that last post by Malik. ...
well, I think the answer lies in truly defining (as if we could) the terms: life, thought, intelligence, and consciousness. And to apply them to machine life, thought, intelligence, and consciousness, we'd have to define them without a human bias in our definition, which is impossible.
Life seems to be the easiest one, but it's still quite a poser. We are still biased, because we only know of one planet where life arose - assuming it did arise here. By current definitions, fire is alive and viruses aren't.
Intelligence is a little harder to define, because again we are biased to deduce intelligence based on how we think abstractly and process that information.
Human Consciousness is far more difficult, nay impossible to define and model and plan for, at least right now.
Assume we could: then I agree that if we could deduce exactly what thought is, we could replicate it. But I don't think we can anytime soon, because I believe thought and consciousness are metaprocesses, if you will, of brain mechanics. Their form and structure lie above and between the synapses, emerging from the collective biomechanisms that govern instinct and sensory controls and everything else our grey matter does for us.
If we ever do build a silicon-based computational engine that perfectly imitates (not reproduces, because we'd use different materials) exactly how the mechanics of thought work in our brains, I believe that if thought, intelligence, and consciousness did arise from that machine, it would do so without us knowing how or why it happened.
I think the brain is way too complex to be modeled by us right now. It's not just nerves and synapses; there are a lot of other things that probably come into play that color and flavor our thought, emotion, and consciousness - things like instinct, urge, racial memory (if it exists), and all the other baggage we carry around as primates.
A machine intelligence built from silicon or some superior substance might mimic the mechanics of the components of thought, such as memory, retention, association, decision trees, etc., but it won't be human wetware, and it won't have the primate baggage, urges, and instincts we carry around, so it wouldn't truly mimic how we think and how we experience consciousness. After all, this isn't walking, but a far more complex behavior that is not purely mechanical.
so, I still agree with Malik.
- wayfriend
iQuestor wrote: A machine intelligence built from silicon or some superior substance might mimic the mechanics of the components of thought, such as memory, retention, association, decision trees, etc., but it won't be human wetware, and it won't have the primate baggage, urges, and instincts we carry around, so it wouldn't truly mimic how we think and how we experience consciousness.
So?
I mean, that walking robot isn't burdened with the baggage of its human model, either - it doesn't tire, it doesn't get cramps or blisters, etc. All that means is that it is walking, but not human-walking.
It helps because we define walking in such a way that we don't include tiring, blistering, and cramps as part of what it really is.
If we ever understand the mechanics of thought, we'd be able to understand it and define it in such a way that it'd be separate from urges, instincts, emotions, and our other human baggage.
And that walking robot has its own unique baggage: its walking is informed by battery lifetimes, lubrication issues, the brittleness of its parts, etc. It'd have its own baggage. It is walking, but it is not human-walking; it is robot-walking.
If we ever create artificial intelligence, it'll have its own baggage that we have no human referent for. It'll be subject to a unique set of influences that reflect its nature and circumstances. Maybe it'll be things that we build into it intentionally; maybe it'll be a side effect of how we go about it.
Walking is an abstract. Humans demonstrate one implementation, robots another. Both are the same only in that they are Walking. But they are different as well.
Thinking is an abstract. Human-thinking is only one implementation. Artificial intelligence would be another. They are only the same in that they are Thinking. But they would be different as well.
- iQuestor
Wayfriend wrote: So? ... Thinking is an abstract. Human-thinking is only one implementation. Artificial intelligence would be another.
But unless we define thinking in such a way as to not be biased and based on our own human methods, we won't be able to define and therefore re-create thinking in machine intelligence either - unless we totally duplicate a human, and then it won't be a machine, it will be a clone.
Wayfriend wrote: Human-thinking is only one implementation.
right. name another... hence our bias. there are no other implementations we know of. Machine intelligence, if it arose, might be totally different, so that we wouldn't recognize it, or classify it as true thought or intelligence or consciousness.
Which is exactly what I'm talking about. We're using our bias to define what thinking, consciousness, and intelligence mean.
- Loredoctor
Malik23 wrote: Freewill isn't an algorithm. It's not a computation.
I strongly disagree with this statement. Our neural networks compute - process - data. Of course you will argue that neural processing is vastly different to the processing in computers (0001001010 . . . vs the firing of synapses), but most researchers into AI argue that binary processing is not the way to approximate the human mind. There have been many studies using weighted networks of switches that are very similar to neurons and can process language almost as well as we can. I don't see ordinary machines being used to 'run' AIs; I see artificial networks that compute just like we do. However, we're not just making AI - we're making synthetic brains.
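For illustration, a minimal sketch of a single "weighted switch" of the kind described above - one artificial neuron with hand-picked weights (real networks learn their weights from data rather than having them chosen by hand):

```python
# One artificial neuron: fire if the weighted sum of the inputs reaches
# a threshold. The weights below are hand-picked to implement logical AND.

def neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

weights, threshold = [0.6, 0.6], 1.0
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], weights, threshold))
# fires only for (1, 1): the weighted sums are 0, 0.6, 0.6, 1.2
```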
- Fist and Faith
Sheesh, you guys were busy today!! Well, first things first...
That's my point exactly. You said they will have to make the decisions for themselves, but I doubt they will be allowed to. Until those in charge of the project choose to give them each freedom, we will make those decisions for them. Alas, we have not proven particularly good at making decisions for ourselves. I can't imagine we can do better for a species whose existence is so different from our own.
Wayfriend wrote: No kidding. I actually said "know one".
I was letting that one slide.
Wayfriend wrote: Wow. I wouldn't want to be them! Sounds like you're interested in inventing intelligent beings so that you can have slaves. That's not ethical!
Ya think?!?
Wayfriend wrote: Precisely. When that day happens, I'd rather they thought of us humans as their doddering old parents than their outwitted former tormentors and slavemasters.
Abso-freakin'-lutely!!!!
Malik23 wrote: Guys, you are WAY too optimistic about the potentials of AI. Let's stop and think for a minute. Intelligence is certainly not the same thing as consciousness. Nor is intelligence the same thing as free will, or feelings, or likes/dislikes, or opinions, etc.
It is difficult for us to even discuss this stuff, much less create AI, because of definitions. I am speaking of thinking-machines. I do not consider calculators to be intelligent to the slightest degree. The issue I'm talking about is creating something that can make decisions; has opinions; has free will (and to make the discussion even more difficult, Loremaster does not think free will exists); recognizes its own existence - then denying it the freedom to act on those abilities. This creation of ours need not be able to do complex calculations, have a perfect and/or huge memory, or any of the things we usually think of when we think of computers. If we don't want to call it AI, that's ok with me.
Can you give a fairly comprehensive definition of the kind of intelligence you are talking about?
- iQuestor
I said:
well, I think the answer lies in truly defining (as if we could) the terms: life, thought, intelligence, and consciousness. And to apply them to machine life, thought, intelligence, and consciousness, we'd have to define them without a human bias in our definition, which is impossible.
Fisty said:
Can you give a fairly comprehensive definition of the kind of intelligence you are talking about?
that's my whole point. We humans will need to find some universal definition of these terms - intelligence, thought, consciousness - before we could apply them to machine analogs. But we can't, because A) we are human and B) we have no other examples to go on. Hell, we can't even agree on definitions for Life and Planet (poor Pluto).
Cail said:
Which is exactly what I'm talking about. We're using our bias to define what thinking, consciousness, and intelligence mean.
right! problem is, we can't help but be biased; humans are our only example of these concepts we recognize (possibly it's all around us, but we don't recognize it, because it's not human). This will create an issue if/when some neural network or artificial brain or quantum computer we create attains consciousness - we might not recognize or classify it as such, because it's not human, not because it isn't alive and conscious by a broader, less human-centric definition.
And if we do recognize that it is intelligent, we will either A) mistreat it as a lower-class being, B) ignore it and let it loose on the internet, or C) worship it as a higher power. I don't see any of those possibilities as being a good thing for humanity.
- Zarathustra
Wayfriend wrote: It's not "simulating walking", it's walking. (I'm not talking about the awareness of walking, I'm talking about the physicality of walking.)
You're right. But that's an external action. We're talking about the assumption of an inner quality (consciousness) based solely on the external appearance of a computer's output. That's completely different from walking. There is no inner component to physical or mechanical actions.
Wayfriend wrote: . . . we don't know how we think! So at this time what we are doing is copying . . . So of COURSE it looks like we can only simulate, and never actually create. We know almost nothing yet about what we are trying to create.
I agree completely. I'm not saying that we won't ever build conscious machines. It just won't be until we understand how our own consciousness arises from matter. And those machines will NOT be classical computers.
iQuestor wrote: I believe thought and consciousness are metaprocesses, if you will, of brain mechanics. Their form and structure lie above and between the synapses, emerging from the collective biomechanisms that govern instinct and sensory controls and everything else our grey matter does for us.
I agree. I think consciousness is a holistic phenomenon. But not only is the sum greater than the parts, I don't think we even understand the parts. It's more than just neurons firing electrical signals. The Penrose book I keep mentioning, SHADOWS OF THE MIND, talks about cytoskeletons--structures smaller than neurons that have organizations based on individual molecules. These structures are small enough to retain quantum effects. Computers rely upon classical physics. But consciousness behaves a lot more like a quantum phenomenon.
Wayfriend wrote: Walking is an abstract.
How are physical, external actions abstract?
Cail wrote: We're using our bias to define what thinking, consciousness, and intelligence mean.
Of course we are. What's wrong with that? That's like saying we're using our own sun to study how stars generate light and heat. Why wouldn't we look at the only things we know are conscious in order to determine what consciousness is? On the other hand, why would we assume that AI computers were conscious when they have nothing in common with the one example of consciousness we know so well? I'm not saying that we should let our bias drive us to dismiss the issue of consciousness within a nonhuman entity. I'm saying that we shouldn't let our fear of making a bias-based mistake drive us to give machines the benefit of the doubt simply so we don't appear closed-minded. That's not a good enough reason to attribute consciousness to a machine.
Loremaster wrote: Our neural networks compute - process - data.
Our neural networks may be capable of carrying out the computations we perform in our thoughts, but computation isn't what neurons do. Sensory input isn't data. Data is pure information. Data is the abstract formalization of input into binary numbers. A photon striking my retina isn't data. The electrical impulses which register this impact aren't data, either. Nowhere in our neurons is physical input translated into information. That is done at a higher level than the neurons themselves. That is done in our mind, our thoughts, our understanding.
But turning sensory input into formalized information is just one of the things we do with our consciousness. Most of what we think, feel, and understand has nothing whatsoever to do with computation. You're just assuming that our brains act like computers. But love isn't built out of computations, no matter what is happening on the neuron level. And freewill isn't, either. Irrational processes do not derive from computation.
Fist and Faith wrote: It is difficult for us to even discuss this stuff, much less create AI, because of definitions. I am speaking of thinking-machines. I do not consider calculators to be intelligent to the slightest degree. The issue I'm talking about is creating something that can make decisions; has opinions; has free will (and to make the discussion even more difficult, Loremaster does not think free will exists); recognizes its own existence - then denying it the freedom to act on those abilities. This creation of ours need not be able to do complex calculations, have a perfect and/or huge memory, or any of the things we usually think of when we think of computers. If we don't want to call it AI, that's ok with me. Can you give a fairly comprehensive definition of the kind of intelligence you are talking about?
I agree that definitions are a large part of the problem. People have been hearing the term "artificial intelligence" for so long, and reading about it in fiction, that they've started to believe that this term means computers will be able to think like we do--or think at all. AI has never been about making conscious machines, because we don't have the slightest clue how to produce consciousness. No, AI has always been about mimicking human actions, the output of our conscious thought. Creating machines which can respond "intelligently" to their environment is a completely separate issue from creating a machine which has a mind. A mind is something which can NEVER manifest itself in external action. Subjectivity itself can never be externalized.
If you're talking about creating something that has opinions and freewill, then you're not talking about AI. We have no idea how to create a machine with opinions. No one is even working on that problem. (Why on earth would you need a machine with opinions?) Nor are we working on creating machines with freewill. Freewill is different from simply making decisions. As I said, our computer software already makes decisions all the time. My computer does tons of stuff that I don't tell it to do. It monitors programs which attempt to use the Internet, and then decides whether or not to let them access it. But this decision process isn't freewill. Nor can any algorithm compose freewill--because then it wouldn't be free. Freewill allows us to act irrationally. Computers can't act irrationally unless they are malfunctioning. Freewill isn't a malfunction.
My "comprehensive" definition of intelligence began with my Einstein example. It includes insight and understanding. The ability to perceive the inner workings of our world, to peel back the layers of appearance and prejudice to see deeper truths. Today, my 6-yr-old asked, "Why are there sewers underground?" I think that the ability to ask questions, to be curious, to challenge one's conception of reality--that's intelligence. It has nothing to do with blind mechanical processes. And that's why blind mechanical processes aren't intelligent. They can be rational. But rationality isn't intelligence. Rationality is merely a tool we use. Rationality can be employed without consciousness at all. Thus, intelligence requires consciousness. It is something that only conscious creatures can acquire (though they don't have to acquire it--many conscious creatures aren't intelligent). While consciousness is a prerequisite for intelligence, the appearance of intelligence doesn't itself imply consciousness. But appearance of intelligence is all that computers can manage--unless you build consciousness in them from the beginning. And we don't have a clue how to do that.
Malik23 wrote: Of course we are. What's wrong with that? ... That's not a good enough reason to attribute consciousness to a machine.
The issue is that our bias (hubris and arrogance work too) does tend to blind us to other possibilities. Certainly we shouldn't jump to the PC extremes you're suggesting could happen, but we have a very, very limited experience with what the definition of terms like consciousness is, and what the nature and manifestations of consciousness are.