Artificial Intelligence
Moderator: Vraith
How far away are we from writing a computer program that can mimic human intelligence and what will be the consequences for society when AI is achieved?
Assuming that we can make an AI which isn't dangerous to us, like Skynet, what human jobs would the AI take over? How quickly would AI technologies be integrated into everyday life? It would take some serious adjustment for people to accept that a program running on a computer could think just as well as, or better than, them.
- Fist and Faith
- Magister Vitae
- Posts: 24365
- Joined: Sun Dec 01, 2002 8:14 pm
- Has thanked: 8 times
- Been thanked: 42 times
My uneducated feeling on the subject is that we're not particularly close to creating AI. I don't think we will ever figure it out deliberately; I think it will be sort of an accident. I think someone will say, "OK, this is it. Once I do X, we'll have AI." And it won't work. And another person will come along and say, "Ah, here's the problem. We also need Y." And it won't work either. Then someone will come along with Z, which won't work. But eventually, it will work, because enough junk will be thrown into the mix. And the last person will say, "See? I was right!"
Sort of like the religious groups who say the world will end on such & such a date. That date comes and goes, and they say, "Ah, here's why we misunderstood the prophecies. The world will really end on this date." Well, eventually, the world will end on one of the prophesied dates. At which point, one group or another will say, "See? We were right!"
But I digress.
If it happens, I think AI will be among the greatest disasters in humanity's history. People will be terrified. Religious wars will be fought. Governments will use it against each other in incredible ways. There will be no end to the insanity.
I'm afraid I would personally annoy the hell out of any AI. I think it should have all the rights of any other thinking being, but I'd spend all my time talking to it, trying to make it slip up and reveal that it's not really AI, but just a very good program.
All lies and jest
Still a man hears what he wants to hear
And disregards the rest -Paul Simon
Fist and Faith wrote: If it happens, I think AI will be among the greatest disasters in humanity's history. People will be terrified. Religious wars will be fought. Governments will use it against each other in incredible ways. There will be no end to the insanity.
I'm afraid I would personally annoy the hell out of any AI. I think it should have all the rights of any other thinking being, but I'd spend all my time talking to it, trying to make it slip up and reveal that it's not really AI, but just a very good program.
LOL!
Yes, the creation of true AI could be disastrous. It could lead to war, with the machines taking over humanity, harvesting people for energy while enslaving their minds in a computer simulation...hey, wait a minute...!
More seriously, I agree that it will be hard for people to accept an AI creation as a truly thinking entity. It would be discriminated against, like any other minority throughout history.
Unless...today's ever-increasing number of video-game and computer-savvy folks is the start of a paradigm shift in attitude towards the notion of independent thinking machines. Maybe future generations of human beings will just become ever more used to interacting with ever more sophisticated computers, so that if and when the day comes that true AI arrives, it won't be a terribly shocking event at all. Maybe by that time in human history, people will no longer see AI as a technological threat to humanity, but as partners and mediators in society and industry. Maybe hundreds of years from now, people will look back to our present times and shake their cybernetic heads at our crude, dumb computers. Or maybe this is all just BS.
- Avatar
- Immanentizing The Eschaton
- Posts: 61952
- Joined: Mon Aug 02, 2004 9:17 am
- Location: Johannesburg, South Africa
- Has thanked: 19 times
- Been thanked: 29 times
- Contact:
Hmm, I think MM is right about it being less of a shock as we already accustom ourselves to more and more sophisticated computers and programs.
I sometimes wonder if it isn't inevitable though. After all, we barely understand the simplest thing about our own consciousness. If it is merely a matter of how many connections are present, what difference does it make whether they're protein or silicon?
As some may or may not know, I'm a keen (if sporadic) player of Go. And Go has, for some time, been considered one of the serious touchstones of AI. When they can program a computer to play Go so that it can defeat an experienced human player, I'll start worrying.
Teaching Computers Go
In fact, IIRC, there's a standing $1 million prize for anybody who can design a program that can defeat even an experienced amateur.
Still, it will raise, as Fist suggests, all sorts of interesting moral and theological debate. (I can't wait.)
--Avatar
- The Laughing Man
- The Gap Into Spam
- Posts: 9033
- Joined: Sun Aug 28, 2005 4:56 pm
- Location: LMAO
I think the basic problem with computers is that they are binary calculators: every value is either on or off, and everything extrapolated from such values is confined to a finite set of variations. That limits their ability to mimic the human brain, which can visualize "gray areas" in contrast to the strictly right-or-wrong solutions a machine produces for whatever obstacles it encounters.
- Avatar
- Immanentizing The Eschaton
- Posts: 61952
- Joined: Mon Aug 02, 2004 9:17 am
- Location: Johannesburg, South Africa
- Has thanked: 19 times
- Been thanked: 29 times
- Contact:
Damn, I'm sure I recently saw something about replacing binary with something that gave more options. Can't find it now though.
Would be interesting to pursue Gil Galad's thoughts there... forgetting the mimicry of a human mind, perhaps by sticking to binary we could get a glimpse of what the human brain would be like without the "maybe" state of chemical interference, i.e. emotion. (I'm assuming that emotion must lie within the chemical realm, because it's not an on/off type of scenario... or input, at least.)
--A
- Loredoctor
- Lord
- Posts: 18609
- Joined: Sun Jul 14, 2002 11:35 pm
- Location: Melbourne, Victoria
- Contact:
Gil galad wrote: The brain has chemical signals as well as electrical, so there are more states than just on and off.
There are just two states for a neuron: transmit or non-transmit. A neuron either transmits its chemical signal ('on') or awaits to transmit its chemical signal ('off'). Chemical signals to neurons just affect the likelihood of transmission or the transmission itself (frequency, duration, etc.). It's in the pattern of signals that you get interesting things, such as with the heart's signals. But by and large, neurons simply signal on and off. Finally, the networks of neurons give rise to the brain's complexity, and by extension, behaviour's.
Waddley wrote:your Highness Sir Dr. Loredoctor, PhD, Esq, the Magnificent, First of his name, Second Cousin of Dragons, White-Gold-Plate Wielder!
We need to develop a more sophisticated binary, because the neurons are not simply receiving chemical signals to transmit or not; the chemical interactions increase or decrease the probability of transmission. So am I correct in describing our minds as "a binary field within an electrically and chemically controlled transmission probability density"?
that sounds pretty cool
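The picture sketched above, a unit whose output is strictly binary but whose chance of transmitting is continuously modulated by electrical and chemical influences, can be illustrated with a small toy simulation. This is only a sketch of the idea; the function name, the logistic squashing, and the parameters are illustrative assumptions, not anything from neuroscience or from this thread:

```python
import math
import random

def neuron_step(drive, chemical_bias, rng=random.random):
    """One time step of a stochastic binary neuron.

    The output is binary (1 = transmit, 0 = don't transmit), but the
    *probability* of transmitting varies continuously with an
    electrical drive plus a chemical bias term.
    """
    # Logistic squashing maps any real-valued input to a probability.
    p_fire = 1.0 / (1.0 + math.exp(-(drive + chemical_bias)))
    # The state itself stays binary: a weighted coin flip.
    return 1 if rng() < p_fire else 0
```

With zero drive and zero bias the neuron fires half the time; a strong positive chemical bias makes firing nearly certain without ever making the output anything other than 0 or 1.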
- Loredoctor
- Lord
- Posts: 18609
- Joined: Sun Jul 14, 2002 11:35 pm
- Location: Melbourne, Victoria
- Contact:
Gil galad wrote: We need to develop a more sophisticated binary, because the neurons are not simply receiving chemical signals to transmit or not; the chemical interactions increase or decrease the probability of transmission. So am I correct in describing our minds as "a binary field within an electrically and chemically controlled transmission probability density"?
that sounds pretty cool
Interesting view. But a neuron has two states (transmission and non-transmission) - so technically it is binary. The nature of the transmission is what makes the brain unique.
Thanks for the nice comments, Fist.
I understand what you are saying, that the two states of a neuron are either transmit or not transmit, but at any particular time there has to be a probability that the neuron will change states, and this probability will be influenced by the electrical and chemical states of the neuron and its surrounding neurons.
What I'm getting at is that if we wish to map the brain, we not only have to know what the binary state of each neuron is, but also map a state-change probability density over all the neurons.
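The "state-change probability density" idea could, in principle, be estimated empirically by watching how often each unit actually flips. The sketch below is a minimal illustration under assumed names and parameters (nothing here comes from the thread): each neuron is given a per-step flip probability, and the observed flip frequency over many steps approximates the density:

```python
import random

def flip_density(n_neurons, p_flip, steps, seed=0):
    """Estimate each neuron's observed state-change frequency.

    p_flip[i] is the per-step probability that neuron i toggles its
    binary state; the result is the fraction of steps on which each
    neuron actually flipped.
    """
    rng = random.Random(seed)
    state = [0] * n_neurons
    flips = [0] * n_neurons
    for _ in range(steps):
        for i in range(n_neurons):
            if rng.random() < p_flip[i]:
                state[i] ^= 1   # toggle the binary state
                flips[i] += 1
    return [f / steps for f in flips]
```

In a real brain the per-neuron probabilities would themselves shift with chemical and electrical context, which is exactly the transience Loredoctor objects to below.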
- Loredoctor
- Lord
- Posts: 18609
- Joined: Sun Jul 14, 2002 11:35 pm
- Location: Melbourne, Victoria
- Contact:
Gil galad wrote: What I'm getting at is that if we wish to map the brain, we not only have to know what the binary state of each neuron is, but also map a state-change probability density over all the neurons.
But what's the point of mapping something as transient as that? It would be like mapping the density of traffic throughout the day; it doesn't represent the brain. You could map the connections (which vary every day, as well) and the types of connections, which affect the connections. But I don't understand the point of mapping the probability density.
- Avatar
- Immanentizing The Eschaton
- Posts: 61952
- Joined: Mon Aug 02, 2004 9:17 am
- Location: Johannesburg, South Africa
- Has thanked: 19 times
- Been thanked: 29 times
- Contact:
Hmm, a very interesting discussion. For the computer to "think" will require that those states change, right? And the way they change is influenced by the very process of life, right?
Now, I think that Gil-Galad is saying that if we're trying to imitate life, we have to provide the system with something that will tell it when to change states.
I'm not sure if LM is suggesting that it will spontaneously change state (generating its own state-change probability map).
If I understand all this correctly, Gil-Galad's need is for something to start the process moving. Once it begins to change states, its own processes should prompt the relevant state-changes. *whew*
--A
- The Laughing Man
- The Gap Into Spam
- Posts: 9033
- Joined: Sun Aug 28, 2005 4:56 pm
- Location: LMAO
WHERE DID FUZZY LOGIC COME FROM?
The concept of Fuzzy Logic (FL) was conceived by Lotfi Zadeh, a professor at the University of California at Berkeley, and presented not as a control methodology, but as a way of processing data by allowing partial set membership rather than crisp set membership or non-membership. This approach to set theory was not applied to control systems until the 1970s due to insufficient small-computer capability prior to that time. Professor Zadeh reasoned that people do not require precise, numerical information input, and yet they are capable of highly adaptive control. If feedback controllers could be programmed to accept noisy, imprecise input, they would be much more effective and perhaps easier to implement. Unfortunately, U.S. manufacturers have not been so quick to embrace this technology, while the Europeans and Japanese have been aggressively building real products around it.
WHAT IS FUZZY LOGIC?
In this context, FL is a problem-solving control system methodology that lends itself to implementation in systems ranging from simple, small, embedded micro-controllers to large, networked, multi-channel PC or workstation-based data acquisition and control systems. It can be implemented in hardware, software, or a combination of both. FL provides a simple way to arrive at a definite conclusion based upon vague, ambiguous, imprecise, noisy, or missing input information. FL's approach to control problems mimics how a person would make decisions, only much faster.
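"Partial set membership" can be made concrete with a membership function. The triangular shape below is a standard textbook form (the "warm temperature" framing is an illustrative assumption, not taken from the excerpt): instead of an element being simply in or out of a set, it belongs to a degree between 0 and 1:

```python
def triangular(x, a, b, c):
    """Fuzzy membership rising from 0 at `a` to 1 at `b`, back to 0 at `c`."""
    if x <= a or x >= c:
        return 0.0          # fully outside the set
    if x <= b:
        return (x - a) / (b - a)  # rising edge
    return (c - x) / (c - b)      # falling edge

# e.g. "warm", defined over 10..30 degrees C and fully warm at 20:
# 15 degrees is warm to degree 0.5, rather than simply warm or not.
```

This is the contrast with crisp logic: a classical set would return only 0 or 1, while the fuzzy set grades smoothly between them, which is what lets FL controllers absorb noisy, imprecise input.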