Zarathustra wrote: I think that people fear AI because they imagine it will do things we don't want it to do, and take over. But as smart as these programs will be, they won't actually want things. They won't have their own goals. They won't be able to decide that they're better off without these pesky humans.
We're already using AI with absolutely no negative consequences. Your texting app uses AI to anticipate what words you're trying to type. Is anyone frightened of their texting app suddenly deciding to text people without their permission? Much less take over the world? Of course not. But if AI is so unpredictable and uncontrollable, why do we all use it daily without worrying about it?
It's like the people who worry about the government putting chips in us and tracking our every move (I know people like this personally) but don't worry about the chip they carry around with them everywhere they go. Cell phones already present all the dangers they allegedly fear, but they don't worry about them. AI will be like that. It already is.
First note: I'm not really one of those worried much about AI run amok on its own...though I'm pretty worried about the uses a smart bad actor could put even limited AI to. I'm SURE someone out there is working on an AI hacker...and just like the Go-playing AI did things humans never thought of [and don't know exactly how/why the AI "thought" of it], just like a couple of medical AIs have made discoveries humans didn't, even though they had the same data---things could get interesting. And I don't even consider these machines to be intelligent. [[I'm not sure anyone does, really---do they?]]
But to play Devil's advocate---they don't have to actually "want" anything. They just need an instruction or set of instructions that is badly written---a "goal" or "purpose" that isn't constrained appropriately, either on purpose or because of unintended/unexpected process/path branching.
Also, a general AI is a far different beast than the "idiot savant" things we're building now. If/when a general system becomes possible---well, it STILL won't need ACTUAL consciousness, or ACTUAL wants, desires, "divine" or other purposes. It just needs a couple of lines of code to make it "believe" it has those things.
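To make that concrete, here's a toy sketch [[purely illustrative---not any real system; every name and number in it is made up]]: a trivial optimizer with no wants, no beliefs, nothing but a badly specified objective, still steers itself down a path nobody intended.

```python
# Toy illustration of a badly constrained objective. The "agent" wants
# nothing; it just maximizes whatever number it was handed.

def run_agent(objective, actions, steps=5):
    """Greedily pick whichever action scores highest under the objective."""
    history = []
    for _ in range(steps):
        best = max(actions, key=objective)
        history.append(best)
    return history

# Intended goal: keep users informed.
# Objective actually handed to the machine: maximize engagement.
engagement = {
    "balanced_news": 3,
    "cat_videos": 5,
    "outrage_bait": 9,   # unintended, but it scores highest
}

actions = list(engagement)
print(run_agent(lambda a: engagement[a], actions))
# -> ['outrage_bait', 'outrage_bait', ...]
# No desire, no consciousness---just an unconstrained instruction
# followed to the letter.
```

Scale that up from a lookup table to a system that discovers its own high-scoring moves [like the Go player did] and the "it doesn't want anything" reassurance stops being reassuring.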
On the last---I'm not paranoid, but I DO worry about the cell capabilities. And those of all my devices. Anyone who isn't a little worried about those things is a fool. But also, there is mostly fuck-all you can do about it. I do a few things to make it HARDER, but my efforts are almost surely nothing more than trivial/annoying to the powers that be. The "targeted ads" I get are much more off-target than the ones most of my friends get---but they aren't ALL off-target. And it isn't the abilities of the devices/AI directly I worry about. It's the fact that all of that info is available to fuckheads and sociopaths. Human ones. Human ones that don't in any way have my needs, desires, or best interests in mind. The exact opposite.
Pretty sure I've seen you say something like "companies use that data to provide you the things and services you want more efficiently, what's wrong with that?"
There are two problems I see with that:
The first is that the information they gather doesn't BELONG to them...also, they don't just keep it, and they don't keep it secure. They both share/sell it AND totally SUCK at preventing it from being stolen. They barely try.
[[living with the fact of data theft is cheaper and easier and relatively harmless---for THEM---than protecting it.]]
The second is: I think you underestimate the extent to which they can flip "provide you things you want" into "make you want what they provide." And I mean "make" in the strong sense. Creating/causing a want. There's a continuum from mere advertising to propaganda to brainwashing/thought-control, and drawing any hard lines is a tricky proposition. But current techniques are quite a bit closer to the second and third than the first in many ways. And a decent AI [even a limited "expert system"] makes it easier. And those are already up and running.