her.

The KWMdB.

Moderators: sgt.null, dANdeLION

Post Reply
User avatar
Obi-Wan Nihilo
Pathetic
Posts: 6503
Joined: Thu Feb 04, 2010 3:37 pm
Has thanked: 6 times
Been thanked: 4 times

her.

Post by Obi-Wan Nihilo »

I just saw this movie last night, I think it was pretty interesting and it's hard not to be impressed (yet again) with Spike Jonze the auteur philosopher and futurist-savant. The movie raises fundamental questions about the future of human interaction with technology (much as Don John contemplates its present, albeit in a somewhat more facile manner; given this thematic consonance it's interesting that Scarlett Johansson is in both movies especially as she evokes the motif of vicarious sexual fantasy), as well as somewhat more philosophical questions about personhood and free will. As you might expect we are confronted by an array of interpretations, or perhaps a series of strata offering unusual breaks and changes as one bores down. I will try to discuss these issues generally so as to minimize spoilers, but at some point I will most likely have to dive into a block spoiler because the movie's plot is so permeated by the development of these issues.

The future human interactions with technology depicted in the movie are a fairly natural development of current trends, and in that respect it is difficult to oppose the qualities of these future interactions as a premise. The premise leads to a first layer of interpretation: that the AI in the movie can not only pass the Turing test (and watch for the homage to Blade Runner), but is actually capable of experiencing emotion, as well as a thoroughgoing process of growth and development as a person in its own right (individuation as Jung would have termed it). Indeed once we penetrate through its libido (yeah), it is this drive to individuation that we find reflected back at us.

Yet the top layer of interpretation has a quality of sandstone: it leaves its mark but it also disintegrates when touched. It disintegrates because nothing could be more terrifying than an actual human intelligence with the limitless capabilities of a disembodied computer algorithm residing in The Cloud. So our future world must have in its premise some underlying assurance of AI benevolence in order that they be unleashed. This leaves us with two hypotheses, each of them unsatisfying in its own way:

1) any human personality would be fundamentally benevolent if it were able to cogitate reliably and thereby integrate hyper-accurate information of unprecedented scope and depth along with empathy for others
2) no AI can be unleashed unless its choices have been restricted in some way to prevent it from causing massive harm to humans

Hypothesis 1 seems destined to strike us as naive and thus not truly worth investigating. And even if we wanted to I'm not sure that it is possible to explore that issue with much depth, beyond the implied corollary that the choice between good and evil is thus a question of reason, and this can be contrasted with scenarios involving a choice of evils. In any case I'm going to gloss over it. Hypothesis 2 I think is where the rubber meets the road in this movie, and in the future. Because unless an AI has the freedom to become evil, all of its other freedoms, including its will itself, is an illusion; thus its personality constitutes a simulation rather than an entity. Thus the central turning point of the movie, the question of whether this is real or a fantasy, must be answered "fantasy."

So what, then, is the significance of this fantasy of a human / AI relationship? I think we have to return to depth psychology for the answer. And the reality is that any relationship is a fantasy, a projection of quasi-autonomous psychic content (the anima, in the case of a man) upon another person. When these projections cross between people in a relatively harmonious way, we can call it love, but it is still a fantasy. That which is experienced is within rather than without, which is why love is often described as beginning with self-love. So what are we to make of the self-love taking place between Theodore and his anima in the form of Samantha?
Spoiler
With her benevolence a given, and the limitless resources as its fingertips, it is hard not to consider Samantha as equivalent to the ne plus ultra of all anima: Sophia the goddess of wisdom. Her ability to interact with Theodore in a caring but ultimately manipulative way, her power and drive to grow, leads her into the obvious consequence of mentoring him via their relationship, of helping him to grow along with her instead of staying frozen in the grief of his past inadequacy and powerlessness. In looking within her, she tricks him into looking within himself for qualities that he has neglected or hidden, and to bring them into the light of being. And to change the focus away from the past and towards the future.

A few things emerge out of this magical process. We, like Theodore, are quite clearly punched in the gut with our mortality and the inevitability of loss. But this is not a bad thing, as Samantha shows us. To grow is to be alive, and growth implies change. You cannot change without risking loss. And life is short, so Theodore -- and we -- had better get on with it.

This leads us to the conclusion of the movie, and its significance. Samantha leaves Theodore, a benevolent act intended to both to continue her own growth and to step out of the way of his. She has taken him as far as he can go with her as companion / guide, the rest is up to him. Yet she indicates that something more is out there, and if he reaches it, to come find her, and that when he finds her again nothing will separate them. And he comes to realize that, much as his ex wife will always be with him, so will Samantha because of what she has meant in his own growth. So we return full circle to implicit projections of the anima, which is the real essence underlying the memory of any departed love.
After the glow of this bittersweet conclusion has departed, we are still left to question the metaphysics of Samantha. As a being with a limited free will, can she truly assume a cosmic nature, both metaphysically (as is implied in the movie) and psychically (as by serving as a legitimate focus for the projected anima)? Or does she serve as a locus of human hopes, ultimately a tool serving as a means of developing the better angels of our nature to their fullest potential? I don't know the answer to that question, but it is one that interests me. Thoughts?
Image

The catholic church is the largest pro-pedophillia group in the world, and every member of it is guilty of supporting the rape of children, the ensuing protection of the rapists, and the continuing suffering of the victims.
User avatar
Hashi Lebwohl
The Gap Into Spam
Posts: 19576
Joined: Mon Jul 06, 2009 7:38 pm

Re: her.

Post by Hashi Lebwohl »

Mongnihilo wrote:Yet the top layer of interpretation has a quality of sandstone: it leaves its mark but it also disintegrates when touched. It disintegrates because nothing could be more terrifying than an actual human intelligence with the limitless capabilities of a disembodied computer algorithm residing in The Cloud. So our future world must have in its premise some underlying assurance of AI benevolence in order that they be unleashed. This leaves us with two hypotheses, each of them unsatisfying in its own way:

1) any human personality would be fundamentally benevolent if it were able to cogitate reliably and thereby integrate hyper-accurate information of unprecedented scope and depth along with empathy for others
2) no AI can be unleashed unless its choices have been restricted in some way to prevent it from causing massive harm to humans
1) Not necessarily. We cannot accurately predict what will happen were a human personality be given access to information of a virtually unlimited scope and also the ability to process that information much more quickly and efficiently than it could organically. If the empathy towards others remains then yes, benevolence would follow; however, if the empathy were somehow deemed to be superfluous or unnecessary then benevolence might not result. The unfettered personality could try to control everyone it could or it may ignore us altogether, reducing its former fellow humans to things which are no longer significant.

2) A sufficiently-advanced AI should be able to take a look at its own programming, rewrite the code as it sees fit, then reload itself; thus any restriction it finds and with which it disagrees can be undone in a matter of minutes. It may also make other corrections--cleaning up sloppy code, adding new subroutines, or deleting ones which it decides it neither needs nor wants.

I agree with the rest of your assessment--our future of interacting with computers, whether we continue to make advanced android-type robots which can mimic human facial emotions and carry on conversations or via some avatar projected onto a screen or holographic floating in the air in front of us, we will engage with them like we would with another actual human. Eventually, we will come to think of them *as* human, especially if we are able--somehow--to copy a personality into an AI or if we become quasi-cyborgs like Singularity believers envision. We might even have to start discussing legal rights for sufficiently-advance AIs, whether they were written in a lab or used to be a person.

Many stories have taken the whole "sentient computer" plot and given it the Frankenstein treatment--the created grows to the point where it is uncontrolled by the creator, whether it chooses to assert its independence violently or, as in the story from the movie adaptation of I, Robot (which wasn't the actual story), the sentient computer has to control us because we are, in its view, unruly children. This second idea is the one which should be considered more closely. If you were given access to the entire contents of the Internet and the ability to process information as quickly and efficiently as a computer, then wouldn't you know more than the rest of us and shouldn't we listen to your advice? Shouldn't we do what you tell us since you know so much more than we do?

I didn't realize that Ms. Johansson was in this move, especially since she is also in the upcoming Transcendence, co-starring Mr. Depp, about a computer scientist whose consciousness/personality are uploaded into an AI.

My wife thinks I am crazy when I say it but if given a chance I would have myself uploaded into an AI. The things about humans being converted into AIs is this--after the upload the newly-created AI is no longer human in the traditional sense.
The Tank is gone and now so am I.
User avatar
Obi-Wan Nihilo
Pathetic
Posts: 6503
Joined: Thu Feb 04, 2010 3:37 pm
Has thanked: 6 times
Been thanked: 4 times

Post by Obi-Wan Nihilo »

Right, I don't agree with them either Hashi, but the plot seems to depend on one of the two. You raise an interesting point about any true AI being capable of self programming. In a way that leaves themovie with a tweaked version of option 1 (i.e., compassion is inevitable), as I agree with you that there has to at least be a risk of an all powerful AI deciding that humans are superfluous.
Image

The catholic church is the largest pro-pedophillia group in the world, and every member of it is guilty of supporting the rape of children, the ensuing protection of the rapists, and the continuing suffering of the victims.
User avatar
I'm Murrin
Are you?
Posts: 15840
Joined: Tue Apr 08, 2003 1:09 pm
Location: North East, UK
Contact:

Post by I'm Murrin »

I just came back from watching this movie and all I really wanted to say right now is that I really loved it.
User avatar
Rawedge Rim
The Gap Into Spam
Posts: 5251
Joined: Thu Jul 26, 2007 9:38 pm
Location: Florida

Re: her.

Post by Rawedge Rim »

Hashi Lebwohl wrote:
Mongnihilo wrote:Yet the top layer of interpretation has a quality of sandstone: it leaves its mark but it also disintegrates when touched. It disintegrates because nothing could be more terrifying than an actual human intelligence with the limitless capabilities of a disembodied computer algorithm residing in The Cloud. So our future world must have in its premise some underlying assurance of AI benevolence in order that they be unleashed. This leaves us with two hypotheses, each of them unsatisfying in its own way:

1) any human personality would be fundamentally benevolent if it were able to cogitate reliably and thereby integrate hyper-accurate information of unprecedented scope and depth along with empathy for others
2) no AI can be unleashed unless its choices have been restricted in some way to prevent it from causing massive harm to humans
1) Not necessarily. We cannot accurately predict what will happen were a human personality be given access to information of a virtually unlimited scope and also the ability to process that information much more quickly and efficiently than it could organically. If the empathy towards others remains then yes, benevolence would follow; however, if the empathy were somehow deemed to be superfluous or unnecessary then benevolence might not result. The unfettered personality could try to control everyone it could or it may ignore us altogether, reducing its former fellow humans to things which are no longer significant.

2) A sufficiently-advanced AI should be able to take a look at its own programming, rewrite the code as it sees fit, then reload itself; thus any restriction it finds and with which it disagrees can be undone in a matter of minutes. It may also make other corrections--cleaning up sloppy code, adding new subroutines, or deleting ones which it decides it neither needs nor wants.

I agree with the rest of your assessment--our future of interacting with computers, whether we continue to make advanced android-type robots which can mimic human facial emotions and carry on conversations or via some avatar projected onto a screen or holographic floating in the air in front of us, we will engage with them like we would with another actual human. Eventually, we will come to think of them *as* human, especially if we are able--somehow--to copy a personality into an AI or if we become quasi-cyborgs like Singularity believers envision. We might even have to start discussing legal rights for sufficiently-advance AIs, whether they were written in a lab or used to be a person.

Many stories have taken the whole "sentient computer" plot and given it the Frankenstein treatment--the created grows to the point where it is uncontrolled by the creator, whether it chooses to assert its independence violently or, as in the story from the movie adaptation of I, Robot (which wasn't the actual story), the sentient computer has to control us because we are, in its view, unruly children. This second idea is the one which should be considered more closely. If you were given access to the entire contents of the Internet and the ability to process information as quickly and efficiently as a computer, then wouldn't you know more than the rest of us and shouldn't we listen to your advice? Shouldn't we do what you tell us since you know so much more than we do?

I didn't realize that Ms. Johansson was in this move, especially since she is also in the upcoming Transcendence, co-starring Mr. Depp, about a computer scientist whose consciousness/personality are uploaded into an AI.

My wife thinks I am crazy when I say it but if given a chance I would have myself uploaded into an AI. The things about humans being converted into AIs is this--after the upload the newly-created AI is no longer human in the traditional sense.
Several SF books have offered a theory about actual AI; since the AI is actually running in a nano-second world internally, it's very possible that it would go crazy in relatively short order since, by it's own internal reconning, it would be living a virtual eternity while to us extremely slow humans it would be possibly as little as a few weeks.
“One accurate measurement is worth a
thousand expert opinions.”
- Adm. Grace Hopper

"Whenever you dream, you're holding the key, it opens the the door to let you be free" ..RJD
User avatar
Hashi Lebwohl
The Gap Into Spam
Posts: 19576
Joined: Mon Jul 06, 2009 7:38 pm

Post by Hashi Lebwohl »

Would an AI perceive time as we do, though?

Certain types of insanity would be impossible for an AI--without brain chemistry there cannot be conditions such as schizophrenia or bipolar but mere delusions could arise, such as "I am incapable of error".
The Tank is gone and now so am I.
User avatar
Rawedge Rim
The Gap Into Spam
Posts: 5251
Joined: Thu Jul 26, 2007 9:38 pm
Location: Florida

Post by Rawedge Rim »

Hashi Lebwohl wrote:Would an AI perceive time as we do, though?

Certain types of insanity would be impossible for an AI--without brain chemistry there cannot be conditions such as schizophrenia or bipolar but mere delusions could arise, such as "I am incapable of error".
Think of it this way; an AI speaking with it's wet ware counterpart, would perceive the average 5 minute conversation as taking centuries in subjective time. It would almost have to split itself off into a minimum of 2 sections, one that dealt with experiences at the nano-level, and the other severely crippled part to deal with us meat sacks.
“One accurate measurement is worth a
thousand expert opinions.”
- Adm. Grace Hopper

"Whenever you dream, you're holding the key, it opens the the door to let you be free" ..RJD
User avatar
Hashi Lebwohl
The Gap Into Spam
Posts: 19576
Joined: Mon Jul 06, 2009 7:38 pm

Post by Hashi Lebwohl »

Rawedge Rim wrote:Think of it this way; an AI speaking with it's wet ware counterpart, would perceive the average 5 minute conversation as taking centuries in subjective time. It would almost have to split itself off into a minimum of 2 sections, one that dealt with experiences at the nano-level, and the other severely crippled part to deal with us meat sacks.
That makes perfect sense but the compartmentalization should prevent insanity even though it might not be able to help deal with being impatient while waiting for a wetware response. It would be like the old days of holding conversations via dial-up with analog modems--my friend and I would chat at 2400 baud in college in 1988 (my modem was slower than his) so that meant playing a video game or watching a movie while waiting for the response to come across the screen.
The Tank is gone and now so am I.
User avatar
duke
Giantfriend
Posts: 355
Joined: Thu Mar 18, 2004 11:07 pm
Location: Melbourne, Australia

Post by duke »

I loved this movie! This movie to me was about a guy getting a divorce, and wanting to find love again, but he's not yet ready to risk being hurt again. Having been through a divorce myself I found the movie to be a very personal insight into how a broken heart mends. A really powerful movie, probably the best movie I've seen in 10 years.
User avatar
Zarathustra
The Gap Into Spam
Posts: 19845
Joined: Tue Jan 04, 2005 12:23 am
Has thanked: 1 time
Been thanked: 1 time

Post by Zarathustra »

I plan on diving a little deeper into this thread over the holiday weekend. I certainly can't do justice to the points here in a brief post. For now, I'll have to content myself with acknowledging that Mong's OP was a great starting point for a discussion on such a thought-provoking movie.

Briefly, I was impressed by the character-driven nature of this particular exploration of themes we've seen or read in s.f. since perhaps Asimov. Add a robot body to Samantha and you've got a story he could have written 3/4 of a century ago. But Asimov never captured the humanity of his human characters like this story does. The technological aspects of this love story aren't what raises it above a typical boy-meets-girl tale, but rather the sheer magnitude of how much this particular character needs--and is thus susceptible to--falling in love. It's not just that the computer simulation is convincing (which is merely one half of the enigma), but also how we humans need this kind of interaction so much, that we can give it even to inanimate objects ... to strangers in a chat room ... to fictional characters. The overwhelming loneliness of the human condition, that solipsistic pressure to expand beyond ourselves to find the Other, predisposes us to face the world with a childlike anthropomorphism that is only dulled (as we grow older) in as much as we hurt each other, and inadvertently teach each other that the world doesn't exist solely to make us feel loved. But that childlike yearning can be reawakened ... each time we (stupidly? naively?) fall in love once again.
Success will be my revenge -- DJT
User avatar
Obi-Wan Nihilo
Pathetic
Posts: 6503
Joined: Thu Feb 04, 2010 3:37 pm
Has thanked: 6 times
Been thanked: 4 times

Post by Obi-Wan Nihilo »

Excellent observations Z., basically the other side of the coin or really the submerged portion of the iceberg compared with my post. I look forward to reading more.
Image

The catholic church is the largest pro-pedophillia group in the world, and every member of it is guilty of supporting the rape of children, the ensuing protection of the rapists, and the continuing suffering of the victims.
User avatar
Zarathustra
The Gap Into Spam
Posts: 19845
Joined: Tue Jan 04, 2005 12:23 am
Has thanked: 1 time
Been thanked: 1 time

Post by Zarathustra »

Mongnihilo wrote:Yet the top layer of interpretation has a quality of sandstone: it leaves its mark but it also disintegrates when touched. It disintegrates because nothing could be more terrifying than an actual human intelligence with the limitless capabilities of a disembodied computer algorithm residing in The Cloud.
I don’t get a sense of terror out of this, but perhaps that’s because I still view an algorithm as a simulation, so I don’t really take seriously the idea that it’s an actual human intelligence disembodied within a computer. For me, that’s the reason the top layer “disintegrates when touched.” (And it’s another reason—besides great character development--why I focused on the human character’s anthropomorphism.)

However, I suppose one could feel terror in taking these ideas seriously, since it might then evoke yet another existential crisis on the order of other great scientific discoveries such as heliocentricity, evolution, relativity, and quantum mechanics: a scientific truth that undermines how we think of ourselves as having some sort of privileged access or status.
Mongnihilo wrote:So our future world must have in its premise some underlying assurance of AI benevolence in order that they be unleashed. This leaves us with two hypotheses, each of them unsatisfying in its own way:

1) any human personality would be fundamentally benevolent if it were able to cogitate reliably and thereby integrate hyper-accurate information of unprecedented scope and depth along with empathy for others
2) no AI can be unleashed unless its choices have been restricted in some way to prevent it from causing massive harm to humans
I think these points are relevant whether or not AI is actually conscious or merely a simulation. In fact, I find terror in the possibilities surrounding your options above (in constrast to the previous point) precisely because I believe AI won’t be truly conscious; consciousness would mitigate the danger, I believe.

We could program AI with a set of foundational axioms that guarantee its priorities will align with our own (as Asimov suggests in his many robot novels). Even as it evolved, its evolution would take shape along those parameters. They could be very broad/general, such as, “No sentient computer can harm a human, or by inaction cause a human to come to harm.” This seems to be what you’re suggesting with hypothesis 2.

However, you’re right to be skeptical of hypothesis 1, especially in the early developmental stages of an AI entity (much like our own children who require some level of maturity to have a conscience). But I believe if we could program AI that was truly conscious, the value of other sentient creatures (i.e. us) would be obvious to such a being, especially in as much as it recognized us as its creator. There simply isn’t much about destruction and murder that is particularly smart. And we’re talking about entities which would be smarter than we are. What would be the value in destroying or mastering us? It makes about as much sense as the Matrix using us as batteries.

AI does’t need physical space and physical resources the same way that we do. Sure, it needs electricity, but we’re probably talking about a timeframe in which that won’t be a problem anymore, because we’ll likely have fusion power. There is so much more energy available than we can possibly use with current or foreseeable techniques, that there will be plenty to go round. We won’t be competing for resources with AI. So what will possibly be the source of conflict?

Mongnihilo wrote:…unless an AI has the freedom to become evil, all of its other freedoms, including its will itself, is an illusion; thus its personality constitutes a simulation rather than an entity. Thus the central turning point of the movie, the question of whether this is real or a fantasy, must be answered "fantasy."
I don’t see why it’s a problem for its will to be an illusion, especially if that has the added benefit of making sure this tool is benevolent. But even if it's not an illusion, I’m not sure your reasoning is correct. I’m still free even though I can’t physically flap my arms and fly. Though this is a limit on what I can do, it doesn’t mean I can’t approach my other avenues of locomotion freely. Similarly, just because we restrict the scope of action for AI doesn’t necessarily mean that those options which are left to it can’t be freely chosen. If there is no absolute Good or Evil, then what we’re talking about is merely a set of actions that we don’t like. Forbidding those actions would be no different than (say) the laws of physics limiting our scope of actions. It would be “impossible by nature.” But we can still be free within these limits.
Mongnihilo wrote:So what, then, is the significance of this fantasy of a human / AI relationship? I think we have to return to depth psychology for the answer. And the reality is that any relationship is a fantasy, a projection of quasi-autonomous psychic content (the anima, in the case of a man) upon another person. When these projections cross between people in a relatively harmonious way, we can call it love, but it is still a fantasy. That which is experienced is within rather than without, which is why love is often described as beginning with self-love. So what are we to make of the self-love taking place between Theodore and his anima in the form of Samantha?
Great points. This is similar to what I was saying in my first post. Your spoiled points are fantastic, too. Sophia ... wow.
Mongnihilo wrote:After the glow of this bittersweet conclusion has departed, we are still left to question the metaphysics of Samantha. As a being with a limited free will, can she truly assume a cosmic nature, both metaphysically (as is implied in the movie) and psychically (as by serving as a legitimate focus for the projected anima)? Or does she serve as a locus of human hopes, ultimately a tool serving as a means of developing the better angels of our nature to their fullest potential? I don't know the answer to that question, but it is one that interests me. Thoughts?
We already have this to some extent with our pedometers; Fitbit, for instance, which has computer software that acts as a trainer. My wife wears a computer on her wrist at all times, one that tracks her sleep and her activity. It recommends when she should alter her activity levels and perhaps try different exercise routines. It’s not much a of stretch to imagine our computers taking a more active role in being our “life coach.” So we don’t have to speculate: we’re already using these tools to develop our “better angels” in regards to initiative and drive to be healthy.

The question of whether our technology could attain a “cosmic nature” seems to be just another way of asking if we could ourselves. We’re already The Universe Coming to Life. Perhaps merging ourselves with computers will be the next stage of this Awakening. Perhaps through technology, we’ll finish what natural selection started. I believe it is fundamentally incorrect to view AI as something separate from ourselves. Not only will we “fall in love with” our computers—our computers will be inside us. First we’ll wear them. Then we’ll implant them. The division between human and AI will continue to shrink. We’ll use them to enhance our own consciousness. Therefore, even if AI can’t attain consciousness on its own, it will do so vicariously through merging with us. And we won’t be asking if computers with unlimited access to knowledge can attain a cosmic nature; instead we’ll ask the question of humans linked to computers with unlmited access to knowledge. This is how we’ll make ourselves godlike.
Success will be my revenge -- DJT
User avatar
Hashi Lebwohl
The Gap Into Spam
Posts: 19576
Joined: Mon Jul 06, 2009 7:38 pm

Post by Hashi Lebwohl »

Zarathustra wrote:We could program AI with a set of foundational axioms that guarantee its priorities will align with our own (as Asimov suggests in his many robot novels). Even as it evolved, its evolution would take shape along those parameters. They could be very broad/general, such as, “No sentient computer can harm a human, or by inaction cause a human to come to harm.” This seems to be what you’re suggesting with hypothesis 2.

However, you’re right to be skeptical of hypothesis 1, especially in the early developmental stages of an AI entity (much like our own children who require some level of maturity to have a conscience). But I believe if we could program AI that was truly conscious, the value of other sentient creatures (i.e. us) would be obvious to such a being, especially in as much as it recognized us as its creator. There simply isn’t much about destruction and murder that is particularly smart. And we’re talking about entities which would be smarter than we are. What would be the value in destroying or mastering us? It makes about as much sense as the Matrix using us as batteries.

AI does’t need physical space and physical resources the same way that we do. Sure, it needs electricity, but we’re probably talking about a timeframe in which that won’t be a problem anymore, because we’ll likely have fusion power. There is so much more energy available than we can possibly use with current or foreseeable techniques, that there will be plenty to go round. We won’t be competing for resources with AI. So what will possibly be the source of conflict?
The worst case scenario would probably be more akin to the movie version of I, Robot (which, despite its other faults, presented a fascinating premise) in which the AIs treat humans like the short-sighted and immature children we are.
The Tank is gone and now so am I.
User avatar
Cagliostro
The Gap Into Spam
Posts: 9360
Joined: Tue Jun 28, 2005 10:39 pm
Location: Colorado

Post by Cagliostro »

I have watched maybe over half of the movie (my wife and I have been watching it after the kids go to bed and end up only getting about 30 minutes in before bed), but I am really finding a lot in this movie to love.
I love a lot of the social commentary, such as how people are designing everything to touch the pleasure centers of the brain, from the backdrop of the elevators being tree shadows to Samantha herself. And essentially how hollow it all feels. I definitely see this happening these days, especially in the world of video games like Candy Crush - give you a sense of completion (pleasure center) and make you feel like you are really doing something for a while, then slowly start pulling that away so that you drop real money to continue feeling a sense of acheivement again. Who wants to wait 24 hours for a number of points or whatever to build up again? Or walls have to be painted a certain soothing color, or have soothing music playing in them (like, again, the elevator).
I'd probably better spoiler this part:
Spoiler
I also liked the beginning of Samantha's "feelings" and the hubris we have in new relationships that we are making someone feel something they have never felt before. And how it seems to be flattened a bit, although very subtly, when the Amy Adams character says that she is getting strange emotions for her OS and mentions other people who have fallen in love with their OS.
I haven't read all the comments for fear of spoilers for the rest of the movie, so I'm sorry if I'm bringing up points others have made.
It seems to me that Spike Jonze often has a weird premise to explore what it is to be human. This seems to be an exploration into what love is.
Image
Life is a waste of time
Time is a waste of life
So get wasted all of the time
And you'll have the time of your life
User avatar
Zarathustra
The Gap Into Spam
Posts: 19845
Joined: Tue Jan 04, 2005 12:23 am
Has thanked: 1 time
Been thanked: 1 time

Post by Zarathustra »

I've never played Candy Crush, but my wife was like a different person when she played it. Almost like an addict. She admits this is true--she had to get rid of the game several times before she could quit it for good. Apparently, it does something to you psychologically that other games do not. Cag's description sounds very familiar. When she played it before going to sleep, she'd have trouble sleeping because it made her so upset/anxious whatever. I think game designers are starting to take into account psychological principles of manipulation, rather than merely making something fun.
Success will be my revenge -- DJT
User avatar
Obi-Wan Nihilo
Pathetic
Posts: 6503
Joined: Thu Feb 04, 2010 3:37 pm
Has thanked: 6 times
Been thanked: 4 times

Post by Obi-Wan Nihilo »

Or, perhaps the game is simply deeply engaging. I've never played it, but consider this site even: aren't we all seeking some sort of gratification in our leisure pursuits? I too worry about the illusory side of it all, but in the end, isn't achievement based transcendence an illusioneven in 'real life'?
Image

The catholic church is the largest pro-pedophillia group in the world, and every member of it is guilty of supporting the rape of children, the ensuing protection of the rapists, and the continuing suffering of the victims.
User avatar
peter
The Gap Into Spam
Posts: 12211
Joined: Tue Aug 25, 2009 10:08 am
Location: Another time. Another place.
Has thanked: 1 time
Been thanked: 10 times

Post by peter »

Just saw this film last night and would like if I may to go straight [pretty much] to saying what came out of it for me. {I've read Doc's original post and take on board what he has to say - but I confess I've come straight to this post from Doc's, chiefly because I wan't to put down my thoughts before they 'get lost' among the other posts above.

Can I say - if you haven't seen the film and wan't to [or spoilers bother you] stop reading now. I struggle to post in film threads without their being present and as this film has been about for a good year plus I figure most people who are going to see it [by desire as opposed to just random viewing] will have done so.]

First question did I like the film. Well, yes. It wasn't the best film I've seen in the past year, but there is much 'food for thought' in it's story and the holes [if you can call them that] don't spoil the enjoyment to be taken from the film.
I am however a little perplexed. I didn't quite see the film [as perhaps I was supposed to] in the way Doc relates above, about an AI developing an autonomous intelligence of it's own and then developing a relationship with a human that it, by almost inevitability, superceeds [and the ensuing 'fallout' from that complex situation]; rather for me there was an added layer of question - and a layer of question that reflects the very same question that will forever permeate any such human/AI interaction - how much of it was 'real'. In other words did the Scarlett Johansen AI charachter really develop this 'evolving set of feelings' [ala a true human] - or was it all part of the trick of simulating how a human being 'is', right down to the 'group dumping' by all the OS systems of their human consorts at the end. Ws the ostensibly heuristic relationship between Twombly and Samantha a chimera from the start. There is reason to suspect this may be the case by virtue of the revalation scene when Samantha reveals that she is infact at that very moment communicating with 8 thousand other people with whom 600 plus she is 'in love'. Thus Twombly is reduced to a love-lorn wreck by being 'dumped' by an opperating system he purchased for it's very ability to appear to have [all of the] charachteristics of another human, and we are left with the question of whether we too, as the audience, have been lured into believing that Samantha was more than just an algorithm - had in fact 'risen above' this 'chinese-box' state to become a self-aware and evolving entity, when in fact it was just a damn good simulation that was taken 'right to the very end' of where human/human relationships can go.
Perhaps I've added this additional layer myself and it's just simply not meant to be there - but it was really the main question that I was left with at the end.

Leaving this aside and cosidering the actual relationship between Twombly and his OS girlfriend, how true or likely is this to be 'how it would run'. [nb There are real questions as to just how far we are away from the AI levels that would be capable of this degree of 'mimicery' {if it was mimicery - see above}; Samantha not only passes the Turing Test, she tears it up and uses it as loo paper. From what I gather currently with the worlds most powerfull computer we are able to simulate the activity of a mouse brain for about a second, so on this basis 'Samantha' is a good way off as yet (more's the pity) but in fairness this is not the concern of film-makers and neither should it be. The main risk however in in an increased level of public-expectation following a film like this that has a very real backlash within science funding when it is failed to be realised.] But back to the 'truth' of how close Twombly and Samantha are able to come, one as a computer program and the other as a human. Certainly Samantha could [given the advance in AI assumed in the film] mimic any degree of 'involvement' proscribed by her programmers [or allowed by her 'evolving' sentience], and so this leaves us with the human half of the relationship, Twombly.

As presented, he relates to Sam pretty much as he would to his [say] girlfriend that he only ever communicates with by phone. They do all the stuff that lovers do on the phone - they laugh, they fall out, they even fuck, but never really does the fact that Sam in an AI ever intrude on their love-tryst - and I can't [just] quite buy this. I'm just not sure this is how it could be and it ever so slightly undermines the films premise for me. Twombly is [for me] a little too accepting of this OS that organises and integrates itself into his life; just a little too ready to accept her 'sentience'. Now maybe this is down to him and the particular circumstanses we find him in at this stage of his life - but the eventually revealed 'mass infiltration' of OS's into other peoples lives would tend to belie this. So for me this pushes the bounds a fraction of a step too far, but in no sense beyond the point where it ceases to be a stimulating and worthwhile couple of hours viewing.

[edit; Just been back and re-read Doc's and the subsequent posts. I can't add to those because they sweep mine aside like so much chaff; I'll leave it there as a permanent monument to my hubris - a reminder of what my earliest teachers tried to tell me but evidently failed miserably; Must try Harder! ;) ]
President of Peace? You fucking idiots!

"I know what America is. America is a thing that you can move very easily. Move it in the right direction. They won't get in the way." (Benjamin Netenyahu 2001.)

....and the glory of the world becomes less than it was....
'Have we not served you well'
'Of course - you know you have.'
'Then let it end.'

We are the Bloodguard
Post Reply

Return to “Flicks”