The future human interactions with technology depicted in the movie are a fairly natural development of current trends, and in that respect it is difficult to object to them as a premise. The premise leads to a first layer of interpretation: that the AI in the movie can not only pass the Turing test (and watch for the homage to Blade Runner), but is actually capable of experiencing emotion, as well as a thoroughgoing process of growth and development as a person in its own right (individuation, as Jung would have termed it). Indeed, once we penetrate through its libido (yeah), it is this drive to individuation that we find reflected back at us.
Yet this top layer of interpretation has a quality of sandstone: it leaves its mark, but it also disintegrates when touched. It disintegrates because nothing could be more terrifying than an actual human intelligence with the limitless capabilities of a disembodied computer algorithm residing in The Cloud. So our future world must have in its premise some underlying assurance of AI benevolence before such intelligences can be unleashed. This leaves us with two hypotheses, each of them unsatisfying in its own way:
1) any human personality would be fundamentally benevolent if it were able to cogitate reliably and thereby integrate hyper-accurate information of unprecedented scope and depth along with empathy for others
2) no AI can be unleashed unless its choices have been restricted in some way to prevent it from causing massive harm to humans
Hypothesis 1 seems destined to strike us as naive and thus not truly worth investigating. And even if we wanted to, I'm not sure that it is possible to explore that issue with much depth, beyond the implied corollary that the choice between good and evil is thus a question of reason, and this can be contrasted with scenarios involving a choice of evils. In any case I'm going to gloss over it. Hypothesis 2, I think, is where the rubber meets the road in this movie, and in the future. Because unless an AI has the freedom to become evil, all of its other freedoms, including its will itself, are an illusion; its personality then constitutes a simulation rather than an entity. Thus the central turning point of the movie, the question of whether this is real or a fantasy, must be answered "fantasy."
So what, then, is the significance of this fantasy of a human / AI relationship? I think we have to return to depth psychology for the answer. And the reality is that any relationship is a fantasy: a projection of quasi-autonomous psychic content (the anima, in the case of a man) upon another person. When these projections cross between people in a relatively harmonious way, we can call it love, but it is still a fantasy. That which is experienced is within rather than without, which is why love is often described as beginning with self-love. So what are we to make of the self-love taking place between Theodore and his anima in the form of Samantha?
Spoiler
A few things emerge out of this magical process. We, like Theodore, are quite clearly punched in the gut with our mortality and the inevitability of loss. But this is not a bad thing, as Samantha shows us. To grow is to be alive, and growth implies change. You cannot change without risking loss. And life is short, so Theodore -- and we -- had better get on with it.
This leads us to the conclusion of the movie, and its significance. Samantha leaves Theodore, a benevolent act intended both to continue her own growth and to step out of the way of his. She has taken him as far as he can go with her as companion / guide; the rest is up to him. Yet she indicates that something more is out there, that if he reaches it he should come find her, and that when he finds her again nothing will separate them. And he comes to realize that, much as his ex-wife will always be with him, so will Samantha, because of what she has meant in his own growth. So we return full circle to implicit projections of the anima, which is the real essence underlying the memory of any departed love.