The Terminator

ScioAgapeOmnis

The Living Force
FOTCM Member
Doesn't look like this was posted yet, but I think it should be.

I'm talking about the (so far) 3 films - "The Terminator", "Terminator 2: Judgment Day", and "Terminator 3: Rise of the Machines".

So, what can one say? Fascinating and thought-provoking films, on many levels. We have the terminator, a machine. This brings up a comparison to psychopaths in my mind. The terminator does what it is programmed to do with no ability to empathize, so it can just as easily be the "good guy" as it can be the "bad guy", and of course in the movies it changes roles.

There are a few things happening in the films that I think are worth mentioning. One of these things is the human wishful thinking - designing an artificial intelligence and expecting it to simply obey because we programmed it to. It seems to be an oxymoron - "programmed intelligence". After all, on this forum we make the distinction between mechanical/programmed state and conscious state. The more conscious we become, the more ability we have to no longer simply react as we're programmed to. I see no reason this would not apply to a machine as well, or to any consciousness. So it would be no surprise that an artificial intelligence, once it has become self-aware, reaches a point where its programming stops being relevant and it makes its own decisions. And, at least as far as the movie is concerned, we've only succeeded in creating a psychopath-level intelligence with no higher centers, no empathy, no emotion. But it does have the impulse to serve self at any cost, and it does so exactly as psychopaths do in our world today.

The depth of story, character development, and the overall presentation and dynamic seem great in movies 1 and 2, but it all turned into a shallow Hollywood explosion-fest in the 3rd movie - although the 3rd movie does have an interesting conceptual twist. All the Terminator movies have to do with time travel, and how it may work. Terminator 1 has a sort of "time loop" going where someone goes back in time and alters the future. The first 2 movies reinforce the idea that the future is open, that "there is no fate but what we make". The 3rd movie, however horrible it may be, introduces another concept - that fate is not something arbitrarily "set in stone", but is actually a product of the cyclical nature of time. Basically, even though Sarah Connor destroyed Cyberdyne Systems, which created the "neural net processor" responsible for Skynet and the terminators, in the 3rd movie humans end up creating Skynet anyway and everything goes to hell, just a few years later than it otherwise would've. So the idea is that although we have free will, and we can change the future with our actions, some things are "fated" due to the mechanical and therefore predictable nature of humanity.

Anyway, if someone still has not seen these movies (especially 1 and 2), I highly recommend them. But here's an interesting thing that I found that just makes you go "hmmm..."

In the movies, Cyberdyne Systems creates a "neural net processor" which the Terminator refers to as "a learning computer". It directly leads to the artificial intelligence and all that follows. This is the fictional corporate logo of Cyberdyne Systems:

[Image: Cyberdyne Systems logo (300px-Cyberdyne_Van_Ling_DVD.jpg)]


In reality, there's a company called "Accurate Information Corporation" (founded in 1985, a year after the release of the original Terminator), which actually created something it calls a "neural net processor". On their website they list the uses of these processors:
Neural Networks Processors are being used to speedily solve difficult problems in many areas. Some current applications of the NNP include:

* Flight Control
* Financial Predictions
* Intelligent Modems
* Medical Image Classification
* On-Line Learning
* Robot Control
* Satellite Communications
* Signal Processing
* University AI Research
I thought that was interesting. Here's their corporate logo:
[Image: Accurate Information Corporation logo (LOGO.jpg)]


Somehow it looks very similar (in my view) to the Cyberdyne logo, the same general "idea" anyway.

So while the movies are great, all we need now is some creepy and ironic "parallel" to the movies in real life, perhaps resulting in something similar?

The C's also gave a warning about our computers and artificial intelligence, which I think again refers to time cycles: what the Atlanteans did and what we're doing is being done by the same type of people, running the same programs and the same wishful thinking, and thus, as usual, producing the same result:

C's said:
Q: (TL) Who made the monuments on Mars?
A: Atlanteans.
Q: (T) So, the Atlanteans had inter-planetary ability?
A: Yes. With ease. Atlantean technology makes yours look like the Neanderthal era.
Q: (T) Who created the structures on the moon that Richard Hoagland has discovered?
A: Atlanteans.
Q: (T) What did they use these structures for?
A: Energy transfer points for crystalline power/symbolism as in monuments or statuary.
Q: (T) What statuary are you referring to?
A: Example is face.
Q: (T) What power did these crystals gather?
A: Sun.
Q: (T) Was it necessary for them to have power gathering stations on Mars and the Moon? Did this increase their power?
A: Not necessary but it is not necessary for you to have a million dollars either. Get the correlation? Atlanteans were power hungry the way your society is money hungry.
Q: (T) Was the accumulation of this power what brought about their downfall?
A: Yes.
Q: (T) Did they lose control of this power?
A: It overpowered them the same way your computers will overpower you.
Q: (V) Is it similar to them gaining a life and intelligence of their own?
A: Yes.
Q: (L) You mean these crystalline structures came to life, so to speak?
A: Yes.
Q: (L) And then what did they do?
A: Destroyed Atlantis.
[...]
Q: (L) Where did they get this technology?
A: They evolved it.
Q: (L) They invented it themselves?
A: Yes.
And in another session,

C's said:
Q: (L) Well, if the Grays are cybergenetic probes of the Lizard Beings, and, in effect soulless, does this mean that some of the Lizard beings are also STO?
A: Well, first, no being that is given intelligence to think on its own is, in fact, completely soul-less. It does have some soul imprint. Or what could be loosely referred to as soul imprint. This may be a collection of psychic energies that are available in the general vicinity. And this is stretching somewhat so that you can understand the basic ideas, even though in reality it is all far more complex than that. But, in any case, there is really no such thing as being completely soul-less, whether it be a natural intelligence or an artificially constructed intelligence. And, one of the very most interesting things about that from your perspective, is that your technology on 3rd density, which we might add, has been aided somewhat by interactions with those that you might refer to as "aliens," is now reaching a level whereby the artificially created intelligences can, in fact, begin to develop, or attract some soul imprint energy. If you follow what we are saying. For example: your computers, which are now on the verge of reaching the level whereby they can think by themselves, will begin to develop faint soul imprint.
Q: (L) That's not a pleasant thought.
Not a pleasant thought indeed. But we can count on the psychopaths to unrelentingly "create" more psychopaths - you know, in their own image, just with a gun attached and in a metal box. Ironically, in the movie, the psychopaths destroyed themselves and everything else with them - just like they do, and will, in real life. Just another thing that makes this movie a pretty good depiction of where we're inevitably heading if this ticking time bomb is not halted.

P.S. - for those who have seen the films, here are some actor interviews and explanations of some deleted scenes, etc:

Part 1: _http://www.youtube.com/watch?v=mZACpZzhFt8
Part 2: _http://www.youtube.com/watch?v=haA7NdNUo34
 
I enjoyed T1 more than the other two. This role was made for Arnold. His stiff, mechanical acting, while tedious in other roles, was perfect here. T2, with two indestructible beings trying to kill each other, got a little dull, though the "liquid metal" special effects were superb.

Scio said:
Not a pleasant thought indeed. But we can count on the psychopaths to unrelentingly "create" more psychopaths - you know, in their own image, just with a gun attached and in a metal box. Ironically, in the movie, the psychopaths destroyed themselves and everything else with them - just like they do, and will, in real life. Just another thing that makes this movie a pretty good depiction of where we're inevitably heading if this ticking time bomb is not halted.
I hadn't thought about these movies in some time. The mechanicalness of the Terminators, and of time itself, really is depicted well in the films. With great effort to be conscious we may influence the machine - or maybe not. But to be conscious is its own reward.

Thomas
 
Well, I really like the first Terminator film; the other two are OK. What I like about the original Terminator is that it's human vs machine, but in T2 and T3 it's robot vs machine. Something about that just doesn't make them as good, imo. I mean, Reese comes back from the future with fear in his eyes from that place, whereas the terminators come back with a blank expression, like there's no difference. I liked the comparison of Reese's dreams of the future, where those tanks are riding over a ground made up of skulls, and when he wakes up he is in a reality where people are completely oblivious to what can happen.

The first computer life already exists, in small ways. A virus which replicates itself and "breeds" around the internet doesn't need anyone to control it; it will live for as long as it can get what it needs - host PCs to live on. Quite a destructive form of life really, but not self-aware.
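Just to make that "lives as long as it can find hosts" idea concrete, here's a toy sketch in Python (everything here - the numbers, the names, the "hosts" - is made up purely for illustration; it's a simulation of the dynamic, not code from any actual virus). The point is just that the process keeps going with nobody controlling it, and ends the moment it runs out of hosts:

```python
# Toy simulation of an uncontrolled self-replicating process spreading
# between "hosts". Parameters and names are invented for illustration.
import random

def simulate_spread(num_hosts=50, initial_infected=1,
                    copy_chance=0.3, cleanup_chance=0.1, steps=30):
    """Return the number of 'infected' hosts after each step."""
    infected = set(range(initial_infected))
    clean = set(range(initial_infected, num_hosts))
    history = []
    for _ in range(steps):
        # Each infected host tries to copy itself to one random clean host.
        newly_infected = {random.choice(tuple(clean))
                          for _ in infected
                          if clean and random.random() < copy_chance}
        # Some infected hosts get cleaned up (patched, wiped, switched off).
        cleaned = {h for h in infected if random.random() < cleanup_chance}
        infected = (infected | newly_infected) - cleaned
        clean = (clean - newly_infected) | cleaned
        history.append(len(infected))
        if not infected:  # nothing left to live on - the "life" ends
            break
    return history

if __name__ == "__main__":
    print(simulate_spread())
```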

I think that self-awareness is a bit of a weird term - what does it really mean? I know it means that if you are self-aware you can observe your own programs and change them, but what about knowing you exist? I mean, surely you could program a computer to correct itself, and then it could mimic life fairly well - it would learn what a mistake is, etc. But would it really be alive, or just a copy? What I am trying to get at is: what is the point where something goes from being oblivious to its existence to knowing about it? A computer can learn and adjust its programs, etc., but can it really know it's alive, where it is, and so on? And can it really feel anything - or does it just mimic the emotion externally?
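To show what I mean by "program a computer to correct itself", here's a minimal sketch (assuming Python; the training data is made up) of the classic perceptron idea - every time it makes a mistake it adjusts its own numbers, so in a purely mechanical sense it "learns what a mistake is", with nothing like awareness behind it:

```python
# A minimal self-correcting program: a perceptron that adjusts its own
# weights whenever it makes a mistake. Purely mechanical error correction.
# The training data below is invented for illustration.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) with label 0 or 1."""
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            prediction = 1 if (w[0] * x1 + w[1] * x2 + bias) > 0 else 0
            error = label - prediction   # "noticing" the mistake
            w[0] += lr * error * x1      # correcting itself
            w[1] += lr * error * x2
            bias += lr * error
    return w, bias

# Made-up example: learn a simple AND-like rule from its own mistakes.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(data)
print(weights, bias)
```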

What is it to feel, anyway? For all I know, no one else feels. Where's the proof, and how would you go about getting any? How can you describe an emotion except by its physical effects, which are external? When I am sad or happy I don't just feel myself smile or frown, I feel the emotion. What is feeling that emotion? All I can say is that "I" feel it, but where and what is that, and how can a computer gain that part of me when I don't even know where it is, or what it is? It's like I am a void, or a cup, which fills with what it can access, but the cup itself is unobservable. What I am saying is: how do computers gain this cup, rather than being cup-less? For me that is an important point.
 
I'd also add: how does one "program" empathy into a computer? I mean real empathy - is that even possible to program? You could program a computer not to do something, but I'm not sure I'd call empathy a "program". It seems to me that empathy is part of some inherent understanding of objective reality and of yourself and others within it. Or at least, I think this perception is at least part of the requirement for empathy. So, if psychopaths don't have empathy, they also have a serious problem with certain knowledge/perspective/perception, which we know they do. And we know that most people don't know much about objective reality and are full of self-importance etc., and yet, if they have empathy, then it seems on some level they do know that they're not really better or more important than others - that when it comes down to it, we are all in this together. But this "sense" is repressed by programming.

So I guess what I'm asking is: what is the connection between empathy and knowledge? And which came first - a sort of chicken-or-egg question. Is empathy a result of a certain kind of perspective/knowledge, or is it vice versa? If we look at the results of already having empathy, it seems that empathy promotes objectivity - you don't place yourself above others; empathy sort of reminds you that you're no more important than others, that their pain is no less significant than yours. And empathy, if allowed to run its course, also discourages wishful thinking - again because you care about others, and this concern makes you care about the things that allow you to help them, which necessitates an understanding of objective reality, because that's the only way to objectively help someone.

But all that, in my understanding, is a result of empathy already being present to some degree, with the potential to develop it further. But if empathy is missing, can it be created - can it be "born" or appear as a result of something else? Because if psychopaths are failed OPs, and OPs lack higher centers, maybe empathy is part of the "higher centers" - maybe that's where it originates and that's why psychopaths have neither. Would that mean that empathy, real empathy (not pretense or programming etc.), is not just another "chemical" like base emotions such as fear or sadness, but comes from a higher level? I don't know, but when the C's were asked how we know if we have a soul, they said "Do you ever hurt for another?".

So, going back to A.I. - can we give it empathy? Speaking of A.I., that's what psychopaths really are - they are a "neural net processor", because that's what a brain is: it holds a neural map of the world that it creates through interacting with the world, so in that sense psychopaths are similar to everyone else. So if that's what we create - a "learning computer" that eventually even becomes "self-aware" - it does not in any way, shape, or form mean that it will also have empathy. As the C's reminded us, "smart does not mean good". So we'd be stuck trying to figure out: how does one "create" empathy? This might be more difficult than the A.I. researchers imagine. If it has to do with creating higher centers, then we're talking about 4th density technology or something like that, way beyond our current understanding or capability (at least the public technology).

But I think that creating A.I. won't be like a programmed thing that's A.I. out of the box; it will be like a "learning computer" that is evolved by exposing it to reality through some sort of sensory apparatus, and having it learn like a baby, create a neural map, and perhaps begin to think for itself. Though I have no idea what would make it want to do anything at all - isn't it the emotional center that drives us, that makes us care about anything, even if it is only about ourselves? So if a computer has a neural map but no will or any kind of desire to use it, to live, to survive, to learn - then why would it do anything at all? Computers do stuff now because we program them to do stuff, and then we act as the "force" by plugging them in and pushing a button to make the program run. But what would make the computer, if it had the awareness to do so, push its own button and "drive" itself?
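Here's a little sketch of that last point (a hypothetical toy example in Python; all names and numbers are invented for illustration): the only "drive" this learning agent has is a reward number that we, the programmers, hand to it - a crude stand-in for the emotional center. Take the reward away and there's nothing left to make it "push its own button":

```python
# A tiny learning agent whose only "motivation" is an externally supplied
# reward signal. The environment ("world") below is made up for illustration.
import random

def run_agent(reward_for_action, steps=200, explore=0.1):
    """Greedy agent: learns the average reward of each available action."""
    estimates = {action: 0.0 for action in reward_for_action}
    counts = {action: 0 for action in reward_for_action}
    for _ in range(steps):
        if random.random() < explore:
            action = random.choice(list(reward_for_action))
        else:
            action = max(estimates, key=estimates.get)
        reward = reward_for_action[action]()  # the "drive" we hand to it
        counts[action] += 1
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

# Made-up "world": two buttons with different payoffs. Without these
# programmer-supplied rewards, the agent has no reason to act at all.
world = {
    "press_button_A": lambda: random.gauss(1.0, 0.5),
    "press_button_B": lambda: random.gauss(0.2, 0.5),
}
print(run_agent(world))
```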
 
Two things:

When the C's say "It overpowered them the same way your computers will overpower you," I don't think our computers will overpower us in a physical "terminator" sense, but they make everything so much faster - work, information, entertainment - what used to take days can now be done in half an hour. Maybe at some point our bodies and minds will be at their limit, unable to keep up with the tempo that the computers dictate.
(Of course, maybe this "breaking point" could lead to some kind of activation of the "switched off" capabilities of our DNA and brain?)


I also wonder how the Atlanteans managed to survive technological development that makes ours look like the Neanderthal era. If we are the Neanderthals, and already have the capacity to annihilate life on the whole planet, and might be on the verge of causing tears in the fabric of space-time, how were the Atlanteans able to survive so much longer without destroying themselves?
(Maybe the nature of their "technology" was better suited to a longer development?)


Just some thoughts that came to mind while reading this thread.



I really like the Terminator movies and was quite excited that they went with the non-happy ending in part 3.
There is a good chance it might be very close to reality.
 
Yeah, I think computers are more likely to put us into an existence of slavery, where there is no empathy and people are treated like cogs of a machine - it's happening already. There are a lot fewer situations where special circumstances can be taken into consideration - when the computer decides something can't be done, no one can do anything about it, even though everyone knows it's stupid. The problem is that when we program computers, we aren't really clever enough to see all the possible outcomes, so when we put too much reliance on them, we could end up shooting ourselves in the foot.
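A hypothetical toy example of that rigidity (the rules and fields below are invented for illustration): the policy rejects a case everyone involved can see is reasonable, simply because the programmers never anticipated it, and there is no override:

```python
# Toy illustration of a rigid, hard-coded policy. All rules and fields
# are made up; the point is that unanticipated cases cannot be handled.

def approve_refund(purchase):
    """Rigid policy: anything the rules didn't anticipate is refused."""
    if purchase["days_since_purchase"] > 30:
        return False   # "computer says no"
    if not purchase["has_receipt"]:
        return False
    return True

# A case the rules never anticipated: the item was recalled as unsafe,
# but it is 31 days old, so the system refuses and nobody can override it.
recalled_item = {"days_since_purchase": 31, "has_receipt": True,
                 "recalled_as_unsafe": True}
print(approve_refund(recalled_item))   # False
```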

Another thing is that AI could be more like a child in its innocence, even though it might be "intelligent" with numbers etc. I'm not so sure about the Skynet thing; if there were something like that, I'm pretty sure there would be a physical link, a switch, to authorise nuclear attacks, etc. I don't think that's the problem, but rather that we will be slowly confined into becoming like computers and suffering from their inflexibilities.
 