Who knew I’d have an appropriate use for this icon I made?

(remember, normal people learn in order to survive – geeks survive in order to have time to learn!)

I just finished watching I, Robot, and I had what I think is an interesting observation.

But first, who out there knew that Alan Tudyk played Sonny the robot? I didn’t know that, and it makes it kind of hilariously strange that I happened to buy Serenity at the same time as I rented I, Robot.

As the movie ended, I started thinking about Asimov, his intentions for his short stories, the perspective that created the three laws, and, of course, the flaw that allows the laws to be reinterpreted into what happens in the film.

If you don’t know what the three laws of robotics are for Asimov’s world, they are these:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

One may say that it is the variable definition of “come to harm” that causes the trouble you see in the movie. However, actual programming isn’t done in the English language. While these laws may work as a basic way of explaining how the robots were programmed, in order to “hardwire” in the concept of harm, you must specify exactly WHAT harm is.

Let me put it another way: the logic must explicitly enumerate the varieties of harm possible. You must define the objects (humans), the actions, and the catalysts for those actions. For a robot to decide that we harm ourselves culturally rather than just immediately, that concept (the greater harm we cause ourselves as a species) must be specifically accounted for in the programming. A toy sketch of what I mean follows below.
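
To make that concrete, here is a rough sketch in Python. It is entirely my own toy illustration, not anything from the books or the movie, and the names (Harm, Human, violates_first_law) are made up:

```python
# Toy sketch: even a "hardwired" First Law needs every category of harm
# spelled out somewhere by a programmer. None of it comes for free.
from dataclasses import dataclass
from enum import Enum


class Harm(Enum):
    PHYSICAL_INJURY = 1   # immediate, direct harm
    DEPRIVATION = 2       # starvation, exposure, neglect
    SOCIETAL = 3          # the "greater harm we cause ourselves"


@dataclass(frozen=True)
class Human:
    name: str


def violates_first_law(action_harms, recognized):
    """Forbid an action if it causes any *recognized* harm to any human.

    action_harms maps each Human to the set of Harm values the action
    would cause them; recognized is the set of Harm values the designers
    chose to enumerate. If SOCIETAL is in that set, the movie's broad
    reinterpretation of the First Law is not a misreading at all; it
    follows directly from the specification.
    """
    return any(recognized & harms for harms in action_harms.values())


# Example: the same action judged under two different definitions of harm.
alice = Human("Alice")
harms = {alice: {Harm.DEPRIVATION}}
print(violates_first_law(harms, {Harm.PHYSICAL_INJURY}))                     # False
print(violates_first_law(harms, {Harm.PHYSICAL_INJURY, Harm.DEPRIVATION}))   # True
```

The point being: every branch the robot can take has to trace back to a definition somebody typed in.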

Because we speak and think in a language that is ambiguous and full of unintended meanings, we accept that these types of misunderstandings can happen. If these rules were -spoken- to each robot, rather than part of the initial hardware set, they would THEN have to interpret our symbols.

You could say it’s bit-twiddling (hah!) or splitting hairs, but it’s a valid point that I think comes from the fact that the story was written by an author, not a programmer.

I think it’s still a good story, a good point he makes, and a great science-fiction experience, in general. The question he poses is still valid and chilling, when taken as a whole. Do we have the intelligence to properly define our safety in a world where our intelligence will one day be dwarfed by our creations? Could an intelligence that can outsmart us override our will in order to follow its own? Could we convince such an intelligence to be loyal to us, as a whole, the way Sonny is loyal (through, essentially, love)?

~ by Skennedy on May 5, 2007.

17 Responses to “Who knew I’d have an appropriate use for this icon I made?”

  1. “But first, who out there knew that Alan Tudyk played Sonny the robot? I didn’t know that, and it makes it kind of hilariously strange that I happened to buy Serenity at the same time as I rented I, Robot.”

    i did not know that. interesting :)

  2. According to the books, the robots have positronic brains that learn and adapt. They can assess whether a situation “harms” a human, and anticipate scenarios that are likely to occur in the future. I don’t know how well that came across in the movie (I kept catching bits of it on TV over the last few weeks).

    They have also been running “The Bicentennial Man” starring Robin Williams. A very, very different take on the same source material.

    • I gathered that they have positronic brains, and that they adapt – however, they also pointed out that the three laws were etched in before -any- programming had been added. The question is, would someone program these 3 laws with intentional ambiguity? Because they would -have- to intentionally make them ambiguous, whereas with the English language, ambiguity is built in.

      • As I recall from the books, they need the ambiguity, for situations such as: a man with a gun is going to kill an innocent child. The act of protecting the child has to override the urge not to harm the gunman. So they can harm one human to save the life of another. Without that ambiguity, you get one of those classic B-movie robot feedback loops with the popping and smoke and the “DOES NOT COMPUTE”-n-hey glavin……

        • I guess my point is that, unless you are literally “growing” a brain in which it goes through every single computation all by itself in order to ‘learn’, every ambiguity is a specific programmed ambiguity. There certainly would be room for such logical decisions as that, but that latitude would be specified somewhere in the code. It’s not just some sort of “you figure it out” system, but a complex logic tree; see the toy sketch below.
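
          (Purely for illustration, and entirely made up by me rather than anything from Asimov: one way that latitude could be written down explicitly is as a weighted comparison of outcomes.)

          ```python
          # Toy sketch: the "latitude" to harm one human in order to protect
          # another only exists because someone encoded the trade-off.
          def choose_action(options):
              """Pick the option with the least total harm.

              options is a list of (description, {person: harm_score}) pairs.
              The scoring itself is the programmed ambiguity: change the
              numbers and the robot's "judgment" changes with them.
              """
              return min(options, key=lambda option: sum(option[1].values()))

          # The gunman-and-child scenario from the comment above.
          options = [
              ("do nothing",        {"child": 10, "gunman": 0}),
              ("tackle the gunman", {"child": 0,  "gunman": 3}),
          ]
          print(choose_action(options)[0])   # -> tackle the gunman
          ```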

          • Neither of the movies based on I, Robot has been very true to the book, AFAIK.

            Asimov was not a programmer. In fact, he didn’t catch on to miniaturization until -after- it was actively taking place.

            Being the Asimov fanboy that I am (he coined the word “robotics” as “the study of the science of robots”), I have to point something out. It is very unlikely that something as complex as a positronic brain would be programmed in something that had to boil down to binary. Explicit logic in programming is not assumed by the author here. Consider the dalliance with “fuzzy logic” programming that happened in the late ’80s, and the Mage concept of Trinary Computing with Yes, No, and Maybe as options. This kind of technology would likely be needed before any kind of “thinking” robot could be made. (There’s a toy Yes/No/Maybe sketch at the end of this comment.)

            Learning doesn’t mean recording and repeating, even for robotics. It means grasping, grokking, and understanding a situation. I’m not sure explicit logic has the ability to state that.

            Also, don’t forget the fiction part of science fiction. :)
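
            (Toy illustration only, my own sketch and not anything from the books or the Mage rules: a Yes/No/Maybe logic, roughly in the style of Kleene’s three-valued logic.)

            ```python
            # Three-valued "Yes / No / Maybe" logic: AND takes the minimum,
            # OR takes the maximum, with NO < MAYBE < YES.
            from enum import Enum

            class Tri(Enum):
                NO = 0
                MAYBE = 1
                YES = 2

            def tri_and(a, b):
                return Tri(min(a.value, b.value))

            def tri_or(a, b):
                return Tri(max(a.value, b.value))

            # "Would this action harm a human?" can come back MAYBE, and a
            # robot can weigh that, instead of freezing on a strict True/False.
            print(tri_and(Tri.YES, Tri.MAYBE))   # Tri.MAYBE
            print(tri_or(Tri.NO, Tri.MAYBE))     # Tri.MAYBE
            ```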

          • *smiles* And this is exactly why I put the last paragraph in my post – A futurist, for example, isn’t necessarily someone who is actually involved in the details of creating those futures – though of course, their ideas may inspire others to do so.

            Quantum computing has, on an extremely limited scale, actually been accomplished. I’ve yet to see a useful application of this that doesn’t do the same thing a binary system does, but faster depending on how many options it must necessarily go through. I can see how a positronic brain might be a quantum computer, but I don’t see how that negates explicit logic.

            I would totally agree with your last paragraph – I’m not talking about rote response. The question is, how does one understand a situation? How does one understand what a human, for instance, is, compared to a non-human? How does one go from protecting humans to protecting humanity?

            Unrelated to my previous question but key to their logic: If robots take over to allow us all to survive without killing ourselves, would we die out? Eventually the sun will die, and with it the planet – if we do not continue to change and evolve and learn new things, would the species not die anyway?

          • I’m still challenging your assumption that the robots’ programming is based on explicit logic. It’s not known how the robots were programmed to understand. I don’t think that we can assume that whatever technology is used to program the positronic brain is based on current understanding of logic and programming.

          • And I think anything so central to the plot should be explored, even if it’s just a single sentence! :)

  3. What you have to understand (and never could have known from the movie) is that Asimov wrote the books as a response to the negative stereotyping of robots in the media. They were always killing machines, monsters, or unfeeling problems. Asimov wanted to show that robots could be helpful parts of our lives that we needn’t fear.

    Have you read the books? It’s not going to satisfy you on a programming level, but it’s not so much that the laws are executed first as that all of the programming is built upon the 3 laws. That way they could never be circumvented. In a sense, the 3 laws were the language the robots were programmed in. Like I said, it won’t satisfy someone wanting it to work with their understanding of programming – it’s a parable.

    • See, that doesn’t make sense to me because, as I understand it, the short stories were frequently about how those laws weren’t sufficient, how they could be broken or twisted.

      • Not really. A bunch of the stories were told by a machine psychologist, so there is a lot of aberrant behavior, but it’s isolated instances and never some violent freak-out. Other stories are about how a little girl misses her robot so much that she can’t ever be happy without him. Or the two laws contain a contradiction the human was not aware of and the robot can’t resolve them. Or how even when they don’t think they are following the 3 laws they are doing so unconsciously. It’s mostly a book of philosophy/logic/morality. The closest thing was when they started evaluating the first law globally and liberally, but that was still not swarming violent rampage time.

        If you don’t want to read the book, or the illustrated screenplay, you should at least check out the Wikipedia entry. It has a short description of the stories.

  4. A good place to start, and some of my favorites of the Good Doctor’s works, are the “Lije Bailey” novels. They are detective stories involving robots: The Caves of Steel, The Naked Sun, and The Robots of Dawn. These were all set after the short stories that comprise I, Robot. They are a quick read and also deal with instances of a robot “freezing up” because of a struggle to compute the proper action when a conflict arises between the Laws. They fit nicely with your icon. They are also another step in the long line of novels he loosely ties together that culminates in the Foundation series.

    I loves the Doctor. He’s my favorite author.

  5. I knew it was Wash, I mean, Alan. :)

  6. Ironically, I also watched I, Robot the morning of May 5th, and discovered afterward that Alan Tudyk played Sonny.
