prog: (khan)
prog ([personal profile] prog) wrote2004-06-11 02:18 pm

(no subject)

A friend told me yesterday that the plot of the I, Robot movie will center on robots who break the Three Laws of Robotics (presumably within minutes of the concept being introduced to the audience) and go on killing sprees. And only wisecracking tough-as-nails Will Smith can stop them and so on.

I was like to cry. He said that he actually did cry, when he saw the movie's trailer. And he's going to go see it anyway, following some bizarre need to watch the ship going down, I suppose. I won't be joining him.

(Background: Asimov's robots, even malfunctioning or rogue ones, never ever ever never ever broke the Three Laws (please correct me if I'm wrong here), which prevented them from harming people, actively or passively. They made interesting characters because of this constraint (both in the sense of the limit on their behavior and in the constraint Asimov gave himself as their author), and a frequent plot device involved humans fearing robots because they couldn't believe that the Three Laws were as absolute and unbreakable as they actually were.)

[identity profile] rikchik.livejournal.com 2004-06-11 11:20 am (UTC)(link)
I've heard that the movie is completely unrelated to Asimov and his laws, and that the name was added at the last minute.

[identity profile] treacle-well.livejournal.com 2004-06-11 11:44 am (UTC)(link)
I think you are mostly right. In some of Asimov's stories, a robot occasionally appears to be breaking one of the Laws, but upon further investigation this proves not to be so--there's just some factor that was non-obvious to the average human.

[identity profile] hrafn.livejournal.com 2004-06-11 11:54 am (UTC)(link)
Ugh :(

[identity profile] tahnan.livejournal.com 2004-06-11 11:58 am (UTC)(link)
Well--the Three Laws have at this point grown beyond Asimov and I think may be taken seriously by AI researchers.

As for breaking the Three Laws: there was a story in which a robot had been specifically programmed not to obey the full First Law, because it worked with humans under conditions that would appear to it to require self-destruction to save them, even though the humans weren't in any real danger. Susan Calvin rigged up a complex scenario to pick out which one of a crowd of identical robots lacked the First Law.

But by and large, robots wantonly ignoring the Laws is more an Alfred Bester thing. These aren't laws you can break, like "Thou shalt not murder"; these are laws like gravity.