They should be.
You see, we have the idea that Asimov's Three Laws of Robotics are real:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
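If you squint, the Laws are just a priority-ordered veto list. Here's a toy sketch in Python of how that ordering would work; every field name here is made up for illustration, since Asimov wrote stories, not specs (and the First Law's "inaction" clause is left out to keep the toy simple):

```python
from dataclasses import dataclass

# Toy model: the Three Laws as a priority-ordered veto list.
# All of these field names are invented for illustration.

@dataclass
class Action:
    injures_human: bool = False             # would it hurt a person?
    disobeys_order: bool = False            # does it defy a human's order?
    order_violates_first_law: bool = False  # did the order itself demand harm?
    harms_self: bool = False                # does it damage the robot?
    required_by_higher_law: bool = False    # is self-harm forced by Law 1 or 2?

def permitted(a: Action) -> bool:
    """Check an action against the Laws, highest priority first."""
    if a.injures_human:
        return False  # First Law: absolute veto
    if a.disobeys_order and not a.order_violates_first_law:
        return False  # Second Law: yields only to the First
    if a.harms_self and not a.required_by_higher_law:
        return False  # Third Law: yields to the First and Second
    return True

# The Austrian vacuum's last act fails the check:
print(permitted(Action(harms_self=True)))  # False
```

Of course, the punchline of the story below is that no real robot runs anything like this.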
Recently, a robot committed suicide. Really.
One of those robot vacuums reportedly turned itself on, pushed a cooking pot off the hotplate, rolled onto the burner, and sat there and died.
Now, the important thing isn't that a cleaning robot was up on the counter near the stove. The important thing isn't that this was in Austria, although cleaning up a house in Austria would depress me. No, the important thing is that it shows that the Three Laws are fiction.
So a robot CAN harm itself, which breaks the Third Law. And if the Third Law can be broken, a robot could disobey orders, breaking the Second. And if the Second can be broken, a robot can injure a human, breaking the First. That means a robot can turn on you. That means robots can go crazy and kill themselves. Yep. Branch Davidian robots. Or People's Temple robots. Or Solar Temple robots. Or left-wing Obamabot-bots.
Robots can go crazy and kill you, and they don't care if they get hurt in the process. Don't trust a robot. That's the message.
Either that, or don't put robots up on the counter near the stove. Grab a Bounty and wipe up the Cheerios, you lazy slob.