It never occurred to me, till David Crotty pointed it out at The Scholarly Kitchen, that by answering these are-you-sure-you’re-not-a-robot site-access questions I’ve been helping to train computers in Artificial Intelligence. Not sure that I really mind this though. If by identifying all the pictures containing traffic lights I am making Google’s self-driving cars a bit safer, that should give me a nice warm feeling. What I’ve never tried, and now wonder about, is intentionally misidentifying the images. Would I still get access to the site, or would they conclude I really was a robot and refuse to admit me? Do they catch many?

Here’s a sardonic comment by Stevie Martin (NB: not Steve):

[Embedded video]

Isaac Asimov’s I, Robot, a collection of nine stories originally published in science-fiction magazines between 1940 and 1950, was published in book form in December 1950 by Gnome Press, which went out of business in 1962.

Famously, Asimov propounds ethical rules for robots, his Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First and Second Laws.

Subsequently, Asimov added a fourth law, which he called the Zeroth Law because it precedes all the others:

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
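
For what it’s worth, the four laws amount to a strict precedence ordering, the kind of thing that is easy enough to state in code. Here is a minimal illustrative sketch, assuming we model each candidate action by which laws it would break; the names (Action, violation_vector, choose) are my own invention, not anything Asimov wrote:

```python
# A toy arbiter for Asimov's laws: pick the action whose violations
# sit lowest in the hierarchy. Purely illustrative.

from dataclasses import dataclass, field

# Laws in precedence order: the Zeroth Law outranks the First, and so on.
LAWS = ["Zeroth", "First", "Second", "Third"]

@dataclass
class Action:
    name: str
    violates: set = field(default_factory=set)  # names of the laws broken

def violation_vector(action: Action) -> tuple:
    """Lexicographic key: a violation of an earlier (higher-priority) law
    always outweighs any number of later ones."""
    return tuple(1 if law in action.violates else 0 for law in LAWS)

def choose(actions: list[Action]) -> Action:
    """Return the candidate action that breaks the fewest high-ranking laws."""
    return min(actions, key=violation_vector)

# Self-sacrifice (a Third Law violation) beats standing by while a human
# is hurt (a First Law violation), because the First Law takes precedence.
best = choose([
    Action("stand by while a human is injured", {"First"}),
    Action("shield the human, destroying itself", {"Third"}),
])
print(best.name)  # shield the human, destroying itself
```

The ordering is the easy part, of course; the hard part is deciding truthfully which laws a real-world action would violate.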

It’s comforting to think that future robots will be guided by such rules, isn’t it? But how come autonomous vehicles have managed to run people over? Something seems to be slipping past the Asimov rules. Train harder, please.