On the Morality of Superhumanly Intelligent Artificial Organisms

Let me explain what I mean by that title. I don’t mean to ask whether creating superhuman artificial intelligences is a wise decision; a lot has already been said about that.

Nor am I going to discuss or argue about whether it’s even possible to create life, sapience or superhuman intelligence artificially, except to say that I believe it’s merely an engineering problem. If you disagree with this, well, I guess this rant really isn’t relevant to you.

A short clip from Terminator 2: the T-1000 experiencing motor control faults.

I put this animated GIF into this article a decade after I wrote it because the T-1000 remains the best example of just how blurry the line between biology and engineering can get.

What I am going to examine is what kind of moral character these beings will have, should they come into existence.

Asimov’s Laws

Cover of the Fawcett-Crest edition of Asimov's I, Robot.

You’ve probably already heard of these, even if you haven’t read much science fiction. Back in 1939, Isaac Asimov, imagining a future filled with artificial creatures, came up with three simple yet extremely powerful rules that governed the behavior of all artificial sapient beings in many of his stories. These rules were built into the hardware of each of his robots’ brains. In fact, the brains were designed in such a way that tampering with these rules damaged the brain itself.

This is analogous to building the human brain so that the anatomy of the limbic system (the organs of memory and emotion), the cerebellum and the neocortex enforced certain rules directly. Imagine if your brain were organized in such a way that it was impossible for you to think in certain ways or perform certain actions.

This is probably what Asimov had in mind. His robot brains must have contained a sub-organ that experience couldn’t remove, and whose removal would render the rest of the brain inert.

Cover of Wired Magazine, with Bill Joy's now infamous warning.

A month ago Bill Joy, one of the founders of Sun Microsystems, wrote an article in Wired about possible threats to the human species from advances in robotics, bio-engineering and nanotechnology, and I felt compelled to say something here about it.

I’ve made it a point to follow developments in nanotech ever since a good friend told me about the field in 1987, and I read Engines of Creation back in 1989, so I’ve been following news in this area for a long time. Despite that, I should make plain that I am certainly no expert on these subjects. I am not a molecular biologist, chemist, software engineer or neurologist. I’m just a rabid fan of science who makes it a point to be well informed.

So in this sense I am jaded. Bill Joy isn’t saying anything I haven’t already heard a zillion times on the Internet and in the scientific press. Many of his arguments echo similar arguments made by Kirkpatrick Sale and Joseph Weizenbaum. Joy himself was rather shocked to discover sections of the Unabomber Manifesto resonating with his own thinking.

On the other hand, his essay was, I think, a needed counterpoint to all the utopian thinking about this stuff. The techno-libertarian set oversimplifies things. Sure, they give a few needed kicks to some sacred cows, but they don’t have all the answers either.
