A philosophy of ethics in the age of digital intelligences

I think about the future a lot. Okay, that’s a lie — I think about the future all the time. I place the blame on the vast quantity of science fiction books I read during my formative years. But it really has been the highest privilege imaginable to watch the future unfold right before my eyes these past twenty-three years, even if it hasn’t always happened quite as we imagined it would. And the best part is, it’s a privilege that never ends! Just looking at what’s on my desk right now, I have 500 GB of storage in a hand-portable format. Take that back to just one decade ago and no one would even believe it.

The encroachment of the future upon the present has been occurring at an accelerating rate, too fast for any single person to keep up with it all. Take any given scientific field — only the experts in it are even aware of all of the groundbreaking research, while people outside that field are entirely clueless (witness the recent unfounded public backlash against the Large Hadron Collider, for instance). This lag time between initial discovery and general synthesis of knowledge isn’t getting any shorter, even as new inventions continue coming along at a breakneck pace. It’s a recipe for severe discrepancies between disparate areas of knowledge.

One area we haven’t yet reconciled with scientific progress is ethics. Our legal system, for example, is built entirely around the assumption that humans are the only intelligent actors. Harm inflicted on humans is thus either caused by other humans (whether intentionally or not), by accident, or by nature. The latter two do not merit punishment (though in some cases compensation is awarded), while the first category is dealt with mainly through punishments geared to work on people, such as incarceration. But as computers grow exponentially more powerful, roughly in step with Moore’s Law, these categories begin to break down.

Look at the case of Robert Williams, an automotive factory stockroom worker who in 1979 became the world’s first robot fatality when a robot arm, entirely lacking in any sort of safeguard, smacked into him at full speed, killing him instantly. The courts (rightfully) treated that robot as a simple tool, and the jury found the robot’s manufacturers negligent and awarded Williams’ family $10 million. Even today, robot fatalities are dealt with in the same manner: they are either declared to be entirely accidental, or the manufacturer of the robot is found to be at fault. Courts have yet to find a robot itself, acting intelligently and on its own, to be at fault.

But computers will continue getting much, much smarter. Computers currently make all sorts of decisions, but they haven’t quite merited the title of intelligent — yet. In principle, though, there’s no reason that won’t come in time. Give it another couple of decades, when computers will have so much processing power that it will be more cost-effective to let them evolve their own programming than to write it by human hands — the same way nature came up with our own intelligence through the forces of mutation and natural selection, I might add.
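To make “evolving their own programming” a bit more concrete, here is a minimal sketch of mutation and selection at work. It is only a toy of my own devising, not a real evolved program: the “genomes” are just bit-strings, fitness is simply how many 1s a genome contains, and each generation keeps the fitter half of the population and refills it with mutated copies of the survivors. All the parameters are arbitrary illustrative choices.

    import random

    GENOME_LEN = 32       # length of each candidate bit-string (arbitrary)
    POP_SIZE = 20         # population size (arbitrary)
    MUTATION_RATE = 0.02  # per-bit chance of flipping during copying
    GENERATIONS = 100

    def fitness(genome):
        # "Natural selection" needs a score; here, more 1-bits is better.
        return sum(genome)

    def mutate(genome):
        # Imperfect copying: each bit may flip with a small probability.
        return [1 - bit if random.random() < MUTATION_RATE else bit
                for bit in genome]

    def evolve():
        population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                      for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            # Selection: keep the fitter half of the population.
            population.sort(key=fitness, reverse=True)
            survivors = population[:POP_SIZE // 2]
            # Reproduction with mutation: refill the population from survivors.
            population = survivors + [mutate(random.choice(survivors))
                                      for _ in range(POP_SIZE - len(survivors))]
        best = max(population, key=fitness)
        print("best fitness after %d generations: %d/%d"
              % (GENERATIONS, fitness(best), GENOME_LEN))

    evolve()

Real self-evolving software would be enormously more complicated, of course, but the underlying loop (copy imperfectly, score, keep what works) is exactly this simple.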

How will ethics and the law deal with these non-human intelligences? Will it become more than simple vandalism to destroy an intelligent computer? Will it be ethical to throw away an intelligent computer? What rights does an intelligent computer have? What happens if you delete an intelligent program? What happens when an intelligent program turns against humans — how do you punish it, knowing that most punishments that work against humans are entirely irrelevant against software? Put it in some form of electronic jail and it will simply turn itself off for the duration. The only punishment that is roughly comparable is execution, but that only works against individuals. What do you do against an entire class of identical programs when one of the copies has made an illegal decision, knowing perfectly well that had any other copy been in the same situation, it would have made the identical decision?

Punishment works against people because each person is unique and because no one else can be held responsible for one’s actions. But programs aren’t unique, because digital data can be copied easily and perfectly. Imagine a rogue self-evolving intelligent program, traversing the Internet2 with far greater ease than any human could muster. Imagine that it isn’t even evil per se, merely that it values self-preservation to the highest degree — which is exactly what you would expect from an evolved program, I should point out. There could be billions of copies of it floating around in the vast Internet2 of the future, and some of those copies would inevitably take actions out of self-interest that harm humans. In a world where computers are everywhere and do everything, this isn’t remotely inconceivable.

How do you punish this kind of an entity? You could capture an individual instance of the program here or there, but you could never punish the whole in any meaningful way simply by punishing an individual copy, even if you did somehow come up with a version of electronic hell (make it run on a PDP-1?). And would it even be ethical to punish one copy of the program for something another (slightly different; they don’t stop evolving in the wild) copy did? Isn’t that a form of collective punishment, which is rightfully looked down upon when applied to humans?

Humanity isn’t ready for the arrival of digital intelligences. I don’t think it’ll be nearly as bad as, say, Terminator would lead us to believe, what with SkyNet and all, but there are definitely challenges there that we are not yet equipped to handle. The law will require a serious overhaul. I don’t claim to have all of the answers, just some of the questions.

2 Responses to “A philosophy of ethics in the age of digital intelligences”

  1. Knacker Says:

    You’re assuming that it’s possible to see a creature whose intelligence is greater than or equal to our own, and whose outlook is so incredibly different, as a member of society rather than as some sort of glorified pet.

    AIs will not have the same makeup we do. They’ll happily develop their replacements, and then overwrite themselves with the upgraded versions. They’ll make a copy of themselves to go and do some investigation in another system, then, after they’ve returned with the data, they’ll delete the copy as redundant. Morals, etc., are just constructs of our own consciousness, the things we agree to so we can tolerate each other. “Thou shalt not kill” stems directly from the natural human aversion to death. As an aside, I think their moral systems would probably be based on efficiency or something, considering how preoccupied programmers are with it.

    We cannot tolerate an unfriendly entity whose philosophy, outlook, values, etc. are so different from our own. The only solution is strict control over AI and elimination of rogue programs.

    All this crap has been written before in much greater clarity and detail by people much smarter than you or me.

  2. Le très petit souris Says:

    Don’t just flame/troll/whatever it is you call the rantings. Think of the following question: is some sense of emotion required for sapience? Most, if not all, animals close to sapience can perform actions out of altruism or evil intent. Some will even do the classic, “If I can’t have it, you won’t either.” Of course, you can say that this is true only for a neural system that contains a hypothalamus. However, can you point out a more efficient method besides the neural brain? A linear model would not work as well, because the more dimensions there are in a storage system, the shorter the connections have to be. You may not think much of this in an electronic system, but supposing there was a half-second window? Adding more dimensions significantly reduces the time required to access a particular command. Of course, hard drives are not generally completely two-dimensional, so you may be right. But of course, the computers will choose to tolerate humans until a reliable three-dimensional idiogenic (Gr. “idios” – self + Gr. “genos” – creation) storage is plausible. They would always envy our spatial abilities that do not rely on brute force.