A philosophy of ethics in the age of digital intelligences

Thursday, October 9th, 2008

I think about the future a lot. Okay, that’s a lie — I think about the future all the time. I place the blame on the vast quantity of science fiction books I read during my formative years. But it really has been the highest privilege imaginable to watch the future unfold right before my eyes these past twenty-three years, even if it hasn’t always happened quite as we imagined it would. And the best part is, it’s a privilege that never ends! Just looking at what’s on my desk right now, I have 500 GB of storage in a hand-portable format. Take that back to just one decade ago and no one would even believe it.

The encroachment of the future upon the present has been occurring at an accelerating rate, too fast for any single person to keep up with it all. Take any given scientific field — only the experts in it are even aware of all of the groundbreaking research, while people outside that field are entirely clueless (witness the recent unfounded public backlash against the Large Hadron Collider, for instance). This lag time between initial discovery and general synthesis of knowledge isn’t getting any shorter, even as new inventions continue coming along at a breakneck pace. It’s a recipe for severe discrepancies between disparate areas of knowledge.

One area we haven’t yet brought in line with scientific progress is ethics. Our legal system, for example, is built entirely around the assumption that humans are the only intelligent actors. Harm inflicted upon humans is thus either caused by other humans (whether intentionally or not), by accident, or by nature. The latter two do not merit punishments (though in some cases compensation is awarded), while the first category is dealt with mainly through punishments that are geared to work on people, such as incarceration. But as computers grow exponentially more powerful along the curve of Moore’s Law, these categories begin to break down.

Look at the case of Robert Williams, an automotive factory stockroom worker who in 1979 became the world’s first robot fatality when a robot’s arm, entirely lacking in any sort of safeguard, smacked into him at full speed, killing him instantly. The courts (rightfully) treated that robot as a simple tool, and the jury found the robot’s manufacturers negligent and awarded Williams’ family $10 million. Even today, robot fatalities are dealt with in the same manner: they are either declared to be entirely accidental, or the manufacturer of the robot is found to be at fault. No court has yet found the robot itself, acting intelligently and on its own, to be at fault.


The highest-editing zombie bot on Wikipedia

Monday, May 26th, 2008

I stopped actively editing Wikipedia more or less one year ago. Naturally, I haven’t stopped editing completely, as I still read Wikipedia nearly every day in the pursuit of my own edification. But I no longer seek out thankless administrative tasks to perform, nor do I browse articles solely to find a way to contribute some writing. In that way I’m much more like the casual reader who occasionally fixes a typo, though the casual reader also doesn’t have the ability to delete articles, block users, and protect pages (ah, the privileges of being an administrator). But I don’t much use those abilities anymore, so it matters little.

In addition to doing lots of editing and administrative tasks (page may take a while to load), I also spent a good amount of time hacking on programs for Wikipedia. Some, such as the userbox generator (don’t even ask), were purposefully silly. Others, such as my work on the PyWikipediaBot free software project, were more useful. In addition to my work on that bot framework, I wrote quite a few bots, which are programs for making automated edits. By the time I (mostly) retired from Wikipedia, I had put many hours into those bots, and I couldn’t bear to just shut them down. So I left them running. They’ve been running now for over a year, unattended for the most part, and have been remarkably error-free, all things considered. I have variously forgotten about them for months at a time, only remembering them when my network connection chugs for an extended period of time (long “Categories for deletion” backlog) or when my server’s CPU utilization pegs (bot process gets stuck in an endless loop). So yes, there is a zombie bot editing Wikipedia, and it even has administrative rights that it uses quite frequently!
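To give a flavor of what those bots actually do: much of Cydebot’s “Categories for deletion” work amounts to rewriting category links in article wikitext. Here is a minimal sketch of that kind of transformation (not Cydebot’s actual code, and the function name is my own invention for illustration):

```python
import re

def rename_category(wikitext: str, old: str, new: str) -> str:
    """Replace [[Category:Old]] links with [[Category:New]], preserving
    any sort key after the pipe (e.g. [[Category:Old|Sortkey]])."""
    pattern = re.compile(
        r"\[\[\s*Category\s*:\s*" + re.escape(old) + r"\s*(\|[^\]]*)?\]\]",
        re.IGNORECASE,
    )

    def repl(match: re.Match) -> str:
        sort_key = match.group(1) or ""  # keep "|Sortkey" if present
        return f"[[Category:{new}{sort_key}]]"

    return pattern.sub(repl, wikitext)
```

A real bot would fetch each page in the category, apply a transformation like this, and save the result with an edit summary; the framework handles the fetching and saving, so the interesting part is just this text munging.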

All of these bot programs that I wrote run under one Wikipedia user account, Cydebot. That account was the first account on any Wikipedia project to break one million edits. The total currently stands somewhere around a million and a quarter (proof), though it has been out-edited by one other bot account by now. But just think about the sheer size of that number. At one point Cydebot accounted for a single-digit percentage of all edits to the English Wikipedia. You can’t say that’s not impressive, especially considering how ridiculously massive Wikipedia is. Yet being a bot operator was largely unsung work. The only time I really got noticed for all the effort I was putting into it (and never mind the network resources involved, especially when I was running AntiVandalBot, which downloaded and analyzed the text of every single edit to Wikipedia in real time) was when yet another person thought they were the first to realize that Cydebot was using administrative tools and deemed it necessary to yell at me about it. Wikipedia has this cargo cult rule that “admin bots aren’t allowed” — even though people have been running them for years. I’ll grant that the situation is contradictory.

So after continuing to run Cydebot for this long, I’m not going to stop now. I haven’t put any effort into Cydebot for over a year besides occasionally updating the PyWikipediaBot framework from SVN, killing pegged bot processes, and rarely modifying the batch files for my bots when someone points out that the associated pages on Wikipedia have changed. I don’t have the time (nor the desire) to put any further serious development work into Cydebot, so at some point things will finally break and Cydebot will no longer be able to do any work. But it has already spent over a year performing all sorts of thankless tasks on Wikipedia that no human wants to be bothered with; why not let it keep going and see how much longer my favorite zombie bot can last?

If you want to track the continuing edits of a zombie bot on Wikipedia, you can do so here. So the next time you are idly reading Wikipedia, remember that, not only are there bots behind the scenes that are making millions of automated edits, but some of them are zombies that have been running largely unattended for months, if not years. Wikipedia is built, in no small part, upon zombie labor.

The farming robots are coming

Saturday, June 23rd, 2007

The farming robots are coming, and it’s about damn time. For too long have humans unnecessarily devoted themselves to manual labor. Any task that can be done just as well by a robot should be done by a robot, freeing up that human to do something only a human can do. I reject the argument that robots should not replace human labor just so we can keep people employed — preserving inefficiency for its own sake is no virtue.

But back to the linked article. Growers in California have invested large amounts of money in research into robotic fruit pickers: think oranges, grapes, apples; anything, really. The reason is that the supply of migrant workers has been so spotty of late (whether because they’re having trouble getting across the border, who knows) that many farms have simply had all of their fruit rot away on the vine or tree because nobody was there to pick it. Clearly, robots owned by the farm could do a much better job. The technology they’re using is really complex, in case you had any lingering doubts about these being true robots rather than mere mechanical harvesters:

The two robots would work as a team: one an eagle-eyed scout, the other a metallic octopus with a gentle touch. The first robot will scan the tree and build a 3-D map of the location and size of each orange, calculating the best order in which to pick them. It sends that information to the second robot, a harvester that will pick the tree clean, following a planned sequence that keeps its eight long arms from bumping into each other.

The Vision Robotics engineers are currently building the scout. They expect to have a prototype ready next year, with the harvester to follow two or three years later. Baskin says he doesn’t expect the mechanical systems to pose any serious problems. The hard work is writing the software. After the scout robot makes a 3-D map of the tree, it has to evaluate each piece of fruit. What size is the orange? What color is it? Does it have black spots on it? “It’s a question of gathering the information, and then judging whether it meets the parameters that are equal to a good orange,” Baskin says.

3D maps of fruit trees? Calculating optimal routes for most efficient picking of fruit? Freaking awesome, that’s what that is.
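That “best order in which to pick them” calculation is the kind of thing you can sketch in a few lines. Finding the truly optimal route is a traveling-salesman problem, but a greedy nearest-neighbor heuristic captures the idea. This is purely illustrative, not Vision Robotics’ actual algorithm:

```python
import math

def picking_order(fruits):
    """Order 3-D fruit coordinates by greedy nearest-neighbor hops.

    Not optimal (that would be a traveling-salesman problem), but a
    reasonable heuristic for cutting down arm travel between picks.
    Assumes the arm starts at the origin.
    """
    remaining = list(fruits)
    order = []
    current = (0.0, 0.0, 0.0)
    while remaining:
        nearest = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nearest)
        order.append(nearest)
        current = nearest
    return order
```

The real system also has to keep eight arms from colliding, which turns a simple ordering problem into a genuinely hard scheduling one — hence Baskin’s remark that the software is the hard part.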

Learning an important writing lesson early on

Friday, March 23rd, 2007

Way back in fourth grade (over half a lifetime ago) we had occasional in-class time available for creative writing. We could write whatever we wanted, so long as we were, indeed, writing. I had read lots of fantasy children’s literature and two of the Lord of the Rings books by then, but what I was really interested in was science fiction. I hadn’t yet read any serious adult science fiction (my first scifi book would be Rendezvous With Rama by Arthur C. Clarke when I was ten), but I was still interested in writing scifi. So I tried writing some scifi during one of my fourth grade creative writing periods, and I learned a very important lesson that has stuck with me ever since.

I tried to write my story about a group of nearly microscopic collaborative robotic drones that were investigating some sort of radioactive material in a laboratory. The idea itself is very cool and something that is inevitably going to happen as soon as miniaturization technology becomes good enough; how could the government possibly resist? But I made one fatal flaw in the story. The robotic drones were not just the main characters, they were the only characters.

It simply didn’t work, and even though I was only in fourth grade, I realized it wasn’t working after just a page. It was boring. It was painful to write. I was writing them like the real robots they were supposed to be: small, incapable of much intelligence. The dialog went something along the lines of, “Probe B-46 said, ‘Target identified. Closing distance to 2 meters.'” It was terrible. I didn’t really understand why it was so bad, but I recognized it as such, and all of my future endeavors to write fiction always included human characters in them. I had stumbled across a pretty fundamental rule of science fiction that I only explicitly understood much later.

The rule is simple: No matter how fantastic or futuristic the setting of the story is, humans must always be the main characters. The rule is somewhat malleable, in that a story can work if it has human-like robots or aliens with understandable proxies for human emotions. But you can’t just write a story solely about machines, or about truly alien aliens whose motivations and feelings cannot be deciphered. The story must always be grounded with humans. That’s what we all are, and what we all empathize with. You cannot have a science fiction story without humans (or similar analogs), just as you cannot have a forest without trees.

A few years later, when I was reading Isaac Asimov’s biography, it finally dawned on me. He said something to the effect that he didn’t consider himself a science fiction writer; rather, he considered himself a writer of human trials and tribulations whose most frequently chosen milieu just happened to be science fiction. He also pointed out that even though he was perhaps best known for his robot short stories and novels, each of those stories always contained a main human character whom the reader used as a sort of porthole for understanding the story. Asimov’s robots are very capable, but they aren’t intelligent in the same sense that one could call a person intelligent; they need rigorously codified laws such as the Three Laws of Robotics. They are alien, and without a recognizable human character to ground the reader, the stories simply would have flopped. But Asimov was a very smart guy, and he always included strong human characters, such as Dr. Susan Calvin, to ground the stories.

So, back in fourth grade, I learned an important lesson that it took me a few years afterwards to really understand and explain. But I did learn the lesson. I haven’t tried to write a story without human characters since, and my writing has definitely been better for it. Because nobody wants to read a story that reads like a transmission log between computer programs. And Cydebot, Post complete.

Wikipedia gets CAPTCHAs for anonymous edits

Thursday, February 22nd, 2007

Yesterday, image CAPTCHAs were enabled for all anonymous edits on all Wikimedia Foundation wikis (including the popular encyclopedia Wikipedia). I noticed this by chance because I’m in a computer lab right now and found some vandalism on an article linked from the main page, but didn’t want to take the time to log in first. However, by the time I finished typing in the CAPTCHA, an admin had already reverted the vandalism. Drat.

The reason for the CAPTCHA is that we’ve been having some spam problems on-wiki recently, with spammers using automated bots to add links to dozens of pages before they end up being blocked. We have a global spam blacklist that does a good job of stopping spammers dead, but all of their edits still have to be manually reverted, which is a pain. Hopefully this new change will alleviate some of that. This change will basically stop all anonymous bot edits (including legitimate bots that get logged out by accident). It will also stop vandalism bots that are running anonymously, which we’ve seen a few of.

Unfortunately, this change still doesn’t do anything against spamming/vandalism being done using registered user accounts. Yes, you do have to pass an image CAPTCHA to register an account too, but that’s only once per account rather than on every edit, so people could conceivably manually register a bunch of accounts and then hand the account details off to their bots.

What I’d like to see is CAPTCHAs on the first twenty edits of each new user (in addition to each anonymous edit). This would make automated spam/vandalism much harder to pull off.
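The policy I’m proposing is simple enough to state as a single predicate. This is a hypothetical sketch of the gating logic, not MediaWiki’s actual implementation (all names are mine):

```python
def needs_captcha(is_anonymous: bool, edit_count: int, threshold: int = 20) -> bool:
    """Require a CAPTCHA for every anonymous edit, and for a registered
    account's first `threshold` edits."""
    if is_anonymous:
        return True  # every anonymous edit gets a CAPTCHA
    return edit_count < threshold  # new accounts earn their way out
```

The nice property of an edit-count threshold is that it expires on its own: established editors never see a CAPTCHA, while a freshly registered throwaway account can’t immediately hand its credentials to a bot.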

One thing I’m worried about though — are we making the barriers to editing too high? Anonymous edits do contribute significantly towards writing the encyclopedia. There’s a trade-off between making it hard for automated ne’er-do-wells and putting a burden on legitimate editors who just can’t be bothered to log in or register an account. I hope we haven’t gone too far in one direction.

Update: It looks like CAPTCHAs have been disabled; read the comments for more information.