New legislation recently proposed in the European Union aims to establish a baseline of “personhood” for robots. Many are objecting to this proposal, saying it’s far too early to address such things. I say, maybe not.
Under this initiative, the growing population of industrial automatons would be classified as “electronic persons,” with specific rights as well as obligations, and their “owners” liable for paying social security. The proposal reaches into topics such as unemployment, wealth inequality, and alienation. It also raises issues of ethics, as well as implications for our future vision of the data center.
Is this a futurist’s inspiration in anxious anticipation of artificial sentience? Or is it, as Reuters suggests, a creative move by lawmakers to mine tax revenue even as human workers are displaced by automation? In either case, I believe this is a worthwhile mental exercise, at the very least, as self-driving vehicles, IoT, robotic surgery, and the like signal the very beginning of artificial sentience integrating with the most basic parts of our everyday lives.
As I contemplate this draft motion within the European Commission to consider that “at least the most sophisticated autonomous robots could be established as having the status of electronic personhood,” the first question that comes to mind is “why did we skip right over animals?” Anyone who keeps dogs at home, for example, recognizes the (not at all artificial) sentience in man’s best friend. Certainly many would feel the same about many other types of animals.
But the European Parliament is targeting artificial sentience directly. Why is this and what does it mean?
I have a number of robots at home already, as you may too. I have two vacuum cleaning robots that are active every day. As these machines are several generations on in the development of such things, they are quite sophisticated. They’re not sentient (as far as I know), but they do have names. In that sense, I’ve already advanced them to personhood at some level. On the most rudimentary end of the spectrum, I also have an answering machine for telephone calls. I even have an answering machine in “the cloud” for my business calls. I’ve given names to those voices too, even though the level of sophistication for this automation is about as basic as one can imagine. For any of these, I can switch them off and be done with them whenever I want. I certainly can’t do that with my pets. Maybe this hints more toward human psychology in the context of automatons than anything to do with sentience.
What the legal qualifications are for “personhood,” I don’t really know. I have to believe, though, that the sentience itself can be separated from the container. If we consider our bodies as vehicles for our souls, it’s much easier to imagine the transportability of the software from one computer to another, regardless of the function of the automaton itself.
It’s interesting, then, to imagine the data center, with all its whirring of fans and twinkling of lights, as a big community. In ten, twenty, maybe fifty years, the level of automation coupled with machine learning at that time may imply a massive collection of sentient applications stacked high and wide in the data center.
Indeed, the DCIM and BMS functions of the data center itself are undergoing evolutions of automation with the inclusion of deep machine learning capabilities. In this case, I imagine the sentient data center facility as a sort of overlord of the community of digital souls living inside of it.
So at this point, with my own futurist ruminations, I have fast-forwarded beyond what the proposed legislation probably intends (forward-looking as it is in its own right). This view is useful, though, I think, if the intent of the legislation really is about maintaining tax revenue as automation eats into human labor. Extending the context implies that such a taxation plan could cannibalize the business owning the automaton, unless that automaton were required to somehow go out and find its own income stream on the side.
If we are classifying an “electronic person” as an entity that has an “owner,” what does that mean exactly from an ethics perspective? How does one “own” a “person” outside of slavery? Is an “electronic person” someone with a set of rights and privileges beneath those of a biological person?
At some point, the “owner” ceases to exist, as the automaton is less of a “slave” and more truly an “electronic person,” able to make its own decisions about whom to work for.
Of course, when we enter this area of privilege, the pendulum can swing the other way, even if you’re a robot. In Great Britain alone, there are nearly a hundred robot-related accidents in a year. Earlier this year in Germany, a robot killed a factory worker in an automotive plant, for which there is an ongoing criminal investigation. If we think it’s too early to begin considering the legal status of automatons, note that Asimov’s first law of robotics is already being routinely violated.
The idea of electronic personhood is a topic with many layers and nuances. Whether it’s too early to address such issues is a matter of opinion, but addressing them is likely inevitable.