If software looks like a brain …

Long the domain of science fiction, software that perfectly models human and animal brains is now the object of serious research. With an approach known as whole brain emulation (WBE), the idea is that if we can perfectly copy the functional structure of a brain, we will create software perfectly analogous to one. The upshot here is simple yet mind-boggling. Scientists hope to create software that could theoretically experience everything we experience: emotion, addiction, ambition, consciousness, and suffering.

“Right now in computer science, we make computer simulations of neural networks to figure out how the brain works,” Anders Sandberg, a computational neuroscientist and research fellow at the Future of Humanity Institute at Oxford University, told Ars. “It seems possible that in a few decades we will take entire brains, scan them, turn them into computer code, and make simulations of everything going on in our brain.”

Everything. Of course, a perfect copy does not necessarily mean equivalent. Software is so… different. It’s a tool that performs because we tell it to perform. It’s difficult to imagine that we could imbue it with those same abilities that we believe make us human. To imagine our computers loving, hungering, and suffering probably feels a bit ridiculous. And some scientists would agree.

But there are others—scientists, futurists, the director of engineering at Google—who are working very seriously to make this happen.
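To see what today's starting point, the "computer simulations of neural networks" Sandberg mentions, looks like in practice, here is a minimal sketch in Python of a single simulated neuron: a leaky integrate-and-fire model, the textbook building block of such simulations. The code and every parameter value in it are ours, for illustration only; no actual WBE project is represented here.

```python
# Illustrative only: a leaky integrate-and-fire neuron, the simplest
# common model in computational neuroscience. All parameters are
# made-up round numbers, not values from any real study.

import numpy as np

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_reset=-70.0, v_threshold=-50.0):
    """Simulate one leaky integrate-and-fire neuron.

    input_current : injected current at each time step (arbitrary units)
    dt            : time step in milliseconds
    tau           : membrane time constant in milliseconds
    Returns the membrane-voltage trace and the spike time indices.
    """
    v = v_rest
    voltages, spikes = [], []
    for t, current in enumerate(input_current):
        # Voltage decays toward rest while integrating the input current.
        dv = (-(v - v_rest) + current) / tau
        v += dv * dt
        if v >= v_threshold:      # threshold crossed: the neuron spikes
            spikes.append(t)
            v = v_reset           # then the membrane voltage resets
        voltages.append(v)
    return np.array(voltages), spikes

# Drive the neuron with a constant current for one simulated second.
trace, spike_times = simulate_lif(np.full(10_000, 20.0))
print(f"{len(spike_times)} spikes in 1 second of simulated time")
```

A real emulation would wire up billions of far richer neuron models, but the point stands: "simulating a brain" ultimately bottoms out in loops like this one.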

For now, let’s set aside all the questions of if or when. Pretend that our understanding of the brain has expanded so much and our technology has become so great that this is our new reality: we, humans, have created conscious software. The question then becomes how to deal with it.

And while success in this endeavor to turn fantasy into fact is by no means guaranteed, there has been quite a bit of debate among those who think about these things over whether WBEs will mean immortality for humans or the end of us. There is far less discussion about how, exactly, we should react to this kind of artificial intelligence should it appear. Will we show a WBE human kindness or human cruelty—and does that even matter?

The ethics of pulling the plug on an AI

In a recent article in the Journal of Experimental and Theoretical Artificial Intelligence, Sandberg dives into some of the ethical questions that would (or at least should) arise from successful whole brain emulation. The focus of his paper, he explained, is “What are we allowed to do to these simulated brains?” If we create a WBE that perfectly models a brain, can it suffer? Should we care?

Again, setting aside if and when, it’s likely that an early successful software brain will mirror an animal’s. Animal brains are simply much smaller, less complex, and more available. So would a computer program that perfectly models an animal receive the same consideration an actual animal would? In practice, this might not be an issue. If the emulated brain belongs to a worm or insect, for instance, there will be little worry about the software’s legal and moral status. After all, even the strictest laboratory standards today place few restrictions on what researchers do with invertebrates. When wrapping our minds around the ethics of how to treat an AI, the real question is what happens when we emulate a mammal.

“If you imagine that I am in a lab, I reach into a cage and pinch the tail of a little lab rat, the rat is going to squeal, it is going to run off in pain, and it’s not going to be a very happy rat. And actually, the regulations for animal research take a very stern view of that kind of behavior,” Sandberg says. “Then what if I go into the computer lab, put on virtual reality gloves, and reach into my simulated cage where I have a little rat simulation and pinch its tail? Is this as bad as doing this to a real rat?”

As Sandberg alluded to, there are ethical codes for the treatment of mammals, and animals are protected by laws designed to reduce suffering. Would digital lab animals be protected under the same rules? Well, according to Sandberg, one of the purposes of developing this software is to avoid the many ethical problems with using carbon-based animals.

To get at these issues, Sandberg’s article takes the reader on a tour of how philosophers define animal morality and our relationships with animals as sentient beings. These are not easy ideas to summarize. “Philosophers have been bickering about these issues for decades,” Sandberg says. “I think they will continue to bicker until we upload a philosopher into a computer and ask him how he feels.”

While many people might choose to respond, “Oh, it’s just software,” this seems much too simplistic for Sandberg. “We have no experience with not being flesh and blood, so the fact that we have no experience of software suffering, that might just be that we haven’t had a chance to experience it. Maybe there is something like suffering, or something even worse than suffering software could experience,” he says.

Ultimately, Sandberg argues that it’s better to be safe than sorry. He concludes that a cautious approach would be best: WBEs “should be treated as the corresponding animal system absent countervailing evidence.” When asked what this evidence would look like—that is, what would mark software that models an animal brain without the consciousness of one—he had considered that, too. “A simple case would be when the internal electrical activity did not look like what happens in the real animal. That would suggest the simulation is not close at all. If there is the counterpart of an epileptic seizure, then we might also conclude there is likely no consciousness, but now we are getting closer to something that might be worrisome,” he says.

So the evidence that the software animal’s brain is not conscious looks…exactly like evidence that a biological animal’s brain is not conscious.
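To make that check concrete, here is a hypothetical sketch of how one might score whether an emulation’s electrical activity “looks like what happens in the real animal.” The spike-train inputs, the binning metric, and the stand-in data are all our own assumptions, invented for illustration.

```python
# Hypothetical sketch: compare the emulation's spiking activity against
# recordings from the real animal. The metric here (correlation of
# binned firing rates) is our illustrative choice, not Sandberg's.

import numpy as np

def firing_rate_similarity(emulated_spikes, recorded_spikes, n_bins=100):
    """Crude similarity score between two spike trains (arrays of spike
    times in seconds). Bins each train into a firing-rate histogram and
    returns their correlation: near 1.0, the emulation's activity looks
    like the real recording; near 0, it does not look like the animal."""
    t_max = max(emulated_spikes.max(), recorded_spikes.max())
    bins = np.linspace(0.0, t_max, n_bins + 1)
    rate_emulated, _ = np.histogram(emulated_spikes, bins=bins)
    rate_recorded, _ = np.histogram(recorded_spikes, bins=bins)
    return np.corrcoef(rate_emulated, rate_recorded)[0, 1]

# Illustrative stand-in data: two unrelated spike trains over 10 seconds.
# A low score would be Sandberg's "not close at all" case.
rng = np.random.default_rng(0)
recorded = np.sort(rng.uniform(0.0, 10.0, 500))
emulated = np.sort(rng.uniform(0.0, 10.0, 500))
print(f"similarity: {firing_rate_similarity(emulated, recorded):.2f}")
```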

Virtual pain

Despite his pleas for caution, Sandberg doesn’t advocate eliminating emulation experimentation entirely. He thinks that if we stop and think about it, compassion for digital test animals could arise relatively easily. After all, if we know enough to create a digital brain capable of suffering, we should know enough to bypass its pain centers. “It might be possible to run virtual painkillers which are way better than real painkillers,” he says. “You literally leave out the signals that would correspond to pain. And while I’m not worried about any simulation right now… in a few years I think that is going to change.”
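Here is a minimal sketch of what Sandberg’s virtual painkiller could mean in code: since an emulation’s inputs are just data, the pain-carrying channels can simply be zeroed out before they ever reach the simulated brain. The channel names and the sensory-frame format below are assumptions of ours, not anything from his paper.

```python
# Hypothetical sketch of a "virtual painkiller": silence the
# pain-carrying input channels at the source. All names and the
# data format are invented for illustration.

from typing import Dict, List

PAIN_CHANNELS = {"nociceptor"}  # assumed label for pain-carrying inputs

def apply_virtual_painkiller(
    sensory_frame: Dict[str, List[float]],
) -> Dict[str, List[float]]:
    """Return a copy of one frame of sensory input with the pain-carrying
    channels silenced (zeroed) and every other channel left intact."""
    return {
        channel: [0.0] * len(values) if channel in PAIN_CHANNELS else values
        for channel, values in sensory_frame.items()
    }

# One frame of simulated input: the tail pinch registers on the
# nociceptors, but the frame the emulation actually receives is silent.
frame = {"touch": [0.2, 0.4], "nociceptor": [0.9, 1.0], "vision": [0.1]}
print(apply_virtual_painkiller(frame))
# {'touch': [0.2, 0.4], 'nociceptor': [0.0, 0.0], 'vision': [0.1]}
```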

This, of course, assumes that animals’ only source of suffering is pain. In any case, worrying whether a software animal may suffer in the future probably seems pointless when we accept so much suffering in biological animals today. If you find a rat in your house, you are free to dispose of it as you see fit. We kill animals for food and fashion. Why worry about a software rat?

One answer—beyond basic compassion—is that we’ll need the practice. If we can successfully emulate the brains of other mammals, then emulating a human is inevitable. And the ethics of hosting human-like consciousness becomes much more complicated.

Beyond pain and suffering, Sandberg considers a long list of possible ethical issues in this scenario: a blindingly monotonous environment, damaged or disabled emulations, perpetual hibernation, the tricky subject of copies, communications between beings who think at vastly different speeds (software brains could easily run a million times faster than ours), privacy, and matters of self-ownership and intellectual property.

All of these may be sticky issues, Sandberg predicts, but if we can resolve them, human brain emulations could achieve some remarkable feats. They are ideally suited for extreme tasks like space exploration, where we could potentially beam them through the cosmos. And if it came down to it, the digital versions of ourselves might be the only survivors in a biological die-off.

Ars Technica