Who is to blame if a robotic arm breaks your hand?
Scientists review the ethical dilemmas and risks of neuroprostheses and brain-machine interfaces
They are not yet in stores, but like self-driving cars or artificial intelligence, neuroprostheses and other brain-controlled devices promise to improve people’s lives. Like those vehicles and robots, however, these brain-machine interfaces raise moral dilemmas and new dangers. Who is responsible if a robotic arm breaks the hand it is shaking? How can we ensure that no one hacks the brain of a person connected to a machine?
A group of neuroscientists, neuroengineers and philosophers set out to answer these and other questions in an article published in the journal Science. The promises that brain-machine interfaces (BMIs) offer, especially to people with physical disabilities or paralysis, are so great that they can obscure their dangers. The scientists highlight three. The first is accountability for the actions of a neuroprosthesis: who answers for the consequences? The second is the threat to privacy, since these devices expose a person’s most intimate possession, their thoughts. The third sketches a dystopia in which someone uses the machine connection to take over another person’s brain.
“Although we still do not fully understand how the brain works, we are approaching the moment when we will be able to reliably decode certain brain signals,” says John Donoghue, director of the Wyss Center for Bio and Neuroengineering (Switzerland) and co-author of the article. “We should not be complacent about what this could mean for society. We should think very carefully about the consequences of living alongside brain-controlled semi-intelligent machines, and we should prepare mechanisms to ensure that they are used safely and ethically,” he adds.
Determining responsibility is already creating problems with non-autonomous robots such as the Da Vinci surgical robot
In recent years, a series of lawsuits have been filed in the United States against Intuitive Surgical, the maker of the Da Vinci surgical robot, over errors in its operations. Although Da Vinci is not autonomous and operates as a robotic extension of the surgeon, it illustrates how complex the problem of responsibility is. It is not always easy to determine whether a harmful act is the result of error, recklessness or intent. The example of a driver whose brakes fail may illustrate the problem.
“The driver and his brain are one, and since all behaviors originate in the brain, the legal responsibility of the driver and the brain is identical,” notes Niels Birbaumer, a Wyss Center neuroscientist and co-author of the study. “However, the problem is compounded when the BMI uses brain signals over which the driver has no conscious control. Here we need a conscious switch that can stop an action caused by an unconscious process of the driver,” adds this leading researcher, who has spent years working with BMIs and recently enabled four people with complete locked-in syndrome to communicate with the outside world.
To address these problems, the authors of the article propose that brain-machine interfaces include a veto system whereby, if necessary, an instruction given to the machine can be cancelled within milliseconds.
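The article describes the idea rather than an implementation, but the mechanism can be sketched. In this minimal, hypothetical Python sketch, every decoded command is held for a short window before it reaches the actuator, and a veto signal detected during that window cancels it; the function names, timings and toy decoder are illustrative assumptions, not part of the authors’ proposal.

```python
import time

# Hypothetical sketch of a veto gate for a BMI command pipeline.
# A decoded command is held for a short window before execution, so a
# consciously generated veto signal detected in that window cancels it.

VETO_WINDOW_S = 0.05  # 50 ms hold, within the "milliseconds" the authors cite


def execute_with_veto(command, actuator, veto_detected):
    """Run `command` on `actuator` unless a veto arrives during the window.

    `veto_detected` is a callable polling the decoder for a conscious
    veto signal; here it is a stand-in for real signal processing.
    """
    deadline = time.monotonic() + VETO_WINDOW_S
    while time.monotonic() < deadline:
        if veto_detected():
            return False  # command cancelled before reaching the hardware
        time.sleep(0.001)  # poll roughly every millisecond
    actuator(command)
    return True


# Toy usage: a grip command that is allowed through because no veto fires.
executed = execute_with_veto(
    {"action": "close_grip", "force": 0.3},
    actuator=lambda cmd: print("executing", cmd),
    veto_detected=lambda: False,
)
```

In a real device the veto detector would presumably run on dedicated hardware, so that the added latency stays within the milliseconds the authors mention.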
Connecting the brain to a machine could put the privacy of the connected person at risk
Others do not see it as so complicated. That is the case of Max Ortiz Catalán, a researcher at Chalmers University of Technology (Sweden): “In matters of responsibility, there is not much difference between these new technologies and driving a car. A hand prosthesis can do you real damage during a handshake if the patient decides to close it with all his strength, just as a motorist can decide to run you over.”
This Mexican researcher, half engineer and half neuroscientist, implanted a robotic arm in Magnus, a truck driver who had lost his own in an accident years earlier and who, since 2013, has been able to pick up a drill, work and play with his children. “Responsibility is divided between the creators of the technology and its users. Just as there are laws that try to mitigate the potential harms of a technology, for example, not using your cell phone while driving, all neural technologies come with warnings and instructions for use, in addition to their internal safety measures,” explains Ortiz Catalán.
The technology itself offers a way to trace responsibility. Most of these systems record all the activity flowing from the machine to the brain and in the opposite direction, functioning like surveillance cameras. But this poses a new danger: the threat to privacy. So far, apart from cases like the truck driver Magnus, studies with BMIs have stayed in the laboratory. The signals emitted by the brain of a person with tetraplegia, ALS or locked-in syndrome remain in the machine, and only researchers can read them. But what if a company started to market these systems?
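As a rough illustration of how that recorded traffic doubles as an accountability trail, and why it is so privacy-sensitive, here is a hypothetical Python sketch of an append-only, hash-chained event log. It is an assumption about how such a recorder could work, not a description of any actual BMI system.

```python
import hashlib
import json
import time

# Illustrative sketch of a "surveillance camera" for BMI traffic: every
# brain-to-machine command and machine-to-brain stimulus is appended to a
# log, with each entry chained to the hash of the previous one so that
# tampering with the record is evident.


class AuditLog:
    def __init__(self, path):
        self._path = path
        self._prev_hash = "0" * 64  # placeholder hash for the first entry

    def record(self, direction, payload):
        """Append one brain->machine or machine->brain event."""
        entry = {
            "t": time.time(),
            "direction": direction,
            "payload": payload,
            "prev": self._prev_hash,  # chain entries together
        }
        line = json.dumps(entry, sort_keys=True)
        self._prev_hash = hashlib.sha256(line.encode()).hexdigest()
        with open(self._path, "a") as f:
            f.write(line + "\n")


# Toy usage with invented event names.
log = AuditLog("bmi_audit.jsonl")
log.record("brain->machine", {"command": "close_grip", "force": 0.4})
log.record("machine->brain", {"stimulus": "haptic_feedback"})
```

The same property that makes such a log useful for assigning responsibility, a complete record of what the brain asked for, is exactly what makes it a privacy liability if it leaves the laboratory.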
The most distant but most disquieting danger is the ‘hacking’ and manipulation of the mind
Here too, Ortiz Catalán, who was not involved in this work, insists that there is not much difference between these brain technologies and others much closer to us. “Our phones record a lot of information about us. Google knows everything that interests you, even what you would be embarrassed for others to know, even things that could potentially harm you if the wrong person knew about them,” he says.
But the risk that causes the most unease is perhaps the possibility that someone with bad intentions could hack the brain of a person connected to a machine. The machine that turns Stephen Hawking’s thoughts into words is not connected to his brain; he still controls it with the muscles of his cheek. But in the future he may be left with only eye movement, or a direct readout of his brain. That is where the danger would appear.
“There are several lines of research into achieving noninvasive brain stimulation,” notes Oxford University neuroscientist Laurie Pycroft. Last month, for example, a new technique was introduced to activate deep areas of the brain without having to open the skull. As these pioneering studies move forward, the security problem will grow. For Pycroft, who last year introduced the term brainjacking to describe the dangers that brain-machine interfaces will bring, “almost any electronic device runs the risk of malicious subversion, and more complex devices tend to be at greater risk.” Future neurostimulators, invasive or not, will be no exception.