Opinion

Netflix’s Bandersnatch Brings Up Ethical Questions About BMI Prosthetics

Can brain-machine interface prosthetics hijack the free will of users, as the viewer does to the protagonist of Netflix's new interactive film? And if so, who is responsible for the resulting actions?

Dov Greenbaum | 10:27, 11.01.19
Bandersnatching: to be acted upon by an external technological force, thereby removing free will and/or privacy.

 

Spoilers ahead! Netflix's groundbreaking movie, Black Mirror: Bandersnatch, released last month, is a choose-your-own-adventure interactive film, part of the dystopian Black Mirror series. A key element of the film is the breaking of the fourth wall: the movie's protagonist acknowledges that he has lost his free will to an outside force, the viewer. In this novel setup, the viewer has the option to induce the protagonist to commit violent acts, all ostensibly against his free will.

 


Free will is a foundational concept in most societies. The idea is especially fundamental to traditional criminal law and is codified in many penal codes. In many Western jurisdictions, for example, a punishable crime requires both mens rea and actus reus: a guilty mind and a guilty act. A guilty act without a guilty mind is typically not punishable. That is, the law acknowledges that, under certain conditions, a suspect may act against her will and be innocent of even the most heinous crimes. In one notable case, a Canadian man was acquitted of the gruesome murder of his mother-in-law based on a sleepwalking defense. Without input from the conscious mind, the unconscious mind's actions remain unpunishable. This is a worthwhile rule, even with its occasional misuse.

Bandersnatch movie. Photo: Netflix

 

But this central concept is constrained. The law acknowledges only a binary condition: either you have conscious control over your actions and are punishable, or you do not, and your involuntary actions are not. Problematically, this simplistic dichotomy effectively disregards modern neuroscience, which recognizes a spectrum of consciousness.

 

So far, this apparent contradiction between the law and the corresponding scientific reality has not raised substantial legal issues. One area that could challenge this status quo, however, is a case where, as in Bandersnatch, free will is lost to a technological innovation. In the real world, being bandersnatched could come about via the growing field of brain-machine interfaces (BMIs), electronic devices that interact directly with the brain.

 

Research has shown that BMIs, particularly those associated with prosthetic devices, work best when situated in a part of the brain called the posterior parietal cortex (PPC). Prosthetic devices wired via a BMI directly into this area can allow amputees and people with limited mobility to control the prosthetics with their thoughts, even to manipulate objects within their immediate extrapersonal space.

 

How do they work? The PPC is known to be associated with the preplanning component of our actions: not necessarily part of the conscious mind, but not quite part of the unconscious either. A BMI connected to the PPC, particularly one augmented with predictive artificial intelligence, can tap into these preconscious planning impulses, predicting and triggering actions that are eventually enacted by the prosthetic.

 
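To make that pipeline concrete, here is a minimal sketch, in Python, of such a decode-predict-act loop. It is purely illustrative: the hand-built templates, dot-product matching, and confidence threshold are stand-ins for what would in practice be a trained decoder, and none of the names refer to any real device's software.

```python
import numpy as np

# A minimal, illustrative sketch of a BMI decode-predict-act loop.
# Templates, threshold, and names are assumptions for exposition only.

rng = np.random.default_rng(seed=0)

# Simulated patterns of PPC activity for two pre-planned actions.
TEMPLATES = {
    "reach_for_cup": np.array([0.9, 0.1, 0.8, 0.2]),
    "rest": np.array([0.1, 0.9, 0.2, 0.8]),
}

CONFIDENCE_THRESHOLD = 0.7  # act only on reasonably unambiguous predictions

def decode(signal: np.ndarray) -> tuple[str, float]:
    """Match a noisy neural signal to the closest action template and
    return the predicted action with a crude relative-confidence score."""
    scores = {name: float(signal @ tmpl) for name, tmpl in TEMPLATES.items()}
    best = max(scores, key=scores.get)
    return best, scores[best] / sum(scores.values())

def step(signal: np.ndarray) -> str:
    """One cycle of the loop: decode the intent, act only if confident."""
    action, confidence = decode(signal)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"prosthetic executes: {action}"
    return "prosthetic holds still: signal too ambiguous"

# A clear planning signal triggers the action before any conscious command.
clear_signal = TEMPLATES["reach_for_cup"] + rng.normal(0, 0.05, size=4)
print(step(clear_signal))  # prosthetic executes: reach_for_cup
```

The point of the sketch is that the trigger is a prediction made upstream of conscious confirmation; nothing in the loop asks the user to approve the movement.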

This evolving technology raises legal and ethical concerns relating to duties of care, negligence, and criminal intent. In tapping directly into the preconscious PPC, the interface may circumvent essential command-and-control elements downstream, in the conscious part of the brain. Those elements are particularly relevant to our conceptions of free will and criminal guilt, both of which rely on conscious control over our actions.

 

Consider two situations. In the first, an amputee fitted with a BMI-mediated neuroprosthetic harms an individual after a signal from the BMI makes the prosthetic move and strike the victim. In the second, an individual uses her prosthetic daily for the same motion, such as picking up a cup of coffee. The AI in the device quickly learns that a particular signal from the brain, relayed via the BMI to the prosthetic, should be interpreted as a command to move the arm and get the cup. On one occasion, however, the signal from the BMI is too noisy to interpret, and the AI predicts and executes the movement without a clear command from the amputee's brain, causing the prosthetic arm to harm the amputee herself. In either case, is the amputee at fault? And does it matter whether the victim brings a tort claim or a criminal complaint against the amputee?

 
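The second scenario turns on a design choice: what should the controller that sits between the decoder and the motors do with an ambiguous reading? Below is a brief, hypothetical sketch of that choice, with invented names and numbers. A conservative policy refuses ambiguous input, while a predictive "autopilot" acts on its best guess anyway, producing exactly the coffee-cup failure.

```python
# Two hypothetical control policies between a BMI decoder and a prosthetic's
# motors. Names, numbers, and the decoder output are illustrative only.

CONFIDENCE_THRESHOLD = 0.7  # tuned for clean, everyday signals

def conservative_policy(action: str, confidence: float) -> str:
    """Refuse to act whenever the decoded intent is ambiguous."""
    return action if confidence >= CONFIDENCE_THRESHOLD else "hold still"

def autopilot_policy(action: str, confidence: float) -> str:
    """Always execute the decoder's best guess, even on noisy input.
    This is the failure mode in the coffee-cup scenario above."""
    return action

# Suppose a noisy recording decodes to "reach_for_cup", but only barely:
decoded_action, decoded_confidence = "reach_for_cup", 0.53

print(conservative_policy(decoded_action, decoded_confidence))  # hold still
print(autopilot_policy(decoded_action, decoded_confidence))     # reach_for_cup
```

Which policy ships in the device, and who set its threshold, is precisely the kind of fact a court would need in order to assign fault.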

In assessing liability in both criminal and tort law, we ought to consider all the relevant stakeholders, including the programmers and manufacturers of the prosthetics. We must also consider the user, who has chosen to integrate a complex device with his or her mind despite the risks posed by the uncertainties of biological systems.

 

The use of AI in the second scenario further muddies the waters. Mediating a BMI through artificially intelligent software, particularly predictive software that may take over when the signal-to-noise ratio from the neurons drops too low, can confound the issues of cause and effect necessary for ascertaining and assigning responsibility in tort law. This is all the more problematic given that some jurisdictions are beginning to recognize AI as a separate legal entity, for example in the context of autonomous vehicles.

Rather than grapple with these complexities, justice systems could apply a strict product liability rule, or something similar, to these types of devices, in criminal as well as tort cases, though at the risk of disincentivizing commercial development in this vital area.

 

Alternatively, a specialized rule could spread both the risk and the liability across the multiple stakeholders: manufacturers, programmers, and amputee users.

 

Disconcertingly, the courts could also try to determine definitively whether the individual or her prosthetic bears the criminal blame for an action. Doing so, however, would require collecting and analyzing neural impulses, effectively forcing the criminal justice system to hack into individuals' brains to assess guilt. That would severely encroach on the privacy of the citizenry, and would amount to a bandersnatching in its own right.

 

Prof. Dov Greenbaum is the Director of the Zvi Meitar Institute for Legal Implications of Emerging Technologies, at Israeli academic institute IDC Herzliya.
