RoboCops: WHAT WILL YOU DO WHEN ROBOTS CAN KILL YOU?
By: David Zwanetz

 

As far as years go, 2016 was quite a doozy. Its events included the deaths of several beloved celebrities, an unparalleled political divide that made enemies out of friends, waves of ISIS attacks (most notably in Brussels, on the heels of the late-2015 Paris attacks), another solid year of destruction and terror for war-torn Syria, and fierce riots at home and abroad. We also saw the Black Lives Matter (BLM) movement, founded in 2013, rise to national prominence in response to unrelenting reports of police shootings of African Americans, many of which seemed to be (or actually were) unjustified.

Take, for example, the police shooting of caregiver Charles Kinsey, who was clearly trying to demonstrate that he was not a threat by lying in the middle of a road with his arms outstretched when a police officer shot him. When asked why he decided to use his weapon, the officer’s response was literally, “I don’t know.” Prior to this incident, there were the apparently unwarranted and highly publicized killings of Philando Castile, Alton Sterling, Eric Garner and Michael Brown, all well documented and symbolic of an apparently deep-seated problem with law enforcement in the good old U.S. of A.; statistics indicate police are 2.5 times more likely to use lethal force on a black suspect than on a suspect of any other race.

Unfair treatment of African Americans by police officers across the country is nothing new; it remains an often ignored but foreseeable remnant of our country’s dark past. I have reflected on it deeply and written about it numerous times. I mention it again now, in this context, because I feel our country is on the precipice of a wholly new philosophical quandary when it comes to law enforcement techniques. And the “aha” moment for me, coincidentally (or maybe not), involved a BLM sympathizer. Less than two weeks before the shooting of Charles Kinsey, in the early hours of July 8th, former U.S. Army reservist and BLM supporter Micah Xavier Johnson became the first person on U.S. soil to be killed by a government-sanctioned “remote device.” On the evening of July 7th, a rally in Dallas composed of people from all walks of life, joined together to protest police brutality, was suddenly interrupted by bursts of gunfire that injured protestors, created chaos and effectively ended what had been a peaceful march.

Police initially believed there were three separate shooters, but the lone gunman, Micah Johnson, had holed himself up in an area outside El Centro College after injuring 11 people and killing five police officers with a semi-automatic rifle. After two hours of negotiations and his refusal to surrender, the Dallas Police Department made the unprecedented decision to load a bomb disposal robot with C4 plastic explosive, send it into the area where Johnson was hiding, and detonate the charge remotely once the robot was determined to be within close range of him. The detonation killed Johnson instantly. Critics cite the incident as a gross violation of basic American rights, while proponents argue that the police may have saved countless lives by making their ground-breaking decision in this new dimension of “use of deadly force.” I am by no means taking a side – I simply find the entire thing fascinating and surely worth unpacking. My mind, and hopefully yours too, automatically jumps to “what’s next?”, with visions of The Terminator or the Mayor of Detroit signing a contract with Omni Consumer Products.

Should an old-school SWAT-style arrest have been attempted prior to deploying lethal force? Might a robot armed with a taser have been more effective – and humane – than C4? Do we even care about being humane to a person who has murdered others? Whatever your answers, hopefully we can all agree that this is a slippery slope and the dawn of a new philosophical era in policing. That is what I hope people will think about and discuss when reading this blog. Eric Ivers of robot manufacturer RoboteX says the main problem with using police robots to neutralize or detain suspects in a more sophisticated manner is that we currently lack the technology to accurately track moving targets, meaning a suspect could easily escape capture or targeting by a remotely controlled machine. This is largely due to delays in communication signals and the mechanical limitations of even top-of-the-line police robots, which simply cannot react fast enough to neutralize a moving human being (a point also made in a similar article by Matt Jancer).

Since the beginning of the 21st century (or, arguably, since 9/11), the nationwide trend toward militarization of police forces has undoubtedly increased the incidence of deadly force used by officers. The rapid expansion of military technologies meant for international warfare is now being applied to the tactics and weapons of domestic security forces – a line, some have argued, that should not be crossed. At the core of the debate lies a philosophical question: are the rights of Americans violated if they are killed or harmed by tactics and weapons designed for killing enemies abroad? There are undoubtedly times when deadly force is necessary to stop a known perpetrator from inflicting even greater casualties on civilians and officers alike, but where – and when – should the line be drawn on what force is warranted?

When the Dallas P.D. made the radical decision to strap C4 explosives to their Remotec Andros Mark 5A-1 bomb disposal unit and take out a target, they opened the floodgates of a moral debate: do we, as a nation, want to empower our law enforcement officers to remotely kill suspects, no matter how small or great the actual threat a suspect may pose? The first drone strike ever to kill enemy combatants was conducted on November 14th, 2001, barely a month after the events of 9/11. With the push of a button, several suspected members of Al Qaeda were annihilated by a Hellfire missile fired from a Predator drone flying over Afghanistan. Our nation was still reeling and in shock, so the story of America’s first use of armed Predator drones was basically a non-story. After all, these were the bad hombres who had just committed the deadliest attack on American soil since the Civil War. Since that first strike in 2001, there have been over 1,500 drone strikes in Afghanistan alone (not counting the several other countries where our military is drone-happy, like Pakistan, Somalia and Yemen), leading to the deaths of an estimated 3,000-plus people.

Obama was both hailed and criticized for his decision to ramp up drone strikes as a more precise way of waging combat that could potentially cut down on the death rates of U.S. troops and foreign civilians. Though it has yet to happen, the swelling popularity of drones has many worried that the government may become more willing to use Predator-style drone attacks in domestic situations. It may seem like an easy decision to end the life of a well-known (or highly suspected) terrorist in a country halfway around the world, for several reasons: 1) the target is an enemy combatant, not an American citizen, and therefore does not enjoy the rights of one; 2) you don’t have to be physically present to witness the gruesome death and merciless suffering the act of destruction inflicts on fellow human beings; and 3) when killing is reduced to pushing a button, you are further removed from the sense that you are committing murder – after all, it wasn’t you that killed the person, but the bomb/robot/drone.

Today, police departments nationwide recognize the moral hazards involved in assigning a robot the job of neutralizing a suspect, and Micah Johnson’s case remains a one-of-a-kind instance. Johnson’s death sits at one end of the spectrum in this regard. The worst-case scenario, at the opposite end, is perhaps best exemplified by the 2014 remake of RoboCop, in which cybernetic, super-powered officer Alex Murphy has his brain’s neurotransmitter levels remotely adjusted to render him more or less a sociopathic killing machine, depending on the requirements of his current assignment. The fact that Murphy remains sentient while his brain chemistry can be manipulated to make him a “robot cop” makes him the closest thing to artificial intelligence yet imagined for use by any police force. For now, we should be grateful that such technology exists only in fiction, and that our country’s law enforcement still draws a firm line between American Civilian and Enemy Combatant. In the future, as bomb-delivering (or otherwise “death-machine”) robots become more sophisticated, and as more cases like Micah Johnson’s occur, the moral issue at their heart will undoubtedly be pressed to the point where legislation must be enacted to clarify and safeguard the rights of American citizens, and to prevent their abuse by law enforcement technology.