Almost Human (?)


Photo courtesy of the Almost Human Facebook Page

Next week Fox will premiere its latest series, Almost Human. Set in the not-so-distant future of 2048, the story follows Detective John Kennex, a cop who lost his partner, his leg, and his stability in life, and Dorian, the ‘synthetic’ (i.e. android) assigned as John’s new partner, who possesses the unfortunate flaw of emotions. Full of special effects, cool gadgets, legitimate actors, and a promising plot, Almost Human has the potential to be truly entertaining and worth watching. But how should we watch this new show?

If the trailer and extended scenes are any indication, Almost Human will dive into serious ethical and existential questions, offering up answers in the process. Therefore, we need to practice wisdom and alertness in how we watch and interact with what the show communicates (this applies to all TV shows, by the way). Three questions stand out immediately and deserve our attention.

What does it mean to be human?

The title of the show alone makes this an obvious, and legitimate, question. John is human, but he has a synthetic leg: he is broken. Does his brokenness mean he is less than human? Dorian is an android. He looks human and has emotions. At one point he says to John, “I wasn’t born, but I was made to feel.” Does his ability to feel make him more than an android?

The show subtly communicates that John and Dorian are both human because they both experience emotions. This locates our humanness in the immaterial rather than the physical, promoting a modern-day form of Gnosticism (the material world is bad; the immaterial world is good). If our humanness does not include our physical bodies, this has huge implications for how we treat our body and the bodies of others. Does this mean those who are in a coma or have an emotional or cognitive disorder are not fully human? If our bodies are not part of our humanity, are we at liberty to do whatever we want to them?

Should there be limits to biotechnology?

The second question raised is the ethical limits of biotechnology. Are there any limits? The technology of the show feels like the stuff of science fiction, yet we’re quickly approaching the day sci-fi becomes reality. Should we set any limits to biotechnology; if so, to what extent? Is it right to hook someone up to a memory machine, have a piece of technology dictate whether someone’s life is worth saving, or create programmable DNA? Even closer to reality, should we limit human cloning, brain-chip implants, or genetic engineering? These are not questions for just a few ethicists or scientists. They’re questions we will all need to answer today and in the very near future.

What does it mean to have free will?

Finally, this show promises to raise the age-old question of free will. In the trailer Dorian tells John how he is different from the other synthetics (collectively called MX-43, individually given a number). Unlike Dorian, they have no emotions. Rather, they are programmed to follow logic and rules. “They have no free will,” Dorian claims. Now, I’d agree that an android cannot have free will. However, it’s not their android-ness that prompts Dorian’s statement, but their logic and adherence to the rules. Should we assume that using logic and following rules equates to a lack of free will? Do we only experience free will in our emotions? The question of free will is complex, which makes Dorian’s reasoning that it exists only in emotions (ironic?) shaky at best.

Humanness, biotechnology, and free will are three of the questions Almost Human raises, with many more on the horizon. What other questions do you see, based on the trailer or on the show itself once it starts?

Thoughts or comments? I'd love to hear them!