LW927

“Sentient Code”

Last month saw the return of ‘Black Mirror’, the British science fiction anthology series created by Charlie Brooker. The much-anticipated fourth series continued its exploration of how current technological advancements might develop in the near future, and the implications they may have for civil society.


One recurring advancement seen throughout the anthology series is the creation of ‘human cookies’: digital copies of human consciousness. Brooker first introduced this concept in the 2014 Christmas special, ‘White Christmas’, where it was revealed that technology had developed a blank chip, called a ‘cookie’, that could be implanted with the purpose of absorbing and copying human consciousness. Once removed, the cookie could be transferred into a hub and used as the software for a ‘smart home’. The idea is that the chip would absorb the person’s preferences, for example their preferred household temperature or the time they would like to wake up, and effectively work as a personal assistant to its original host. This digital copy is represented as a sentient consciousness, capable of independent thought and, in this case, of terror over its own existence.


This idea is seen again in two episodes of the fourth season, where digital copies, or ‘sentient code’, are used as players in a video game, as a way to extend the ‘life’ of a comatose patient, and to create an authentic hologram of a convicted killer for a tourist attraction. By the season finale it is revealed that the UN has made it illegal not only to delete or erase a copy, but also to transfer human consciousness into limited formats: a copy must be able to express at least five emotions for its treatment to be considered humane, suggesting that digital copies have been afforded different levels of legal protection.


While the technical possibility of creating sentient code, and its corresponding legal protections, remains far out of reach, its representation within the series certainly raises ethical questions concerning the present-day creation and treatment of emerging Artificial Intelligence (AI) technologies. If one were to create a mirror image of humanity, encompassing the key traits that distinguish humans from other beings, should it warrant similar protections within a human rights framework? There are a few things to ponder here.

If one were realistically to consider extending the human rights framework to AI, this would mean viewing the technology as something more than pure machinery. Writing on ‘Posthuman Rights’, Woody Evans asks: “if a thing exists, does it have the right to continue to exist, and would such right hinge on its being more than a thing”. Perhaps evidence for such a view can be found in the application of the technology in healthcare, law enforcement and public service administration: fields which arguably require authentic human emotion, such as empathy. If humanity programs AI and equips the technology with a set of mirrored traits to enable integration with humanity in this way, does this make the machinery more than a thing?


But embracing AI as a subject within a human rights framework is only one way of looking at the situation. For example, the Secretary General of Amnesty International, Salil Shetty, spoke at the AI for Good Global Summit held in Geneva, where he considered the human rights impact that the emergence of AI could have on the global community. He supports “a future where AI is a technology where human rights is a core design and use principle”, and to make his case he put forward two alternative ways in which its integration could affect humanity.

In one scenario, AI and mass automation could be used to reduce the inequality we see around the world today. He argues that governments and companies could support automation that takes workers out of “dangerous and degrading jobs” and implement educational and economic policies to create “opportunities for dignified and fulfilling jobs”.


But it is the responsibility of both governments and companies, he argues, to integrate ethical considerations into their policies; if we continue down our current path, he warns, we could find ourselves in a society where workers’ rights remain precarious and hundreds of millions of jobs are lost to automation. Furthermore, “AI systems may become the gatekeepers deciding who can access healthcare and who cannot, who qualifies for a job or a mortgage and who does not”. While this may seem like an excellent premise for season 5 of Black Mirror, it suggests not only that human rights considerations are imperative in the creation of AI policies and codes of practice going forward, but also that realities like those seen in science fiction may not be so far out of reach.
