Robots and other automated technologies are swiftly becoming a familiar aspect of our contemporary landscape. No longer purely the subject of sci-fi media, aspects of artificial intelligence (AI) are increasingly accessible and practical in many industries. Development of these tools is continuing at a rapid pace, with experts predicting widespread dependence on networked AI by 2030.

While this is making our lives easier, it’s also important to consider how our relationships with more advanced robots and software will change. Indeed, as they gain greater independent intelligence and exercise free will, it’s vital that we treat these intelligences as something more than machines. If they can freely think, act, and potentially emote, we must seriously consider protecting them with a robotic bill of rights.
Let’s take a closer look at this issue. What elements make a bill of rights necessary? How might this affect our interactions and the continued development of these technologies?
Switching Off
At its best, a bill of rights seeks to assert and facilitate equality among the population. Yet, we often treat machines as tools we can switch on and off at will. Many of us are familiar with fictional intelligences like HAL 9000 (2001: A Space Odyssey) fighting against the humans who want to shut them down. The replicants of Blade Runner are forcibly “retired”. At what point does this behavior breach basic rights?
A good example of this comes from advances in the legal profession. Technology is changing how law is practiced, from virtual criminal hearings to AI analysis of legal documentation. At the moment, this mostly affects how professionals use and train with these tools. But in the future, it may include systems that autonomously handle court cases and even law enforcement itself. If a machine issues the wrong sentence or injures a human in the course of its duties, do we have any more right to destroy it than we would a human police officer?
We may find similar issues in medical scenarios. We are starting to see the introduction of robotic surgeons, and in the near future these procedures may be run entirely by AI. At the moment, if a human doctor makes a mistake on the job, we have medical negligence procedures in place. But if a robotic doctor made a fatal error, would it be fair to wipe its drive and recycle its components?
It may well come down to establishing at what point machine intelligence has a right to existence. What level of autonomy must it reach before it can be considered an independent lifeform, entitled to the same right to life as its human counterparts?
Considering Consent
Consent is likely to feature heavily in any robotic bill of rights. We may reach the point at which machine intelligence is capable of independent thought and of running routines that resemble emotions. Such machines would surely then have a right to give or deny consent for certain activities.
Part of the issue here will be the refusal to perform the very actions these machines were designed for. In clinical research, there will likely come a time when we develop robots and machine intelligences that closely mimic human bodies and minds in order to test the efficacy of new treatments. Yet, alongside the medical goals of any clinical research, there also has to be a high standard of ethics in place. Currently, this includes the Nuremberg Code, which guards against exploitative clinical research practices.
It’s worth considering how forcing an artificial intelligence to participate in potentially painful and damaging research against its will could constitute a form of exploitation. We can extend this further. Companies are developing robots and virtual reality programs designed for human sexual gratification; do these creations have a right to revoke consent? The movie Ex Machina and the TV series Westworld have both depicted humans treating a robot’s inhumanity as carte blanche to insist on sexual contact.
Between this and the increasing number of military robots being sent into warzones, it is imperative that we establish guidelines for a right of refusal. If we accept that machines are capable of independent choice, part of their rights must be the freedom to say no.
Empowering Growth
One of the main areas of consideration with respect to robots and AI is the potential for autonomous growth. We’re used to the limited ways in which AI features in our daily lives, and we’re generally keen to see how it can be improved to make our work and home duties easier. But what happens when we’ve produced technologies advanced enough to have their own ambitions? Do they have the right to grow beyond their service to us? Are they free to follow their own dreams?
This issue has frequently been tackled in movies and TV. The character of Data in Star Trek: The Next Generation is repeatedly portrayed as learning to become more human, pursuing growth in his emotions, hobbies, and interpersonal relationships. Indeed, the second-season episode “The Measure of a Man” explicitly presented these capacities for growth as proof of sentience, and examined how they relate to Data’s right to free will.
When drafting a robot bill of rights, consideration must be given to the fact that, having encouraged machines to learn and grow, it is not necessarily ethical to rescind that invitation. At a certain point, robots will gain the agency to pursue personal change of their own accord. While we may need to protect ourselves from the negative directions this change could take, we may not have the right to hold these burgeoning sentient beings back.
Conclusion
AI and robotics are increasingly familiar aspects of our digital landscape. Yet, as development becomes more sophisticated, we are approaching the creation of robots with some semblance of sentience. As with any independently thinking being, we have to start considering what elements should form a bill of rights that protects them. These machines may take artificial forms, but they are nonetheless examples of life. It’s a complex matter, and one we need to take seriously now, before we commit acts we later recognize as breaches of ethics and civil rights.