Healthcare Tool Turned Dystopian Technology: Questioning AI Ethics After Chemical Weapons' Creation

A demonstration with drug design software shows the ease with which toxic molecules can be generated.

By Ahana Mandal

6 hours. 

That was all the time an AI program needed to generate thousands of molecules mimicking potent toxins like VX, a nerve agent used in chemical warfare. And that was three years ago, back in 2021. In less time than a typical school day, the program not only listed known chemical warfare agents but also designed new compounds predicted to be even more toxic than VX. This is a concerning result, considering that VX is known to be one of the most toxic compounds in the world.

However, this tool wasn’t originally made to create new chemical weapons. It was designed to identify potential treatments for dangerous diseases while screening out candidates with high toxicity or severe side effects. The researchers behind it, from Collaborations Pharma, a pharmaceutical company working to find new drugs, were invited to the Spiez Convergence conference, where scientists were asked to examine how their own research could be turned into a threat.

According to Fabio Urbina, a senior scientist on the Collaborations Pharma team, he and his colleagues were initially stumped as to how a tool programmed only to propose potential cures could be manipulated for harmful purposes. Only five minutes into the conversation, however, Urbina recalls that they “realised that all we had to do is basically flip a little inequality symbol in our code – instead of giving a low score to the molecule for high predicted toxicity, we give it a high score for predictive toxicity” (Durrani). All it took for this generative model to go from designing potentially life-saving medication to designing new chemical weapons was a one-character change in the code.
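To make that flip concrete, here is a minimal, purely hypothetical sketch in Python. It is not the team’s actual code: predict_toxicity and score are invented stand-ins for whatever model and ranking logic Collaborations Pharma used. The point is only that a single sign controls whether a generative pipeline rewards low or high toxicity.

    # Hypothetical sketch only; not Collaborations Pharma's code.
    # predict_toxicity is a toy stand-in for a trained toxicity model.

    def predict_toxicity(molecule: str) -> float:
        """Toy model: pretend longer molecule names mean higher toxicity."""
        return float(len(molecule))

    def score(molecule: str) -> float:
        # Drug-discovery goal: reward LOW predicted toxicity.
        # Deleting this one minus sign rewards HIGH toxicity instead.
        return -predict_toxicity(molecule)

    candidates = ["aspirin-like", "novel-compound-a", "novel-compound-b-longer"]
    # A generative loop would keep the best-scoring candidates each round.
    print(sorted(candidates, key=score, reverse=True))

With the minus sign in place, the ranking favors the least toxic candidates; remove it, and the same pipeline surfaces the most toxic ones, which is the essence of what the researchers described.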

This discovery brought attention to how little ethical consideration has gone into the use of AI in chemistry and drug design. In other fields that use artificial intelligence, such as the development of language models, researchers routinely publish pages of discussion about the potential misuse of their technology. No comparable documentation or regulation exists in the chemical industry, largely because the use of AI in chemical research is so new.

It’s not just the chemical and drug industries that fail to consider the ethical implications of their use of AI. Many other industries use machine learning tools to make work more efficient, whether in the tech sector, the music industry, or healthcare. And just like Collaborations Pharma, many of them didn’t consider ethics in their use of AI, with serious consequences. According to CBS News, recent lawsuits against UnitedHealth, one of the largest health insurers in the United States, claimed that the company used faulty AI with a “90% error rate, overriding determinations made by the patients' physicians that the expenses were medically necessary.” Denials like these forced many patients into a grim choice: pay out of pocket for expensive treatment and potentially go into debt, or go without care they could not afford, at the risk of their lives. While that reporting is now a year old, it returned to public attention with the murder of UnitedHealthcare CEO Brian Thompson, which sharply polarized public reaction. Many on social media voiced support for Luigi Mangione, the suspect accused of killing Thompson, out of anger over what they saw as the greedy practices of healthcare companies, including the use of artificial intelligence to decide whether to approve or deny a patient’s claim.

There’s no denying that artificial intelligence has accelerated human progress, shortening the time it takes for revolutionary medicines to be developed or for new technology to land in our hands. Many employers, in fact, want employees who are experienced with using AI in their work: 66% of business leaders say they would not hire a candidate with no AI skills. Yet as artificial intelligence and machine learning spread, clear lines must be drawn around the potential ethical consequences of new generative tools.

AI by itself is not inherently “good” or “evil”. It is precisely because some people cannot be counted on to do the right thing that ethics must be weighed heavily in new technologies like machine learning, so that all potential consequences are understood before research and tools are shared with the world.

Livelihoods and lives are at stake. Everyone needs to weigh the implications of AI and the morality behind the technology they use, so that no one suffers because of a costly disregard for basic human ethics.