Despite my love of writing and my history in the music industry, I am not really a lover of the spotlight. Like most people, I find listening to and seeing myself quite awkward; when did I get that old? Why does my mouth do that when I talk? Recently, however, I have found a solution: artificial intelligence has gained prominence in my life in several ways. One of these is the creation of my “AI Twin,” which represents my thoughts on our social media platforms. This has alleviated the burden of my “stage fright,” as I can type out my thoughts and my twin can present them to the world. I am also using the power of AI to check the grammar of this article, and for someone who regularly writes content and reports and desperately needs a personal assistant, this has been an absolute boon.
However, with this relief and crucial assistance comes a certain amount of scepticism. It’s only fair to consider that technology is advancing too quickly for red tape and the necessary governance to keep pace. But what happens if the lack of regulation leads to serious consequences? Could we find ourselves in a situation where the absence of oversight results in crime-scene tape around the very technologies designed to enhance our lives?
Having been involved in the QA industry for a couple of decades now, I strive to stay abreast of the latest technologies and trends. That is becoming increasingly difficult given the rate at which things are advancing. It is hard to even know where to begin testing something when you don’t know what the expected outcome is. The advent of AI has brought incredible opportunities but also significant risks, especially if testing and governance are not adequately addressed. But how do we truly address this if we are unable to keep up with the rate at which technology is advancing and learning?
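One practical response to the “unknown expected outcome” problem is metamorphic testing: instead of asserting an exact answer, you assert relations that should hold between related inputs and outputs. The sketch below is a minimal, hypothetical example; `sentiment_score` is a stand-in for whatever AI system is under test, and the 0.05 tolerance is an illustrative threshold, not a standard.

```python
def sentiment_score(text: str) -> float:
    """Placeholder for the AI system under test; returns a score in [0, 1]."""
    # In a real suite this would call the deployed model or its API.
    return min(1.0, max(0.0, 0.5 + 0.1 * text.count("great") - 0.1 * text.count("awful")))

def test_paraphrase_invariance():
    # Metamorphic relation: a harmless paraphrase should not swing the
    # score dramatically, even though we never assert an exact value.
    original = "The service was great and the staff were friendly."
    paraphrase = "The staff were friendly and the service was great."
    delta = abs(sentiment_score(original) - sentiment_score(paraphrase))
    assert delta < 0.05, f"Paraphrase changed the score by {delta:.2f}"

if __name__ == "__main__":
    test_paraphrase_invariance()
    print("Metamorphic check passed.")
```

The point is that the tester never needs to know the “correct” score; they only need to know which changes to the input should, and should not, change the output.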
It is my opinion that one of the fundamental risks when implementing AI is the idea that AI doesn’t require human intervention, that AI can replace a workforce. In a recent conversation with one of our clients, I learned that their development department was being drastically downsized because they had decided to implement an AI development tool. I fear that this approach is very short-sighted. I believe an AI implementation still requires human intervention: to define the right requirements for the AI, to develop and train the machine learning models and then, most importantly, to test that the output of that AI does not propagate our incredibly human mistakes.
One of the main risks of not properly testing AI systems is the potential for biased outcomes. If AI models are trained on incomplete or biased data, they may perpetuate or even exacerbate existing societal biases, leading to the unfair treatment of individuals or groups.
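To make this concrete, here is a minimal sketch of one common fairness check, the demographic-parity gap: the difference in how often a model says “yes” across groups. All the names here (`selection_rates`, the toy loan-approval decisions, the 0.2 threshold) are illustrative assumptions, not any particular library’s API.

```python
from collections import defaultdict

def selection_rates(records):
    """Approval rate per group: how often the model says 'yes' for each group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for rec in records:
        totals[rec["group"]] += 1
        approvals[rec["group"]] += rec["approved"]
    return {g: approvals[g] / totals[g] for g in totals}

# Toy decisions from a hypothetical loan-approval model.
decisions = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(f"Selection rates: {rates}, parity gap: {gap:.2f}")
# A gap well above, say, 0.2 would be a signal to investigate the training data.
```

A check like this is cheap to run and catches exactly the kind of skew that biased or incomplete training data produces, which is why it belongs in a QA suite rather than being left to chance.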
In addition, without proper governance, AI systems may be allowed to operate in ways that are not transparent or accountable. This opacity can be detrimental, especially in high-stakes applications such as healthcare or criminal justice, where decisions made by AI can greatly impact people's lives. If an AI system makes an erroneous recommendation for a medical treatment or misidentifies a potential suspect due to flawed algorithms, the consequences can be severe and irreversible.
Moreover, the rapid deployment of AI without adequate oversight can lead to ethical dilemmas. Companies may prioritise profit over people, rolling out AI solutions without considering the broader implications of their use. For instance, surveillance technologies powered by AI can infringe on privacy rights, leading to a society where individuals are constantly monitored, a scenario that raises ethical concerns about personal freedoms. Interestingly, we are already seeing politicians use AI as an excuse for bad behaviour and get away with it because of the lack of oversight and transparency in most current AI systems. The “it wasn’t me, it was an AI rendering of me” excuse is becoming far more frequent these days.
AI offers me, and our company, remarkable assistance and opportunities for innovation. However, being in QA, I realise that the risks associated with inadequate testing and governance cannot be overlooked. Very few of the QAs I know and work with in the industry are trained in how to test artificial intelligence. In fact, very few of the QAs I know are being included in the roll-out of AI technologies within the companies they work at. It is vital that we prioritise learning how to test these incredibly complex technologies, and we must advocate for robust regulatory frameworks that ensure AI technologies are developed and deployed safely, ethically, and transparently. By doing so, we can harness the full potential of AI while safeguarding against its risks.
To navigate this delicate landscape, we must embrace a little more red tape: the essential regulations and checks that can guide our approach to testing and implementing AI. This proactive stance will help us prevent the scenarios that call for crime-scene tape, where negligence and a lack of oversight lead to chaos and harm. It is essential that this amazing technology serves society and upholds our values rather than undermining them.