There is always a possibility of software or algorithmic glitches occurring in any computerized system, including those that use artificial intelligence (AI). These glitches range in severity from minor issues that barely affect a system’s functionality to serious problems that can cause the system to malfunction or fail completely. As has been rightly stated, ‘As a human, if you make a mistake, you just correct it, but in the AI world, if the machine doesn’t learn how to correct their own mistakes, then that creates more problems’.
The likelihood of a glitch occurring depends on several factors, including the system’s complexity, the quality of its software, and its level of maintenance. Some of the possible reasons are mentioned below.
There have already been prominent cases of AI systems malfunctioning, such as Amazon’s facial recognition software falsely identifying U.S. Congress members and Facebook’s AI system struggling to keep hate content off its platform.
While it may not be possible to eliminate the risk of glitches entirely, various strategies can minimize their likelihood and reduce their impact when they do occur, some of which are:
It is important to be aware of these potential risks and to take steps to minimize them. Even so, as the number of AI use cases grows, the benefits of AI can often outweigh the potential for problems.
There is a wide variety of viewpoints on how AI should be developed and used, shaped by factors including ethical concerns, technical considerations, and societal values. With the ongoing expansion of AI use cases, it is difficult to predict with certainty what the future holds for AI. It is possible that AI systems could eventually develop capabilities that surpass those of humans in certain areas; indeed, AI has already demonstrated the ability to perform certain tasks more accurately and efficiently than humans, for example in image and speech recognition and in certain fields such as game-playing.
That said, AI systems are still limited by the data and algorithms on which they are trained. They are not capable of fully independent thought or decision-making, and their results will always be shaped by how their algorithms are trained. AI systems remain far from being able to replicate the general intelligence and creativity of human beings; they lack the ability to think abstractly, understand complex concepts, and improvise. Some researchers are working on developing a form of artificial general intelligence (AGI), which would be capable of understanding or learning any intellectual task that a human being can, but it is not clear how close we are to achieving AGI. Moreover, even if AGI were achieved, it is not clear that such a system would be capable of self-awareness or would want to surpass human beings.
Thus, AI systems can only function effectively with the knowledge and expertise of human computer scientists. AI is heavily dependent on human input, and AI systems are developed by humans. Computer scientists will therefore continue to play a critical role in the development, deployment, and proper functioning of AI systems, and will be responsible for ensuring that these systems are used ethically and responsibly. This may involve establishing clear guidelines and protocols for the use of AI, as well as ongoing monitoring and evaluation to ensure that the technology is being used in a way that is safe and beneficial to society. However, the extent to which they ‘keep a hand on’ AI may vary depending on the specific area of application and the level of autonomy of the AI systems involved.
While it is unlikely that AI systems will ever become fully self-aware like humans, they could develop capabilities that surpass those of humans in certain areas. Even so, they will remain tools that help humans perform their tasks more efficiently. AI can assist computer scientists by automating repetitive tasks such as bug fixing, testing, and code analysis; it can also uncover patterns and insights that humans might miss and help generate new ideas.
Needless to say, with reinforcement learning, several autonomous systems are being built that can operate without human intervention, such as the AI systems in self-driving cars that can navigate and make decisions on their own. But it remains to be seen how this will develop in the future.
Overall, AI has the potential to surpass human performance in certain areas, but it is unlikely to surpass human beings entirely. Humans and AI will likely continue to work together, augmenting each other’s strengths and compensating for each other’s limitations.
1 Michelle Singerman, What it means for humans when AI-powered programs fail, CPA Canada (2018), available at https://www.cpacanada.ca/en/news/canada/2018-07-31-what-it-means-for-humans-when-ai-powered-programs-fail
2 M Umer Mirza, Five biggest failures of AI, why AI projects fail?, ThinkML (2020), available at https://thinkml.ai/five-biggest-failures-ofai-projects-reason-to-fail/
3 Ibid.
4 Autonomous weapons that kill must be banned, insists UN chief, available at https://news.un.org/en/story/2019/03/1035381
5 Charles P. Trumbull IV, Autonomous Weapons: How Existing Law Can Regulate Future Weapons, 34 Emory International Law Review 2 (2020)
The views in all sections are the personal views of the author.