Artificial Intelligence(AI) – open questions



a. What is the possibility of glitches in AI models? How can the risk of such glitches be reduced?

There is always a possibility of software or algorithmic glitches occurring in any computerized system, including those that use artificial intelligence (AI). These glitches can range in severity from minor issues that do not significantly impact the system's functionality to more serious problems that can cause the system to malfunction or fail completely. As rightly stated, 'As a human, if you make a mistake, you just correct it, but in the AI world, if the machine doesn't learn how to correct their own mistakes, then that creates more problems'.

The likelihood of a glitch occurring can depend on several factors, including the system’s complexity, software quality, and maintenance level. Some of the possible reasons are mentioned below.

  • Use of the wrong tools or unavailability of appropriate resources
  • Poor quality or insufficient data for training algorithms
  • Poor data engineering quality
  • In-built algorithmic bias
  • Cyber attacks
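For instance, some data-quality problems of the kind listed above can be caught by simple automated checks before the data reaches a model. The sketch below is a minimal illustration with hypothetical field names and thresholds, not a production validator; it flags insufficient data, missing values, and class imbalance:

```python
from collections import Counter

def check_training_data(rows, label_key):
    """Basic sanity checks on training data before it reaches a model.

    rows: list of dicts (one per training example).
    label_key: name of the label field.
    Returns a list of human-readable warnings (empty if no issues found).
    """
    warnings = []
    if not rows:
        return ["dataset is empty"]

    # Insufficient data: too few examples to train a reliable model.
    if len(rows) < 100:
        warnings.append(f"only {len(rows)} examples; model may underfit")

    # Missing values: incomplete records degrade training quality.
    incomplete = sum(1 for r in rows if any(v is None for v in r.values()))
    if incomplete:
        warnings.append(f"{incomplete} rows contain missing values")

    # Class imbalance: a heavily skewed label distribution is one
    # common source of built-in algorithmic bias.
    counts = Counter(r[label_key] for r in rows)
    most, least = max(counts.values()), min(counts.values())
    if least and most / least > 10:
        warnings.append(f"label imbalance {most}:{least} may bias the model")
    return warnings

# Example: a tiny, skewed dataset with one incomplete record.
data = [{"text": "spam!", "label": "spam"}] * 50 + [
    {"text": None, "label": "ham"},
]
for w in check_training_data(data, "label"):
    print("WARNING:", w)
```

Checks like these do not eliminate data problems, but they surface the obvious ones early, before they become glitches in a deployed model.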

There have already been prominent cases of AI systems not working properly, such as Amazon's facial recognition software falsely matching U.S. Congress members, and Facebook's AI system struggling to keep hate content off the platform.

Though it may not be possible to eliminate the risk of glitches completely, various strategies can be adopted to minimize the likelihood of their occurrence and reduce their impact when they do occur, some of which are:

  • Adoption of robust best practices for software development: These include software testing and debugging, and conducting code reviews/audits to identify potential issues. Using version control tools can help identify when and where a glitch was introduced.
  • Regular maintenance and system updating: Periodic maintenance and updating are essential for the system's smooth running. They also enable checking whether the system works as expected across different use cases, and fixing identified bugs and vulnerabilities.
  • A proper strategy in place for addressing glitches: This could include dedicated resources to handle identified issues.
  • Regular performance monitoring: This can help identify potential issues impacting the system's performance and highlight opportunities to improve it.
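The monitoring step above can be sketched as a rolling accuracy check that raises an alert when a model's live performance degrades. This is a minimal illustration; the window size and threshold below are illustrative assumptions, not recommended values:

```python
from collections import deque

class PerformanceMonitor:
    """Tracks a model's recent accuracy and flags degradation.

    window: number of most recent predictions to consider.
    threshold: minimum acceptable accuracy over that window.
    """
    def __init__(self, window=100, threshold=0.9):
        self.results = deque(maxlen=window)
        self.threshold = threshold

    def record(self, predicted, actual):
        """Record one prediction outcome once ground truth is known."""
        self.results.append(predicted == actual)

    def accuracy(self):
        return sum(self.results) / len(self.results) if self.results else 1.0

    def degraded(self):
        """True once the window is full and recent accuracy has fallen
        below the threshold, signalling the system needs investigation."""
        return (len(self.results) == self.results.maxlen
                and self.accuracy() < self.threshold)

# Example: a model that starts mis-predicting after a data shift.
monitor = PerformanceMonitor(window=10, threshold=0.8)
for predicted, actual in [(1, 1)] * 7 + [(1, 0)] * 3:
    monitor.record(predicted, actual)
if monitor.degraded():
    print(f"ALERT: accuracy dropped to {monitor.accuracy():.0%}")
```

An alert like this does not fix the glitch, but it shortens the time between a problem appearing in production and a human investigating it.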

It is important to be aware of the potential risks and to take steps to minimize them. However, with the growing number of use cases of AI, the benefits of AI can often outweigh the potential for problems.

b. Do you think that computer scientists will always keep a hand on artificial intelligence? Or could it become self-aware at some point – and perhaps attempt to surpass human beings?

There are a wide variety of viewpoints on how AI should be developed and used, and a number of factors, including ethical concerns, technical considerations, and societal values, can shape these viewpoints. With the ongoing growth in the use cases of AI, it is difficult to predict with certainty what the future holds for AI. It may be possible that AI systems could eventually develop capabilities that surpass those of humans in certain areas. AI has already demonstrated the ability to perform certain tasks more accurately and efficiently than humans; for example, AI surpasses humans in tasks like image and speech recognition, and in certain fields such as game-playing.

However, AI systems are still limited by the data and algorithms on which they are trained. They are not capable of fully independent thought or decision-making, and their results will always be shaped by how the algorithms are trained. AI systems are still far from being able to replicate the general intelligence and creativity of human beings: they lack the ability to think abstractly, understand complex concepts, and improvise. Some researchers are working on developing AI systems that can achieve a form of artificial general intelligence (AGI), which would be capable of understanding or learning any intellectual task that a human being can, but it is not clear how close we are to achieving AGI. Additionally, even if AGI were achieved, it is not clear that such a system would be capable of self-awareness or would want to surpass human beings.

Thus, AI systems can only function effectively with the knowledge and expertise of human computer scientists. AI is heavily dependent on human input, and AI systems are developed by humans. Hence, computer scientists will continue to play a critical role in the development, deployment, and proper functioning of AI systems, and will be responsible for ensuring that these systems are used ethically and responsibly. This may involve establishing clear guidelines and protocols for using AI, as well as ongoing monitoring and evaluation to ensure that the technology is being used in a way that is safe and beneficial to society. However, the extent to which they will 'keep a hand on' AI may vary depending on the specific area of application and the level of autonomy of the AI systems.

  • Human oversight will be required for complex systems: AI systems that are used in critical applications, such as healthcare, finance, and transportation, will require human oversight to ensure safety and to make important decisions.
  • Human expertise will be needed to improve the performance of AI systems: As AI technology advances, computer scientists will continue to be involved in research and development to improve the performance of AI systems.
  • Human intervention may be needed to address ethical and societal implications: As AI becomes more integrated into society, computer scientists may be involved in addressing ethical and societal implications, such as transparency, accountability, and bias. They will be responsible for ensuring that the systems are transparent, that their decision-making processes are explainable, and that they do not perpetuate existing biases.

While it is unlikely that AI systems will ever become fully self-aware like humans, they could develop capabilities that surpass those of humans in certain areas. However, these will always be tools that help humans perform their tasks efficiently. AI can be used to help computer scientists by automating repetitive tasks, such as bug fixing, testing, and code analysis. AI can also help find patterns and insights that humans might miss and generate new ideas.

Needless to say, with reinforcement learning, several autonomous systems are being built that can operate without human intervention, such as the AI systems in self-driving cars, which can navigate and make decisions on their own. But it remains to be seen how these will develop in the future.

Overall, AI has the potential to surpass human performance in certain areas, but it is unlikely to surpass human beings entirely. Humans and AI will likely continue to work together, augmenting each other's strengths and compensating for each other's limitations.

1. Michelle Singerman, What it means for humans when AI-powered programs fail, CPA Canada (2018), available at
2. M Umer Mirza, Five biggest failures of AI, why AI projects fail?, ThinkML (2020), available at
3. Ibid.
4. Autonomous weapons that kill must be banned, insists UN chief, available at
5. Charles P. Trumbull IV, Autonomous Weapons: How Existing Law Can Regulate Future Weapons, 34 Emory International Law Review 2 (2020)

The views in all sections are personal views of the author.
