Artificial Intelligence refers to information-processing systems and technologies that integrate models and algorithms capable of learning and of performing cognitive tasks, producing outcomes such as prediction and decision-making in material and virtual environments.
Artificial Intelligence (AI) systems have gained prominence due to their vast potential to unlock economic value and help mitigate social challenges. Given this strong value proposition, not only the development but also the adoption of AI has surged globally in recent years.
Machine Learning (ML) and Deep Learning (DL) are two key techniques, or subsets, under the umbrella of AI. Machine Learning is based on algorithms that are originally created through human intervention, with feature engineering informed by relevant domain knowledge, but that learn and improve (i.e. modify themselves) through experience, that is, through multiple iterations over data. By contrast, Deep Learning (DL) algorithms learn and improve through layers of Artificial Neural Networks (ANNs), without extensive human feature engineering. DL algorithms are “black box” algorithms, i.e. it is not possible to understand the reason, or “why”, behind the particular end results they produce.
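The classical ML workflow described above — human-designed features combined with an algorithm that modifies itself over repeated passes through the data — can be sketched in a few lines. This is an illustrative toy example only, not drawn from the source; the fraud-flagging task, the data, and all names (`engineer_features`, the risk thresholds) are hypothetical assumptions chosen for clarity.

```python
# Illustrative sketch (hypothetical example): classical ML with human
# feature engineering plus iterative self-improvement over the data.

# Toy task: flag transactions as suspicious (1) or normal (0).
transactions = [
    {"amount": 9500, "hour": 3,  "label": 1},
    {"amount": 120,  "hour": 14, "label": 0},
    {"amount": 8700, "hour": 2,  "label": 1},
    {"amount": 60,   "hour": 11, "label": 0},
]

def engineer_features(t):
    """Human-designed features encoding domain knowledge: large amounts
    and late-night activity are treated as risk signals (an assumption)."""
    return [t["amount"] / 10_000.0, 1.0 if t["hour"] < 6 else 0.0]

# A minimal perceptron: weights start at zero and are nudged on every
# misclassified example, so the model "modifies itself" on each data pass.
weights, bias = [0.0, 0.0], 0.0
for _ in range(20):  # multiple iterations over the data
    for t in transactions:
        x = engineer_features(t)
        prediction = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
        error = t["label"] - prediction
        weights = [w + 0.1 * error * xi for w, xi in zip(weights, x)]
        bias += 0.1 * error

# The learned weights map directly onto the engineered features, which is
# what keeps such models more interpretable than deep "black box" networks,
# where the learned representation is spread across many hidden layers.
print(weights, bias)
```

The point of the sketch is the contrast with DL: here an auditor can read off which human-chosen features drive a decision, whereas a deep network learns its own internal features, making the "why" of its outputs far harder to inspect.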
What AI fundamentally does is lower the cost of prediction. Hence, public sector entities have started using AI, especially Machine Learning algorithms, to improve the efficiency of public service delivery at lower cost.
However, the significant risks associated with the use of AI in the delivery of public services also need to be carefully considered. The White House Report of May 2016 on ‘Big Data: A Report on Algorithmic Systems, Opportunity and Civil Rights’ observed that unfairness in AI-driven automated decision making arises primarily from two types of challenges: those relating to the data used as inputs to an algorithm, and those relating to the design and inner workings of the algorithm itself.
The European General Data Protection Regulation (GDPR) contains provisions on decision making based solely on automated processing, including profiling. In such cases, data subjects have the right to be provided with meaningful information about the logic involved in the decision. The GDPR also gives individuals the right not to be subject to a decision based solely on automated processing, except in certain situations.
Further, in a notable 2020 judgement concerning the System Risk Indication (SyRI) legislation, used to detect various forms of fraud (including social benefits, allowances and tax fraud), the Hague District Court in the Netherlands ruled that the legislation does not strike a fair balance between the benefits the new technology brings and the violation of the right to a private life that its use entails, and that in this respect it is insufficiently transparent and verifiable.
Due to these ethical and privacy concerns, there has been growing recognition of, and strong advocacy for, the use of Responsible AI across governments, businesses and civil society organizations.
Strengthening mechanisms for international collaboration to facilitate responsible use of, and access to, AI systems and technologies is necessary to address the ethical and privacy challenges and systemic risks that arise. For Supreme Audit Institutions (SAIs), which exercise oversight over government policies and actions related to AI, audit of AI is essential to ensure that the ethical values and principles integral to Responsible AI are adhered to, while realizing the full potential benefits of AI.
It may be noted that in 2020, the SAIs of Finland, Germany, the Netherlands, Norway and the UK brought out a white paper for public auditors titled “Auditing machine learning algorithms”, which identified the main general problems and risks involved in the use of such algorithms.
India, being one of the fastest-growing economies, has a significant stake in the AI revolution that has taken the world by storm. Recognizing AI’s potential to transform economies and the need for India to strategize its approach to be a part of this change, the government has taken up the task of crafting a national strategy for AI. The strategy document argues that India has the strength and characteristics to position itself among the leaders on the global AI map. It also focuses on how India can leverage transformative technologies to ensure social and inclusive growth, in line with the development philosophy of the government.
It is true that AI has the potential to provide large incremental value to a wide range of sectors and is rightly termed a transformative technology. However, certain barriers to AI adoption have also been identified that need to be addressed in order to achieve these goals. The barriers analyzed are a) lack of broad-based expertise in research and application of AI, b) absence of enabling data ecosystems, c) high resource cost and low awareness for adoption of AI, d) privacy and security concerns, and e) absence of a collaborative approach to the adoption and application of AI.
The strategy emphasizes that as AI-based solutions permeate the way we live and do business, questions of ethics, privacy and security will emerge. Appropriate handling of data, and ensuring privacy and security, are therefore of prime importance; the strategy suggests establishing data protection frameworks and sectoral regulatory frameworks, and promoting the adoption of international standards.
To foster public trust and confidence in AI technologies and fully realize their potential, the G20 has drawn up AI Principles with the purpose of maximizing and sharing the benefits of AI while minimizing its risks and concerns, with special attention to international cooperation.
The objectives of the Engagement Group of SAI20 on Responsible AI are to discuss:
ROLE OF SUPREME AUDIT INSTITUTIONS IN THE AUDIT OF RESPONSIBLE AI

Responsible AI and Role of SAIs
A beginning was made with the G20 Ministerial Statement on Trade and Digital Economy in June 2019, which articulated that the digital society must be built on trust among all stakeholders, including governments, civil society, international organizations, academics and businesses, through shared values and principles such as equality, justice, transparency and accountability, taking into account the global economy and interoperability. This was followed by the adoption of the Recommendation on the Ethics of Artificial Intelligence by the General Conference of the United Nations Educational, Scientific and Cultural Organization (UNESCO) in November 2021.
It will become increasingly necessary that SAIs are able to audit AI algorithms in both compliance and financial audits. The scope and depth of AI audits may vary, requiring different levels of technical expertise from auditors and different levels of access to the underlying technical components. Since AI models tend to be part of the IT infrastructure and services, elements of IT audit approaches may also need to be included, as in audits of other software/application development projects.
One of the important drivers of effective audit in environments where AI systems have been deployed is the availability of well-trained audit professionals with a diverse mix of technical and non-technical skills. Supreme Audit Institutions therefore need to invest in the professional development of their personnel to build the skills required for auditing in AI environments. For a high-level audit, auditors need a good understanding of the principles underlying AI algorithms and up-to-date knowledge of the latest developments; for a thorough and detailed audit including substantive testing, auditors may also need to understand common coding languages and model implementations, and be able to use appropriate software tools.
In addition to auditing AI, SAIs also need to explore the possibility of using AI in their own audit processes to make audits more effective and efficient.