
Responsible Artificial Intelligence (AI)

PRIORITY-2: RESPONSIBLE ARTIFICIAL INTELLIGENCE

A. RATIONALE
AI: GLOBAL PERSPECTIVE

Artificial Intelligence refers to information-processing systems and technologies that integrate models and algorithms that produce a capacity to learn and to perform cognitive tasks leading to outcomes such as prediction and decision-making in material and virtual environments.

Artificial Intelligence (AI) systems have gained prominence due to their vast potential to unlock economic value and help mitigate social challenges. Thus, not only the development but also adoption of AI has seen a global surge in recent years. The rapid increase in adoption can also be attributed to the strong value proposition of the technology.

Machine Learning (ML) and Deep Learning (DL) are two key techniques, or subsets, under the umbrella of AI. Machine Learning is based on the creation of algorithms that are originally created through human intervention with feature engineering (identified with relevant domain knowledge) but learn and improve (i.e. modify themselves) through experience (i.e. multiple iterations of data). By contrast, Deep Learning (DL) algorithms learn or improve themselves through layers of Artificial Neural Networks (ANNs) without extensive human feature engineering. DL algorithms are “black box” algorithms, i.e. it is not possible to understand the reason, or “why”, behind the end results they produce.
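
To make the distinction concrete, the following is a minimal Python sketch (not drawn from this document) using scikit-learn: a classical model trained on hand-engineered features alongside a small neural network that learns its own internal representations from the raw inputs. The dataset and feature choices are illustrative assumptions only.

```python
# Minimal illustrative sketch: classical ML with human feature engineering
# versus a neural network learning representations on its own.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic data standing in for raw measurements.
X_raw, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X_raw, y, random_state=0)

# Classical ML: an analyst adds engineered features (here, a simple interaction
# term and squared terms chosen with assumed "domain knowledge").
def engineer_features(X):
    return np.hstack([X, X[:, :1] * X[:, 1:2], X[:, :3] ** 2])

clf_ml = LogisticRegression(max_iter=1000)
clf_ml.fit(engineer_features(X_train), y_train)
print("Feature-engineered model accuracy:",
      clf_ml.score(engineer_features(X_test), y_test))

# Deep learning (simplified): a multi-layer neural network consumes the raw
# inputs and learns intermediate representations itself; its internal weights
# are far harder to interpret, hence the "black box" label.
clf_dl = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
clf_dl.fit(X_train, y_train)
print("Neural-network model accuracy:", clf_dl.score(X_test, y_test))
```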

What AI fundamentally does is to lower the cost of prediction. Hence, public sector entities have started using AI, especially Machine Learning algorithms, to improve the efficiency of public services delivery at lower costs.

However, the significant risks associated with the use of AI in the delivery of public services also need to be carefully considered. The White House Report of May 2016 on ‘Big Data: A Report on Algorithmic Systems, Opportunity and Civil Rights’ observed that unfairness in AI-driven automated decision-making arises primarily on account of two different types of challenges:

  • Challenges relating to data used as inputs to an algorithm – a bias in the historical data taints the future decisions/ predictions, and
  • Challenges related to the inner workings of the algorithms itself – the black-box nature of the algorithms.
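
As a concrete illustration of the first challenge above, the following hypothetical Python sketch shows the kind of simple outcome comparison an auditor might run on historical decision data to look for group-level disparities. The column names, data and tolerance are illustrative assumptions, not drawn from any real system or prescribed methodology.

```python
# Hypothetical check for disparities in historical automated decisions.
import pandas as pd

# Illustrative records of past decisions (assumed column names).
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   0,   0,   1,   0,   1],
})

# Approval rate per group; a large gap may signal bias inherited from the data.
rates = decisions.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()
print(rates)
print(f"Approval-rate gap between groups: {gap:.2f}")
if gap > 0.2:  # illustrative tolerance, not a regulatory threshold
    print("Gap exceeds the illustrative tolerance - warrants closer examination.")
```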

The European General Data Protection Regulation (GDPR) contains provisions on decision making based solely on automated processing, including profiling. In such cases, data subjects have the right to be provided with meaningful information about the logic involved in the decision. The GDPR also gives individuals the right not to be subject solely to automated decision making, except in certain situations.

Further, in a notable judgement delivered in 2020 by the Hague District Court in the Netherlands concerning the System Risk Indication (SyRI) legislation, used to detect various forms of fraud (including social benefits, allowances and taxes fraud), the Court ruled that the SyRI legislation does not strike a fair balance between the benefits that the new technology brings and the violation of the right to a private life through its use, and that, in this respect, it is insufficiently transparent and verifiable.

Due to these ethical and privacy concerns, there has been extensive recognition of, and strong advocacy for generating awareness of, Responsible AI across governments, businesses and civil society organizations.

Strengthening mechanisms for international collaboration to facilitate responsible use and access to AI systems and technologies is necessary to address the ethical and privacy challenges and systemic risks that arise. For Supreme Audit Institutions (SAIs), which exercise oversight over the policies and actions of governments related to AI, audit of AI is essential to ensure that ethical values and principles which are an integral part of Responsible AI are adhered to, while realizing the full potential benefits of use of AI.

It may be noted that in 2020, the SAIs of Finland, Germany, the Netherlands, Norway and the UK brought out a white paper for public auditors titled “Auditing machine learning algorithms”. The white paper identified the main general problems and risks as follows:

  • Developers of ML algorithms will often focus on optimising specific numeric performance metrics, resulting in a high risk that requirements of compliance, transparency and fairness are neglected.
  • Product owners within the auditee organisation might not communicate their requirements well to ML developers. Further, auditee organisations often lack the resources and competence to develop ML applications internally and thus rely on consultants or procure ready-made solutions, which increases the risk of using ML without the understanding necessary for both ML-based production/ maintenance and compliance requirements.
  • There is significant uncertainty among public sector entities about the use of personal data in ML models; organisational regulatory structures are not necessarily in place and accountability needs to be clarified.

The white paper concluded that SAIs should be able to audit ML-based AI applications and assess whether the use of ML contributes to effective and efficient public services, in compliance with relevant rules and regulations. Further, ML auditors require special knowledge and skills, and SAIs should build up the competence of their auditors.

In addition to the audit of AI, SAIs may also need to carefully consider and explore the scope for using AI in the audit process. Application of AI could deliver substantial value through increased efficiency and effectiveness of audit by leveraging past data relating to the cognitive work and judgement of auditors. Many large private sector audit firms are applying AI in a diverse range of audit activities such as audit planning, risk assessments, tests of transactions, analytics and the preparation of audit working papers.
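
As a hedged illustration of how ML might assist such audit activities, the sketch below flags unusual transactions for auditor follow-up using an off-the-shelf anomaly detector from scikit-learn. The synthetic data and parameters are assumptions for illustration only, not a recommended audit methodology.

```python
# Hypothetical use of unsupervised anomaly detection in a test of transactions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic transaction features: [amount, days_to_payment].
normal = rng.normal(loc=[1000, 30], scale=[200, 5], size=(500, 2))
unusual = np.array([[9000.0, 2.0], [50.0, 180.0]])   # a few atypical transactions
transactions = np.vstack([normal, unusual])

model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = model.predict(transactions)                   # -1 marks likely anomalies
flagged = transactions[flags == -1]
print(f"{len(flagged)} transactions flagged for auditor review")
```
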
AI: INDIAN PERSPECTIVE

India, being one of the fastest-growing economies, has a significant stake in the AI revolution that has taken the world by storm. Recognizing AI’s potential to transform economies and the need for India to strategize its approach to be a part of this change, the government has taken up the task of crafting a national strategy for AI. This strategy document shows that India has the strength and characteristics to position itself among the leaders on the global AI map. It also focuses on how India can leverage transformative technologies to ensure social and inclusive growth in line with the development philosophy of the government.

It is true that AI has the potential to provide large incremental value to a wide range of sectors and is rightly termed a transformative technology. However, certain barriers to AI adoption have also been identified that need to be addressed in order to achieve these goals. The barriers identified are a) lack of broad-based expertise in research and application of AI, b) absence of enabling data ecosystems, c) high resource cost and low awareness for adoption of AI, d) privacy and security concerns, and e) absence of a collaborative approach to the adoption and application of AI.

The strategy lays emphasis on the fact that as AI-based solutions permeate the way we live and do business, questions of ethics, privacy and security will emerge. Appropriate handling of data and ensuring privacy and security are therefore of prime importance; the strategy suggests establishing data protection frameworks and sectoral regulatory frameworks, and promoting the adoption of international standards.

B. OBJECTIVES

To foster public trust and confidence in AI technologies and fully realize their potential, the G20 has drawn up AI Principles with the purpose of maximizing and sharing the benefits from AI, while minimizing the risks and concerns, with special attention to international cooperation.

The objectives of the Engagement Group of SAI20 on Responsible AI are to discuss:

  1. Governance issues – fairness, transparency, accountability, data privacy and security, human rights and safety – to be examined during the audit of AI systems.
  2. Performance issues – economy in terms of reduced costs, efficiency in terms of productivity gains, and effectiveness in terms of achievement of intended objectives – to be examined during the audit of AI systems.
  3. Leveraging AI for more effective and efficient audit, through its use in different stages of the audit process.
  4. Mechanisms for capacity development and knowledge sharing across SAIs, related to audit in environments with extensive use of AI and for application of AI in audit.
C. THEMES

ROLE OF SUPREME AUDIT INSTITUTIONS IN THE AUDIT OF RESPONSIBLE AI

Responsible AI and Role of SAIs

A beginning was made in the G20 Ministerial Statement on Trade and Digital Economy of June 2019, which articulated that the digital society must be built on trust among all stakeholders, including governments, civil society, international organizations, academia and businesses, through sharing common values and principles such as equality, justice, transparency and accountability, taking into account the global economy and interoperability. This was followed by the adoption of the Recommendation on the Ethics of Artificial Intelligence by the General Conference of the United Nations Educational, Scientific and Cultural Organization (UNESCO) in November 2021.

It will become increasingly necessary that SAIs are able to audit AI algorithms in both compliance and financial audits. The scope and depth of AI audits may vary, requiring differing levels of technical expertise from the auditors and different levels of access to the underlying technical components. Since AI models tend to be part of the IT infrastructure and services, elements from IT audit approaches may also need to be included, similar to the audit of other software/ application development projects.

One of the important drivers of effective audit in environments where AI systems have been deployed is the availability of well-trained audit professionals with a diverse mix of technical and non-technical skills. Therefore, Supreme Audit Institutions need to invest in the professional development of their personnel to gain the essential skills required for auditing in AI environments. For a high-level audit, auditors need a good understanding of the high-level principles of AI algorithms and up-to-date knowledge of the latest developments; however, for a thorough and detailed audit including substantive testing, auditors may need to understand common coding languages and model implementations, and be able to use appropriate software tools.
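
By way of illustration, the following hypothetical sketch shows one tool-assisted check an auditor with coding skills might perform during detailed testing: probing which inputs most influence a model's predictions using permutation importance in scikit-learn. The model and data are synthetic stand-ins, not taken from any audited system.

```python
# Hypothetical transparency check: which inputs does the model rely on?
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=800, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in score; features whose
# shuffling hurts most are the ones the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: mean score drop when shuffled = {importance:.3f}")
```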

In addition to audit of AI, SAIs also need to consider and explore the possibility for using AI in their audit processes to make audit more effective and efficient.

Questions for Discussion
  1. How can SAI20 plan the audit of AI systems in terms of fairness, transparency, accountability, data privacy and compliance?
  2. Which public policy areas should be prioritized when auditing the AI systems used in them?
  3. How can SAI20 encourage the use of AI in audit processes to make audit more effective and efficient?
  4. How can SAI20 cooperate in building capacity for auditing Responsible AI?
Expected Outcomes and Key Deliverables
  • Identification of public policy areas which require prioritization in terms of auditing the AI systems used in them, and analysis of their risks and impacts.
  • A guidance framework for auditors for the examination of AI systems from the perspective of upholding values and principles on which there is broad consensus.