
Could an AI ethics audit end up like GDPR?


Continuing the AI ethics theme from my last post, Spinoza: Building an AI Ethics Framework From First Principles, I came across some early research on AI ethics audits.

As proposed in the paper Ethics‑Based Auditing to Develop Trustworthy AI, the idea is still somewhat vague: ethics-based auditing is defined as “a governance mechanism that can be used by organizations that design and deploy AI systems to control or influence the behavior of AI systems.”

So the proposal is to audit the organization's behavior: a structured process by which an entity's conduct is assessed for consistency with relevant principles or norms.

However, defining the norms is the challenge.

In a nutshell, functionality audits focus on the rationale behind decisions, code audits entail reviewing the source code, and impact audits investigate the effects of an algorithm’s outputs. According to the paper, by promoting procedural regularity and strengthening institutional trust, ethics-based auditing can provide a consistent set of criteria for the following outcomes (a small code sketch follows the list):

  • Provide decision-making support by visualizing and monitoring outcomes
  • Inform individuals why a decision was reached and how to contest it
  • Allow for a sector-specific approach to AI governance
  • Relieve human suffering by anticipating and mitigating harms
  • Allocate accountability by tapping into existing governance structures
  • Balance conflicts of interest, e.g. by restricting access to sensitive information to an authorized third party.
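To make “a consistent set of criteria” slightly more concrete, here is a minimal sketch (my own illustration, not something from the paper) of how those criteria could be recorded as a machine-readable checklist that an auditor fills with evidence; the class, the criterion names and the example evidence string are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class AuditCriterion:
    """One ethics-audit criterion plus the evidence collected for it (illustrative only)."""
    name: str
    description: str
    evidence: list[str] = field(default_factory=list)

    @property
    def satisfied(self) -> bool:
        # A criterion only counts as covered if at least one piece of evidence is attached.
        return len(self.evidence) > 0

# The criteria below paraphrase the outcomes listed above; the names are my own shorthand.
checklist = [
    AuditCriterion("decision_support", "Outcomes are visualized and monitored"),
    AuditCriterion("contestability", "Individuals are told why a decision was reached and how to contest it"),
    AuditCriterion("sector_specific", "Governance is adapted to the sector in question"),
    AuditCriterion("harm_mitigation", "Harms are anticipated and mitigated"),
    AuditCriterion("accountability", "Accountability is allocated via existing governance structures"),
    AuditCriterion("conflict_balance", "Access to sensitive information is restricted to an authorized third party"),
]

# Example: attach a piece of evidence gathered during the audit, then report coverage.
checklist[1].evidence.append("Decision letters include the main reasons and an appeal route")

for criterion in checklist:
    print(f"{criterion.name}: {'covered' if criterion.satisfied else 'open'}")
```

Nothing here is prescriptive; the point is simply that criteria made this explicit can be tracked and compared across audits.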

The process of ethics-based auditing should be continuous, holistic, dialectic, strategic and design-driven. Hence, the audits need to run continuously and their output must be evaluated. The holistic impact of AI, including alternatives to AI decision-making, should be taken into account. The ethics framework should ensure that the right questions are asked, and the process should ensure that the default actions are the right ones from an ethical standpoint. Finally, trustworthy AI is about design; interpretability and robustness should be built into systems from the start. Ethics-based auditing supports this aim by providing active feedback to the continuous (re-)design process.
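As a rough illustration of what “continuous” and “design-driven” could mean in practice, the sketch below runs a couple of hypothetical checks over a batch of logged decisions each audit cycle and collects findings to feed back into redesign; the checks, field names and the 60% threshold are all my own assumptions, not anything prescribed by the paper.

```python
from typing import Callable, Optional

# Each check inspects a batch of logged decisions and returns a finding, or None if it passes.
Check = Callable[[list], Optional[str]]

def check_missing_explanations(outputs: list) -> Optional[str]:
    """Flag decisions that were issued without a recorded explanation."""
    missing = [o for o in outputs if not o.get("explanation")]
    return f"{len(missing)} decision(s) lack an explanation" if missing else None

def check_rejection_rate(outputs: list) -> Optional[str]:
    """Flag cycles where the rejection rate exceeds an assumed 60% tolerance."""
    if not outputs:
        return None
    rate = sum(1 for o in outputs if o["decision"] == "reject") / len(outputs)
    return f"rejection rate {rate:.0%} exceeds tolerance" if rate > 0.60 else None

def run_audit_cycle(outputs: list, checks: list) -> list:
    """One audit cycle: run every check and collect findings for the (re-)design process."""
    return [finding for check in checks if (finding := check(outputs)) is not None]

# Toy batch of logged decisions for one cycle (entirely made up for illustration).
batch = [
    {"decision": "reject", "explanation": None},
    {"decision": "approve", "explanation": "income above threshold"},
    {"decision": "reject", "explanation": "insufficient credit history"},
]
for finding in run_audit_cycle(batch, [check_missing_explanations, check_rejection_rate]):
    print("FINDING:", finding)
```

Run on every release (or on a schedule), the findings become the feedback that the paper argues should drive the continuous (re-)design process.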

The paper Ethics-based auditing of automated decision-making systems: interve… gives additional considerations for automated decision-making systems (ADMS):

  • Value and vision statement: Whether value and vision statements are publicly communicated; how the behaviour of ADMS reflects value and vision statements
  • Principles and codes of conduct: How principles and codes of conduct are translated into organisational practices
  • Ethics boards and review committees: What pathways exist through which ethical issues can be escalated and tensions managed
  • Stakeholder consultation: What the perceived impact of ADMS is on decision-subjects and their environment
  • Employee education and training: Whether ethical considerations are regarded in training programmes; what tools and methods employees have at their disposal to aid ethical analysis and reasoning
  • Performance criteria and incentives: What types of behaviour existing reward structures incentivise; how well performance criteria support stated values and visions
  • Reporting channels: How to provide avenues for whistleblowing that enable organisational learning
  • Product development: Which trade-offs have been made in the design phase and why; what ethical risks are associated with intended and unintended uses of the ADMS
  • Product deployment and redesign: Whether and how the ADMS has been piloted prior to deployment; how system monitoring and stakeholder consultation can inform the continuous redesign of ADMS
  • Periodic audits: Whether periodic audits account for and review the ethical behaviour of organisations and ADMS; how decisions and processes are documented and communicated for transparency and traceability
  • Monitoring of outputs: Which mechanisms can alert or intervene if the outputs of ADMS transgress given tolerance spans (a rough code sketch follows this list)
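As flagged above, here is a rough sketch of what monitoring outputs against “tolerance spans” might look like in code; the metric (approval-rate gap between two groups), the field names and the thresholds are illustrative assumptions rather than anything the paper specifies.

```python
from dataclasses import dataclass

@dataclass
class ToleranceSpan:
    """Acceptable range for a monitored metric, with a wider 'hard' band that triggers intervention."""
    warn_low: float
    warn_high: float
    hard_low: float
    hard_high: float

def approval_rate_gap(outputs: list) -> float:
    """Difference in approval rates between two (assumed) groups 'a' and 'b'."""
    def rate(group: str) -> float:
        decisions = [o for o in outputs if o["group"] == group]
        return sum(o["approved"] for o in decisions) / max(len(decisions), 1)
    return rate("a") - rate("b")

def monitor(outputs: list, span: ToleranceSpan) -> str:
    """Alert when the metric drifts outside the warning band; intervene when it leaves the hard band."""
    gap = approval_rate_gap(outputs)
    if not (span.hard_low <= gap <= span.hard_high):
        return f"INTERVENE: gap {gap:+.2f} outside hard limits"   # e.g. pause automated decisions
    if not (span.warn_low <= gap <= span.warn_high):
        return f"ALERT: gap {gap:+.2f} outside warning band"      # e.g. notify the ethics board
    return f"OK: gap {gap:+.2f} within tolerance"

# Toy batch of decisions; both the metric and the thresholds are arbitrary illustrations.
batch = [
    {"group": "a", "approved": 1}, {"group": "a", "approved": 1},
    {"group": "b", "approved": 1}, {"group": "b", "approved": 0},
]
print(monitor(batch, ToleranceSpan(-0.1, 0.1, -0.2, 0.2)))
```

The interesting governance question is less the check itself than who sets the spans and who is allowed to override an intervention.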

On the face of it, all this sounds like a good idea – but could it end up enriching large consulting companies without providing real value?

GDPR comes to mind: the intentions are good – but there are really very few benefits that we, as customers, see from such regulation.

Nevertheless, an AI ethics audit may well be on the way, and if it does appear, we are already getting a glimpse of what it could look like.

Image source: Pixabay
