We all recognize the value of AI ethics, but how do we implement it? The challenge is to create a solution that conforms to ethical standards while also being implementable and scalable.
Most importantly, such a framework should start with values first and then point the way toward implementing them in code.
Understanding Artificial Intelligence Ethics and Safety, published by the Alan Turing Institute, offers a pragmatic, values-based approach to implementing AI ethics. The full document can be downloaded below. Because the framework itself is quite long, this article provides an overview of one possible approach.
In essence, the framework proposes three building blocks for creating a responsible AI project delivery ecosystem:
- At the most basic level, you have a framework of ethical values called the SUM values (Support, Underwrite, and Motivate) which creates a responsible data design and use ecosystem. The objectives of these SUM Values are (1) to provide you with an accessible framework to start thinking about the moral scope of the societal and ethical impacts of your project and (2) to establish well-defined criteria to evaluate its ethical permissibility.
- The second level is the level of actionable principles for responsible design and use of AI systems. These will be called FAST Track Principles. The objectives of these FAST Track Principles are to provide you with the moral and practical tools (1) to make sure that your project is bias-mitigating, non-discriminatory, and fair, and (2) to safeguard public trust in your project’s capacity to deliver safe and reliable AI innovation.
- At the third level, there is a process-based governance framework (PBG Framework) that operationalises the SUM Values and the FAST Track Principles across the entire AI project delivery workflow. The objective of this PBG Framework is to set up transparent processes of design and implementation that safeguard and enable the justifiability of both your AI project and its product.
These building blocks are visualized below.
The FAST Track Principles can be visualized as below:
At this point, the FAST Track Principles give you a way to implement these values, chiefly through transparency. Through transparency mechanisms, designers and implementers should be able to explain to all stakeholders, in everyday (non-technical, understandable) language, how and why a model behaved the way it did in a specific context. They should also be able to explain both the outcome and the process behind the design and use of the algorithms.
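To make the idea of outcome transparency concrete, here is a minimal sketch in Python of a deliberately simple, rule-based screening model that returns both its decision and a plain-language account of how that decision was reached. The loan-screening scenario, the rules, and the thresholds are all hypothetical, chosen purely for illustration; they are not part of the Turing Institute framework.

```python
# A sketch of outcome transparency: the model's decision comes packaged
# with human-readable reasons, so designers can explain to stakeholders
# how and why it behaved as it did. Rules and thresholds are hypothetical.

def screen_application(income: float, debt_ratio: float) -> dict:
    """Return a decision plus everyday-language reasons for it."""
    reasons = []
    approved = True

    if income < 20_000:
        approved = False
        reasons.append(f"annual income {income:,.0f} is below the 20,000 minimum")
    else:
        reasons.append(f"annual income {income:,.0f} meets the 20,000 minimum")

    if debt_ratio > 0.4:
        approved = False
        reasons.append(f"debt-to-income ratio {debt_ratio:.0%} exceeds the 40% limit")
    else:
        reasons.append(f"debt-to-income ratio {debt_ratio:.0%} is within the 40% limit")

    return {"approved": approved, "reasons": reasons}


result = screen_application(income=35_000, debt_ratio=0.55)
print("Approved" if result["approved"] else "Declined")
for reason in result["reasons"]:
    print(" -", reason)
```

A real AI system will rarely be this simple, but the design principle carries over: whatever the underlying model, the system should expose an explanation of its outcome that a non-technical stakeholder can inspect.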
Now comes the implementation: transparent AI.
There are two ways to define transparency: either (1) the quality an object has when one can see clearly through it, or (2) the quality of a situation or process that can be clearly justified and explained because it is open to inspection and free from secrets. Transparency as a principle of AI ethics encompasses both of these meanings.
In the next article, we will cover the technical aspects of choosing, designing, and using an interpretable AI system.