Continuing our two-post series on Machine Learning Ethics.
In part two of his series, guest blogger Francesco Corea moves on to consider moral accountability. How might an obligation to act ethically be codified into algorithms or complete machine learning products?
Drawing on sources as diverse as the World Economic Forum and MIT Technology Review, Francesco guides us through some of the issues. He is pragmatic about where there are not yet easy answers, but also helps us consider the challenge of liability.
Back to Francesco to conclude his two-part series on Machine Learning Ethics…
How to design machines with ethically significant behaviours
The alignment problem, presented in part one, is also known as the ‘King Midas problem’. It arises from the idea that, no matter how we tune our algorithms to achieve a specific objective, we are unable to specify and frame those objectives well enough to prevent the machines from pursuing undesirable ways to reach them. A theoretically viable solution would be to let the machine maximise our true objective without setting it ex ante, leaving the algorithm free to observe us and understand what we really want (as a species, not as individuals). This might even entail the machine allowing itself to be switched off if needed.
Sounds too good to be true? Well, maybe it is. I agree with Nicholas Davis and Thomas Philbeck from the World Economic Forum, who wrote in the Global Risks Report 2017:
“There are complications: humans are irrational, inconsistent, weak-willed, computationally limited and heterogeneous, all of which conspire to make learning about human values from human behaviour a difficult (and perhaps not totally desirable) enterprise”.
Not all problems (or AI) are created equal
What the previous section implicitly suggested is that not all AI applications are the same: error rates matter differently in different industries. Under this assumption, it might be hard to draw a line and design a single accountability framework that neither penalises applications with limited impact (e.g. a recommendation engine) nor underestimates the impact of others (e.g. healthcare or autonomous vehicles).
To address this, we might end up designing multiple accountability frameworks to justify algorithmic decision-making and mitigate negative biases.
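To make the idea of multiple, risk-tiered frameworks concrete, here is a minimal sketch. The tier names, domain mapping, and safeguard lists are entirely hypothetical illustrations of the principle that accountability requirements should scale with an application's potential impact; they are not drawn from any real regulation.

```python
# Hypothetical risk tiers (illustration only): stricter tiers demand
# more safeguards, reflecting the idea that a recommendation engine
# and a diagnostic system should not face identical requirements.
TIER_REQUIREMENTS = {
    "low": ["error logging"],
    "medium": ["error logging", "human review of contested decisions"],
    "high": ["error logging", "human review of contested decisions",
             "third-party audit", "named accountable owner"],
}

# Illustrative mapping from application domain to risk tier.
APPLICATION_TIERS = {
    "recommendation_engine": "low",
    "credit_scoring": "medium",
    "healthcare_diagnosis": "high",
    "autonomous_vehicle": "high",
}

def required_safeguards(application: str) -> list:
    """Return the accountability safeguards a given application would need."""
    # Unknown applications default to the strictest tier.
    tier = APPLICATION_TIERS.get(application, "high")
    return TIER_REQUIREMENTS[tier]

print(required_safeguards("recommendation_engine"))  # ['error logging']
print(required_safeguards("autonomous_vehicle"))
```

The design choice worth noting is the default-to-strictest fallback: when an application's impact is unclear, a framework like this would err on the side of more oversight rather than less.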
Certainly, the most straightforward way to understand who owns the liability for a given AI tool is to consider the following threefold classification:
- We should hold the AI system itself as responsible for any misbehaviour (does that make any sense?);
- We should hold the designers of the AI responsible for malfunctioning and bad outcomes (but this might be hard, because AI teams often include hundreds of people, and such a preventative measure could discourage many from entering the field);
- We should hold accountable the organisation running the system (to me this sounds the most reasonable of the three options, but I am not sure about its implications. And which company in the AI value chain should be liable? The final provider? The company that built the system in the first place? The consulting firm that recommended it?).
Different ways to look at accountability for Machine Learning Ethics
There is no easy answer, and much more work is required to tackle this issue. But I believe a good starting point has been provided by Sorelle Friedler and Nicholas Diakopoulos (in MIT Technology Review), who suggest considering accountability through the lens of five core principles:
- Responsibility: a person should be identified to deal with unexpected outcomes, not in terms of legal responsibility but rather as a single point of contact;
- Explainability: a decision process should be explainable, not in technical terms but in a form accessible to anyone;
- Accuracy: garbage in, garbage out is likely to be the most common cause of inaccuracy in a model. Data and error sources therefore need to be identified, logged, and benchmarked;
- Auditability: third parties should be able to probe and review the behavior of an algorithm;
- Fairness: algorithms should be evaluated for discriminatory effects.
This article is an extract from my new book “An Introduction to Data: Everything You Need to Know About AI, Big Data and Data Science”, published by Springer and coming out next year.
Machine Learning Ethics – are you preparing for future challenges?
Thanks again to Francesco for that interesting review of the issues to consider. I will also watch out for that book for a future book review. Love the shameless plug, Francesco! 🙂
What about your Machine Learning products? Are you considering Machine Learning Ethics and potential frameworks for accountability or liability? It makes sense to get on the front foot with this one. I hope these two posts have helped prompt your own reflections and plans for action.
Until then, have a great month – and keep thinking ethically.