Ethical AI Depends on a Very Human Practice

James Kotecki
Machine Learning in Practice
5 min read · Jan 2, 2020


Take a closer look at the machine. Photo by Shane Aldendorff on Unsplash.

ethics: the discipline dealing with what is good and bad and with moral duty and obligation

Merriam-Webster

There’s a lot of talk about ethics in artificial intelligence these days. But it’s helpful to remember that AI doesn’t have ethics. Humans do. The definition above makes clear that ethics deals with fundamentally human questions. What is “good and bad”? What is “moral duty”? Humans have been debating these issues for millennia, and we can’t expect computers to suddenly give us the answers. However, the rise of AI in the form of machine learning adds new complexity and urgency to the conversation.

Machine learning that automates or guides human processes can lead to a number of vexing (and news-making) ethical quagmires, from racist photo labeling to sexist hiring practices to self-driving cars that endanger human lives. But don’t blame algorithms for ethical failures.

“At the level of how we conceive of these technologies, it seems completely nonsensical for us to say that a function — a linear regression — is racist,” said Kathryn Hume, then VP of Product & Strategy at integrate.ai, in a podcast conversation with me. “That doesn’t make much sense.”

Humans must take responsibility for their algorithms, for the data that powers them, and for the ways that they are — or are not — monitored and adjusted. AI is a tool, and the real issue is how humans can use it in an ethical way.

Machines Need Ongoing Human Oversight

Two obvious ways to build unethical AI are to do so intentionally — a failure of human morality — or to use flawed training data — a failure of human technique. But while both of those involve human action, a third way to create unethical AI doesn’t take any action at all. All it requires is for humans to do nothing once AI has been deployed into the real world. This is the failure of human attention.

After all, there is (or there should be) a fair amount of human attention when machine learning is being developed and initially deployed. But involvement may drop off during the production phase as team members move on to other projects. So what happens when a system adjusts to new data or processes “expected” data in unexpected ways? That’s when you could discover algorithmic problems or data flaws you didn’t anticipate when you built the system in the first place. And if you’re not keeping an eye on the system, that discovery could come much too late.

With AI, the truly unethical thing to do is to “set it and forget it.” Businesses that deploy AI should expect bias and other flaws to rear their heads eventually. And they need a plan to deal with that. Ethics are a human thing, and so if we want machine learning to be ethical, humans need to stay involved.
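What that plan looks like will vary, but as a purely illustrative example, one simple way to “keep an eye on the system” is to compare the data a deployed model is seeing against the data it was trained on. The sketch below is a minimal drift check using a crude Population Stability Index; the 0.2 threshold is a common rule of thumb, not a standard, and the `notify_owners` call is a hypothetical placeholder for whatever alerting a team actually uses.

```python
import numpy as np

def check_feature_drift(training_col, live_col, threshold=0.2):
    """Flag a feature whose live distribution has drifted from training.

    Uses a rough Population Stability Index (PSI) over 10 quantile bins.
    threshold=0.2 is a common rule of thumb, not a universal standard.
    """
    # Bin edges come from the training distribution
    edges = np.quantile(training_col, np.linspace(0, 1, 11))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values

    train_counts, _ = np.histogram(training_col, bins=edges)
    live_counts, _ = np.histogram(live_col, bins=edges)

    # Convert to proportions, avoiding division by zero
    train_pct = np.clip(train_counts / len(training_col), 1e-6, None)
    live_pct = np.clip(live_counts / len(live_col), 1e-6, None)

    psi = np.sum((live_pct - train_pct) * np.log(live_pct / train_pct))
    return psi, psi > threshold

# Example: compare this week's live inputs for one feature against training data
# psi, drifted = check_feature_drift(train_df["age"], live_df["age"])
# if drifted:
#     notify_owners(f"Feature 'age' drifted (PSI={psi:.2f}); review the model")
```

A check like this doesn’t explain why the world changed, but it tells a human that it did, which is exactly the moment attention is needed.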

Full Explainability Can Be An Impossible Standard

Of course, keeping an ethical eye on your algorithms can be a challenge; since machine learning systems aren’t explicitly programmed, there is no human-written set of rules to reference when something goes wrong. The topic of explainability in AI is an important one. But demanding full explainability holds the algorithm to a potentially impossible standard, and certainly a standard higher than the capability of humans we would otherwise trust with the job.

That’s because humans may not be able to fully explain their own motivations or decisions. Vincent Vanhoucke, Principal Scientist at Google, puts it this way:

There is mounting evidence from neuroscience that those stories we tell, those explanations that we provide for our actions, are mostly mere post-rationalizations. Chances are, ‘I am thirsty’ never crossed your consciousness before you picked up that glass. Perhaps, your hands were idle, and your body was craving for activity of some sort. Or maybe you were saying a joke, and the movement of picking up the glass punctuated your punchline elegantly. Or part of you was eager to leave soon, so get this boring conversation over with. And maybe all of those factors came into play, and maybe none, and maybe none of those proximal causes crossed the threshold of your consciousness either.[1]

Instead of ‘Explainable’, Focus on ‘Auditable’

Although machine learning may not be fully explainable or understandable, it certainly needs to be auditable. This means both that businesses should have access to people capable of auditing their algorithms, and that algorithms should be constructed in an auditable way from the start.

While the true motivations of the human mind can be murky, the ability to audit human behavior is a standard business requirement. Anything that drives a business decision has to be auditable — financial numbers, compliance, employee behavior, or anything else that happens in a company. That’s the reality of the business world. There is no human oversight without the ability to oversee.
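What “constructed in an auditable way” means will differ by system, but at a minimum it usually implies recording, for every automated decision, what went in, what came out, and which version of the model produced it. Below is a minimal, illustrative sketch; the field names, the versioning scheme, and the `score_applicant` call are assumptions made for the example, not a prescribed schema.

```python
import json
import hashlib
from datetime import datetime, timezone

MODEL_VERSION = "credit-risk-2020-01-02"  # assumed versioning scheme for illustration

def log_decision(features: dict, prediction, log_path="decision_audit.log"):
    """Append an audit record for a single automated decision.

    Captures inputs, output, model version, and a timestamp so a human
    reviewer can later reconstruct what the system did when something
    goes wrong.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": MODEL_VERSION,
        "features": features,
        "prediction": prediction,
        # Hash of the inputs makes silent edits to the record detectable
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Usage with a hypothetical scoring function:
# prediction = score_applicant(features)
# log_decision(features, prediction)
```

The point is not this particular log format; it’s that the record exists before anyone needs it, so auditing is possible without reverse-engineering the system after the fact.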

If something goes wrong, businesses need to know what happened. Of course, not every detail will be necessary, and perhaps in an ML context, not every detail will even be knowable. Furthermore, different leadership levels in the same company may require different levels of monitoring to meet an ethical bar. A C-suite executive might want a summary of the technology, the data that went into it, how it was coded and designed to begin with, and how (and how often) it’s being monitored. Someone who is less senior and closer to the deployment may have more detailed questions.

But in general, people need to be able to assess the logic of what AI is doing and describe it to others. A CEO wouldn’t hire an employee or subcontractor who could never be questioned. So why would they agree to put technology into their business that could never be investigated — especially if the technology were a part of sensitive business processes for which the company could receive legal and public scrutiny?

AI, like all tools, is neither good nor bad. Reserve moral judgements for the humans who use and guide the tool. Because AI can yield unexpected results when deployed into the world, it is especially important that humans remain in the loop to monitor and adjust the tool as needed. The reasons why a system performs a certain action may not always be fully explainable, but humans must have the opportunity to ask questions and alter course.

The oft-quoted Spider-Man aphorism that “with great power comes great responsibility” is certainly applicable to machine learning. The benefits of the technology are enormous, and ethical oversight must factor into the cost of investment. Having no oversight is unethical. Having perfect oversight is impossible. The solution lies somewhere in between.

James Kotecki is the Director of Marketing & Communications at Infinia ML, a team of data scientists, engineers, and business experts putting machine learning to work.

[1] Interestingly, there is now some reason to reconsider the idea that your brain decides to do something before you know you’ve decided, which is part of the evidence that Vanhoucke cites. I still like his broader point about the inability to fully explain one’s own actions, however.
