Machine learning: promises and challenges
While machine learning is fueling technology that can help workers or open new possibilities for businesses, there are several things business leaders should know about machine learning and its limits.
Explainability
One area of concern is what some experts call explainability: the ability to be clear about what a machine learning model is doing and how it makes decisions. “Understanding why a model does what it does is actually a very difficult question, and you always have to ask yourself that,” Madry said. “You should never treat this as a black box, that just comes as an oracle … yes, you should use it, but then try to get a feeling of what are the rules of thumb that it came up with? And then validate them.”
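Madry’s advice to probe a model’s rules of thumb can be put into practice with standard tools. Below is a minimal sketch using scikit-learn’s permutation importance, which measures how much a model’s test accuracy drops when each input feature is shuffled; the built-in dataset and random-forest model here are chosen purely for illustration, not taken from the examples in this article.

```python
# Probe which features a trained model actually relies on, rather than
# treating it as a black box: shuffle one feature at a time and see how
# much held-out accuracy drops.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative built-in dataset; any tabular classification task works.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffling a feature the model truly depends on should hurt accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in top[:5]:
    print(f"{name}: accuracy drop {score:.3f}")
```

Inspecting the top-ranked features is exactly the “validate the rules of thumb” step: if the model leans on a feature that shouldn’t matter, that is a warning sign.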
This kind of validation is especially important because systems can be fooled and undermined, or simply fail on certain tasks, even ones humans perform easily. For example, making small, deliberately chosen changes to an image’s pixels can confuse a computer: with a few adjustments imperceptible to a person, a machine identifies a picture of a dog as an ostrich.
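The dog-to-ostrich result comes from research on adversarial examples, in which an attacker nudges each pixel slightly in the direction that most increases the model’s error. The sketch below reproduces the idea on a toy NumPy logistic-regression “classifier”; the random vectors stand in for real images and everything here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 100                       # 200 toy "images", 100 "pixels" each
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ w_true > 0).astype(float)    # labels from a hidden linear rule

# Train a logistic-regression classifier by gradient descent.
w = np.zeros(d)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / n

x = X[0]
p0 = 1 / (1 + np.exp(-(x @ w)))
print(f"original prediction for class 1: {p0:.3f}")

# Fast-gradient-sign attack: move every pixel a small step in the
# direction that most increases the loss on this one example.
epsilon = 0.25
x_adv = x + epsilon * np.sign((p0 - y[0]) * w)
p_adv = 1 / (1 + np.exp(-(x_adv @ w)))
print(f"prediction after small perturbation: {p_adv:.3f}")
```

Each pixel moves by at most 0.25, a fraction of its natural variation, yet the prediction swings to the opposite class.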
Madry pointed out another example in which a machine learning algorithm examining X-rays seemed to outperform physicians. But it turned out the algorithm was correlating results with the machines that took the image, not necessarily the image itself. Tuberculosis is more common in developing countries, which tend to have older machines. The machine learning program learned that if the X-ray was taken on an older machine, the patient was more likely to have tuberculosis. It completed the task, but not in the way the programmers intended or would find useful.
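The X-ray story is an instance of what researchers call shortcut learning, and it is easy to reproduce on synthetic data: give a model a spuriously correlated feature (here, a made-up “old scanner” flag) and it will lean on it, then fail when the correlation disappears. All numbers below are fabricated for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
tb = rng.integers(0, 2, n)                    # true diagnosis (synthetic)
signal = tb + rng.normal(scale=2.0, size=n)   # weak "medical" evidence
# In the training hospitals, old scanners and TB coincide 95% of the time.
old_scanner = np.where(rng.random(n) < 0.95, tb, 1 - tb)

X_train = np.column_stack([signal, old_scanner])
model = LogisticRegression().fit(X_train, tb)
print("training-hospital accuracy:", model.score(X_train, tb))

# New hospital: scanner age no longer says anything about diagnosis.
tb_new = rng.integers(0, 2, n)
signal_new = tb_new + rng.normal(scale=2.0, size=n)
old_new = rng.integers(0, 2, n)
print("new-hospital accuracy:",
      model.score(np.column_stack([signal_new, old_new]), tb_new))
```

The model looks excellent where the shortcut holds and collapses toward the weak medical signal alone where it doesn’t, just as the X-ray algorithm did.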
The importance of explaining how a model is working — and its accuracy — can vary depending on how it’s being used, Shulman said. While most well-posed problems can be solved through machine learning, he said, people should assume right now that the models only perform to about 95% of human accuracy. A movie-recommendation algorithm that is 95% accurate might be acceptable to both the programmer and the viewer, but that level of accuracy wouldn’t be enough for a self-driving vehicle or for a program designed to find serious flaws in machinery.
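Shulman’s point is ultimately about error rates at scale. A rough back-of-the-envelope sketch makes it concrete; only the 5% error rate comes from the text, and the decision counts are invented for illustration.

```python
error_rate = 0.05  # the ~95%-of-human-accuracy assumption from the text

# Movie recommendations: a handful of decisions per user per week.
recs_per_week = 20
print(f"bad movie picks per user per week: "
      f"{error_rate * recs_per_week:.0f}")        # ~1: a mild annoyance

# A self-driving car: assume 10 safety-relevant perception calls per
# second (an invented but conservative figure).
calls_per_hour = 10 * 60 * 60
print(f"perception errors per driving hour: "
      f"{error_rate * calls_per_hour:.0f}")       # ~1,800: unacceptable
```

The same error rate that costs a viewer one dud movie a week would produce thousands of mistakes per hour on the road.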
Bias and unintended outcomes
Machines are trained by humans, and human biases can be incorporated into algorithms — if biased information, or data that reflects existing inequities, is fed to a machine learning program, the program will learn to replicate it and perpetuate forms of discrimination. Chatbots trained on how people converse on Twitter can pick up offensive and racist language, for example.
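This replication is easy to demonstrate. In the sketch below, a model is trained on synthetic “hiring” decisions in which equally skilled candidates from one group were historically penalized; the model faithfully reproduces the penalty. All features, groups, and numbers are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
skill = rng.normal(size=n)           # identical skill in both groups
# Historical decisions: skill counted, but group B was docked a point.
hired = (skill - 1.0 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Score two candidates with identical skill from different groups.
for g, name in [(0, "group A"), (1, "group B")]:
    p = model.predict_proba([[0.5, g]])[0, 1]
    print(f"{name} hire probability: {p:.2f}")
```

The model never saw anyone’s intent, only the biased outcomes, and that was enough for it to treat identical candidates differently.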
In some cases, machine learning models create or exacerbate social problems. For example, Facebook has used machine learning to show users ads and content that will interest and engage them. But when the content that engages people most is incendiary, partisan, or inaccurate, the models end up amplifying it, feeding polarization and the spread of conspiracy theories.
Ways to fight bias in machine learning include carefully vetting training data and putting organizational support behind ethical artificial intelligence efforts, such as embracing human-centered AI: the practice of seeking input from people of different backgrounds, experiences, and lifestyles when designing AI systems. Initiatives working on this issue include the Algorithmic Justice League and The Moral Machine project.
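One concrete form of vetting is to audit a trained model’s behavior for each subgroup rather than relying on overall accuracy alone. The sketch below reuses the synthetic hiring setup from the earlier example; the data and the specific audit metrics are illustrative, not a standard prescribed by the initiatives above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, n)
skill = rng.normal(size=n)
hired = (skill - 1.0 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Audit: overall accuracy hides the problem; per-group selection rates
# expose it.
preds = model.predict(X)
print(f"overall accuracy: {(preds == hired).mean():.2f}")
for g in (0, 1):
    mask = group == g
    print(f"group {g}: accuracy {(preds[mask] == hired[mask]).mean():.2f}, "
          f"selection rate {preds[mask].mean():.1%}")
```

A large gap in selection rates between equally skilled groups is a red flag that the training labels, not the candidates, carry the bias, and it is the kind of check worth running before a model ever reaches production.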