Eliminating artificial intelligence bias is everyone's job
With reliance on AI-based decisions and operations growing by the day, it is important to take a step back and ask whether everything is being done to ensure fairness and mitigate bias. There needs to be more awareness and training around the use of AI, and not just for developers and data scientists, but also for product managers, executives, marketing managers, and buyers. That’s the word from John Boezeman, chief technology officer at Acoustic, who shared his views on the urgency of getting AI right.
Q: How far do current efforts go toward achieving fairness and eliminating bias in AI results?
Boezeman: Trying to prove a bias or skew in AI is a very difficult problem and requires a lot of extra care, services, and financial investment, not only to find these issues but also to compensate for them. Many corporations have used biased or incomplete data in different models; understanding that and changing this behavior requires cultural change and careful planning within a company.
Those operating under data ethics principles will be well placed to avoid bias in AI, or at least to detect and correct it when identified.
Q: Are companies doing enough to regularly review their AI results? What is the best way to do this?
Boezeman: As new tools become available for AI analysis, we will see many more companies regularly reviewing their AI products. Today, many companies either buy a product that has an AI feature or embedded capability, or the AI is part of a proprietary feature of that product, which does not expose any review capability.
Companies can also build basic AI capabilities for a specific use case, usually at the detection level of AI usage. However, in each of these cases the ability to investigate is generally limited. Where auditing becomes important is in the “recommendation” and “action” stages of AI. In both of these stages, it is important to use a monitoring tool to watch for bias and track the results.
One of the best ways to help with AI auditing is to use the AI and ML services of the large cloud service providers. Many of these vendors have technical tools and stacks that allow you to track this information. It is also important for the identification of bias or skewed behavior to be part of the training for data scientists and AI and ML developers. The more people are educated on what to look out for, the better prepared companies are to identify and mitigate AI bias.
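As a concrete illustration of the kind of check an auditing tool runs, here is a minimal sketch (not from the interview, and simpler than any vendor stack) of the common "disparate impact" ratio: comparing a model's favorable-outcome rates across two groups. The group data and the 0.8 threshold (the well-known four-fifths rule of thumb) are illustrative.

```python
def positive_rate(outcomes):
    """Fraction of favorable (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the groups' positive-outcome rates.

    Values far below 1.0 suggest the model favors group_b over group_a.
    """
    return positive_rate(group_a) / positive_rate(group_b)

# Illustrative model decisions (1 = approved, 0 = denied) for two groups.
group_a = [1, 0, 0, 1, 0, 0, 0, 0]   # 25% approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approved

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common "four-fifths" rule of thumb for flagging skew
    print("Potential bias flagged for review")
```

A real audit would track metrics like this continuously across many attributes and model versions; the point here is only that the check itself is simple arithmetic once the outcomes are logged per group.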
Q: Should IT managers and staff receive more training and awareness to reduce AI bias?
Boezeman: Absolutely. Both data scientists and AI / ML developers need training on bias and skew, but it is also important to extend this training to product managers, executives, marketing managers, and buyers.
It's easy to fall into the trap of doing what you've always done, or to simply carry forward a biased approach as many businesses have done before. But with training around reducing AI bias, employees across your organization will be able to recognize bias rather than trusting that everything AI does is true. From there, your company can work to minimize its impact.
Q: AI and machine learning initiatives have been going on for several years now. What lessons have been learned about the most productive adoption and practice?
Boezeman: AI is not a panacea for solving everything. I’ve seen many attempts to throw AI at a use case, regardless of whether AI is the right fit, all to enable a marketing story without delivering real value. The trick to successfully deploying an AI solution is a combination of data quality and the quality of the models and algorithms that guide the decision. Simply put, if you put trash in, you get trash out. The most successful efforts have a clear use case and well-defined data to work with.
Q: What areas of the organization are seeing the most success with AI?
Boezeman: AI has many different levels, but for the most part they can be boiled down to three basic states: detection, recommendation, and automated action. Right now, the places I see AI used most are in the detection space: insights, alerts, and notifications. This is where the system tells you that something unusual is going on outside of a known pattern, or that something is moving in a direction you should pay attention to. People trust this type of interaction and model, and can easily validate it if they want proof.
Marketers use AI in the detection space to find out how successful their campaigns are, for example. Another example is a buyer using an AI-powered solution to detect fraud or issues with the customer journey.
Where I still see a lot of hesitation is in the recommendation and action states. I used to work on a product that calculated the best price for a product, and the order of its display on a web store, based on many data points, from size, to profit, to time in inventory, to the storage space used in a warehouse. And this result, if you turned it on, could trigger action automatically. What we found is that a lot of merchants liked to see the recommendation, but they wanted to take the action themselves; they also wanted to see a number of options; and finally, they wanted to see the reasoning behind why the system recommended a choice. When we first launched it, we lacked the “Why did the system recommend XYZ?” functionality. Until we provided a way to let the buyer see what the recommendation was based on, they did not trust it.
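The pattern Boezeman describes, a recommendation that carries its own "why", can be sketched in a few lines. This is a hypothetical toy, not Acoustic's product: the pricing rules, field names, and markdown logic are all illustrative assumptions, and the point is only the shape of the output (a recommendation plus the evidence it was based on).

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    """A recommended price together with the reasons that produced it."""
    price: float
    reasons: list = field(default_factory=list)

def recommend_price(base_cost, margin_target, days_in_stock):
    """Recommend a price, recording each data point that influenced it."""
    rec = Recommendation(price=base_cost * (1 + margin_target))
    rec.reasons.append(f"target margin {margin_target:.0%} over cost {base_cost}")
    if days_in_stock > 60:
        rec.price *= 0.9  # hypothetical markdown rule for slow-moving stock
        rec.reasons.append(f"10% markdown: {days_in_stock} days in stock")
    return rec

rec = recommend_price(base_cost=40.0, margin_target=0.5, days_in_stock=90)
print(f"Recommended price: {rec.price:.2f}")
for reason in rec.reasons:
    print(" -", reason)
```

Surfacing the `reasons` list alongside the number is what lets a merchant review the recommendation, compare options, and choose whether to act, rather than having the system act opaquely on their behalf.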
Q: What technologies or technology approaches make the biggest difference?
Boezeman: There are many companies working in this field that are designing new, influential technologies every day. Spark and Amazon SageMaker are two examples. The technologies that make the biggest difference, though, are the ones that allow you to identify bias in your AI models. When AI algorithms are biased, they can lead to inaccurate and unfair results. By being able to see the bias in the system, you can then detect and mitigate it. As the industry matures, this will become a key baseline capability that all technology stacks must support.