How to overcome organisational barriers to adopting artificial intelligence

AI can seem scary for many people

Technology that seems to think for itself naturally raises concerns. People inherently distrust autonomous machines, and artificial intelligence (AI) can seem scary to many. As AI becomes a fact of life in daily business, this fear of the unknown can significantly hinder project leaders tasked with getting buy-in across an organisation for AI-based tools.

The key to overcoming this scepticism is to help users understand how AI works so they can trust AI-generated insights. Showing is always more powerful than telling, so to increase understanding, project leaders need to demonstrate which variables and trends drive the outputs the AI tool is targeting.

1. Change the variables that feed the algorithm

It’s possible to reveal the inner workings of the tool by showing that the outputs of the algorithm are sensitive to changes in certain variables. This makes it easier to understand how the AI tool came to its recommendations and why it may have discounted others.
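
In practice, this kind of sensitivity check can be as simple as nudging one input at a time and watching how the model’s output for a given case moves. The sketch below assumes a scikit-learn-style model; the data and model are illustrative stand-ins, not any particular product.

```python
# Sensitivity check: nudge one input variable at a time and watch how the
# model's output for a given case moves. Model and data are stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

case = X[0].copy()                        # one case users care about
baseline = model.predict_proba([case])[0, 1]

for i in range(X.shape[1]):
    nudged = case.copy()
    nudged[i] += X[:, i].std()            # shift feature i by one standard deviation
    shifted = model.predict_proba([nudged])[0, 1]
    print(f"feature {i}: output moves by {shifted - baseline:+.3f}")
```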

2. Change the algorithm itself 

By changing the algorithm itself, it can become clear which variables play the most significant roles in the outcome. The algorithm is typically a complex network of many nodes; removing a layer of nodes and then assessing the impact can show people how it works. Sometimes, a slight change in one variable leads to a significant change in the output.
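
One hedged way to illustrate this is to train the same network with and without a hidden layer and compare how each version performs. The data and architecture below are illustrative, not a recommendation.

```python
# Compare a network against a version with one hidden layer removed, to show
# how much that layer contributes to the result. Everything here is illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

full = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                     random_state=0).fit(X_tr, y_tr)
ablated = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                        random_state=0).fit(X_tr, y_tr)

print("with both hidden layers:", full.score(X_te, y_te))
print("with one layer removed: ", ablated.score(X_te, y_te))
```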

3. Build global surrogate models 

Where the AI algorithm is complex, you can build a surrogate model in parallel that is simpler and easier to explain. For example, it could be a decision tree or a linear regression that mimics the more complex AI model. While the results won’t necessarily align perfectly, the surrogate model’s results should strongly echo the AI tool’s results. When this happens, users can understand some of the steps involved in the AI process.
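
A minimal sketch of a global surrogate, assuming a scikit-learn-style black-box model (data and models below are illustrative), is to fit a shallow decision tree to the black box’s own predictions and report how often the two agree.

```python
# Global surrogate: train a shallow decision tree to mimic the black-box
# model's predictions, then measure fidelity. All names are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's outputs, not on the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate agrees with the AI model on {fidelity:.0%} of cases")
print(export_text(surrogate))    # human-readable rules to show stakeholders
```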

4. Build LIME models 

Surrogate models that are localised are called local interpretable model-agnostic explanations (LIME). With LIME, you don’t have to approximate the entire model with a single linear stand-in. Instead, you can generate synthetic samples around an event, then fit a simple model just for that event, locally. This helps users understand which features mattered most in the classification of that particular event.
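
The open-source lime package automates this, but the idea can be sketched by hand: perturb the case in question, weight the synthetic samples by how close they sit to it, and fit a small weighted linear model to the black box’s scores. Everything below is an illustrative stand-in.

```python
# LIME-style local explanation, sketched by hand: generate synthetic samples
# around one event, weight them by proximity, and fit a local linear model
# to the black-box probabilities. Data and models are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

case = X[0]                                            # the event to explain
rng = np.random.default_rng(0)
samples = case + rng.normal(scale=X.std(axis=0) * 0.5, size=(500, X.shape[1]))

# Weight synthetic samples by how close they sit to the case being explained.
distances = np.linalg.norm(samples - case, axis=1)
weights = np.exp(-(distances ** 2) / (2 * distances.std() ** 2))

local = Ridge(alpha=1.0)
local.fit(samples, black_box.predict_proba(samples)[:, 1], sample_weight=weights)

for i, coef in enumerate(local.coef_):
    print(f"feature {i}: local influence {coef:+.3f}")
```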

By applying one or a combination of these four techniques, project leaders can help develop an understanding among stakeholders that lets them accept the usefulness, and even the necessity, of AI.

However, this doesn’t mean projects will automatically get off the ground. The next stage is to build trust before presenting any controversial or challenging hypotheses. There are three steps to doing this effectively: 

Step one: Detect events and trends that conform to people’s expectations 

When projects live up to people’s expectations, users develop a degree of comfort with the processes and technologies involved. When users know what the outcome will be, and the AI tool confirms their expectations, they gain confidence that the tool will get it right even when they don’t know the result in advance. This increases the level of buy-in.
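
In practice, this can be as simple as scoring a batch of historical cases whose outcomes the business has already verified and reporting how often the tool agrees. The data and model below are illustrative stand-ins.

```python
# Score cases with known outcomes and report how often the AI agrees with
# what users already expect. Data and model are illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_tr, X_known, y_tr, y_known = train_test_split(X, y, test_size=0.2,
                                                random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# y_known stands in for outcomes the business has already verified by hand.
print(classification_report(y_known, model.predict(X_known)))
```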

Step two: Use different criteria for event and non-event cases 

Unconscious bias can mean that humans change the way they examine facts. For example, if they’re trying to detect fraud, the brain will go through different processes when examining a case that looks like fraud versus one that doesn’t. It’s possible to take the same intuitive approach with AI, in which the tool can show why one event triggers a fraud alert and another doesn’t. This can show users that the tool operates in a familiar and trustworthy way. 
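
One hedged way to make that comparison concrete, assuming a simple scoring model (the features and model below are illustrative), is to contrast the per-feature contributions behind a flagged case and a cleared one.

```python
# Contrast a flagged case with a cleared one: for a linear scoring model,
# per-feature contributions show which variables pushed each case towards
# or away from an alert. Model and data are illustrative stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

flagged = X[y == 1][0]           # stands in for a case the tool alerts on
cleared = X[y == 0][0]           # stands in for a case it waves through

# Contribution of each feature to the log-odds of an alert, per case.
for name, case in [("flagged", flagged), ("cleared", cleared)]:
    contributions = model.coef_[0] * case
    top = int(np.argmax(np.abs(contributions)))
    print(f"{name}: contributions {np.round(contributions, 2)}, "
          f"driven mostly by feature {top}")
```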

Step three: Ensure detected outcomes remain consistent 

Businesses can only make decisions based on statistically significant results. For results to be statistically significant, they must remain consistent over time and be replicable, whether they were generated by AI or some other method. When a possible fraud event is run through an AI model, it should be flagged consistently and for the same reasons every time. Stability is the key to establishing trust: users need to see that the system is reliable. Companies can build user interfaces that bring the backend of the AI tool into the daylight and show users what is occurring.
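
A minimal consistency check, with illustrative data and model, is simply to score the same case repeatedly against a fixed model and confirm the result never drifts; in a real pipeline the same idea extends to checks across retrains and data refreshes.

```python
# Run the same possible fraud case through the model many times and confirm
# the score never changes. Data and model are illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)   # fixed seed

suspect_case = X[0:1]                      # a possible fraud event
scores = {float(model.predict_proba(suspect_case)[0, 1]) for _ in range(100)}

assert len(scores) == 1, "model is not giving stable scores for the same case"
print("fraud score is stable at", scores.pop())
```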

Once project leaders have established understanding of, and then trust in, AI tools, they can begin to solicit buy-in from employees. When AI feels less threatening and easier to understand, employees are more likely to support its use. While it’s natural to view any new technology with scepticism, the insights generated by AI can dramatically reshape how a company operates, so organisations should do everything they can to secure buy-in.

Alec Gardner is director, global services and strategy, at Think Big Analytics.
