
Unsure about AI? IBM has the answer


Does your company use AI? You might be at risk.

As AI adoption accelerates across industries, many companies find themselves unable to address some of the most critical issues AI presents.

How can we ensure the outcomes don’t discriminate?

Can we prove compliance to regulators?

Are our models still accurate?

Can the AI be trusted?

The proliferation of these questions across businesses represents a huge opportunity for IBM.

Companies have plenty of questions about AI but lack the expertise to address them properly, and IBM's competitors offer no comparable enterprise solution.

This gives IBM a significant market advantage through MLOps Trust, an offering that provides clients with highly specialized Subject Matter Experts (SMEs).

The SMEs apply industry best practices using IBM technology to address clients' unique challenges around machine learning and AI. Whether a client builds its AI models with an IBM product or a non-IBM solution (such as Microsoft Azure, Domino, or AWS SageMaker), our experts can help through a six-week engagement.

Deloitte confirms: Business leaders are worried about AI risks and are not prepared for them

Deloitte recently released a report confirming that business leaders are concerned about the risks AI poses to their businesses and underlining how unprepared they are to mitigate those risks.

Leaders expressed concern around model trust and transparency issues in 3 categories:

  • Fairness (Bias)
  • Explainability
  • Accuracy (Drift)

Each of these categories can be illustrated with a real-world example:

EXAMPLE #1: Fairness

100 people apply for a job: 50 men and 50 women. 40 men are selected, but only 25 women. Is this fair? Can we safely assume that those selected were the most qualified? Or is there a flaw in how the AI interpreted the data? The company needs to understand how the model reached its conclusion and be able to determine whether it is biased.
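To make the fairness check concrete, here is a minimal Python sketch that applies the "four-fifths rule", a common disparate-impact heuristic, to the hiring numbers above. The rule and the code are illustrations only, not the specific method used by IBM's offering.

```python
# Minimal sketch: checking the hiring example against the "four-fifths rule",
# a common disparate-impact heuristic (an illustration, not IBM's method).

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants who were selected."""
    return selected / applicants

men_rate = selection_rate(40, 50)    # 0.80
women_rate = selection_rate(25, 50)  # 0.50

# Disparate impact ratio: rate of the disadvantaged group
# divided by the rate of the advantaged group.
di_ratio = women_rate / men_rate     # 0.625

# The four-fifths rule flags ratios below 0.8 as potential adverse impact.
if di_ratio < 0.8:
    print(f"Potential bias: disparate impact ratio = {di_ratio:.3f}")
```

Here the ratio is 0.625, well below the 0.8 threshold, so the model's outcomes would warrant a closer look before anyone concludes the selection was merit-based.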

EXAMPLE #2: Explainability

A customer is rejected for a loan, while his neighbor is approved for the exact same loan, with excellent terms. Can the business explain the rejection to the customer? The company needs to understand which variables drive the AI's decision and be able to explain them in human terms.
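As a rough illustration of what explaining a decision "in human terms" can look like, the sketch below computes per-feature contributions for a linear scoring model. All weights, feature names, and values are invented for this example; attribution tools such as SHAP generalize the same idea to more complex models.

```python
# Minimal sketch of per-feature "reason codes" for a linear scoring model.
# All names and numbers here are hypothetical illustrations.

# Hypothetical model weights over normalized features.
weights = {"income": 0.4, "debt_ratio": -0.5, "years_employed": 0.2}

# Population-average feature values (the baseline we explain against).
baseline = {"income": 0.55, "debt_ratio": 0.35, "years_employed": 0.50}

# The rejected applicant's normalized feature values.
applicant = {"income": 0.42, "debt_ratio": 0.60, "years_employed": 0.10}

# Contribution of each feature = weight * (applicant value - baseline value).
# The most negative contributions become human-readable reasons for rejection.
contributions = {f: weights[f] * (applicant[f] - baseline[f]) for f in weights}

for feature, c in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"{feature:>15}: {c:+.3f}")
```

In this toy case, the applicant's high debt ratio contributes most negatively to the score, which is exactly the kind of statement a business could relay to a rejected customer.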

EXAMPLE #3: Accuracy

What could be considered fraudulent activity one month may not be the next (e.g., the surge in online shopping during COVID-19). How can we ensure that ML/DL models remain accurate and aren't driving bad business decisions? Companies need to make sure their AI stays accurate, no matter how external conditions change.
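One common way to quantify this kind of shift is the Population Stability Index (PSI), which compares the distribution a model was trained on with what it sees in production. The sketch below is a self-contained illustration; PSI and its thresholds are an assumption here, not a method the article names.

```python
# Minimal sketch of drift detection via the Population Stability Index (PSI),
# one common drift metric (an illustration, not necessarily IBM's approach).
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a training-time distribution and live traffic."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf  # cover the full real line
    e_pct = np.histogram(expected, cuts)[0] / len(expected)
    a_pct = np.histogram(actual, cuts)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0) and division by zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train = rng.normal(100, 15, 10_000)  # e.g., pre-pandemic order amounts
live = rng.normal(130, 25, 10_000)   # shopping behavior has shifted

score = psi(train, live)
# Common rule of thumb: PSI above 0.25 signals a shift worth retraining for.
print(f"PSI = {score:.3f} -> {'drift detected' if score > 0.25 else 'stable'}")
```

A monitoring setup would run a check like this on a schedule, so the business learns about drift before it shows up as bad decisions.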

CHALLENGE          | EXPLANATION                                                           | RISK
Fairness ("Bias")  | Am I certain that the model does not accidentally discriminate?      | Lawsuits, bad press, backlash from customers, social responsibility issues, etc.
Explainability     | Can I explain the outcome when asked?                                | Regulatory and compliance issues, lawsuits, social responsibility issues, etc.
Accuracy ("Drift") | Conditions change. Is my model still accurate? Has the data changed? | Bad business decisions based on AI recommendations, AI failures affecting business operations, etc.

A surprising truth: data scientists are not the answer to AI issues

It might be surprising, but data scientists are not the ones responsible for solving these issues.

Fairness, explainability, and accuracy generally fall outside the scope of a typical data scientist's responsibilities. Data scientists focus on building and training models, not on what happens after deployment.

Compare this to a sluggish website: how do you determine what is responsible? Was the site badly designed? Is the network malfunctioning? Is the computer itself broken?

Like a slow website, the issue of AI model trust and transparency is not as simple as it seems, and requires a specialized team of experts to analyze and correct it.

What we do know is that ignoring the issue will not make it go away.

Deal Opportunity: IBM can help and train clients around AI

As Deloitte discovered in their report, most companies do not have a solution at hand to address the issues around AI trust and transparency.

But IBM does.

IBM's Data and AI experts have the know-how to help clients with their AI/ML use cases, while also training them on best practices and skills. Our experts can even help clients set up an 'AI Center of Excellence', or AI Factory, which creates a solid information architecture for sustained AI success.

Once work with a client is underway, these experts are also well positioned to make a strong case for IBM's state-of-the-art AI products (e.g., the broader capabilities in Cloud Pak for Data) if the client happens to be using a competitor's solution.

IBM views offering peace of mind to unsure business leaders as an opportunity to pave the way for full-scale AI adoption.

Just like any other purchase decision involving multiple stakeholders, removing buying obstacles is critical to moving forward and closing deals.

Take action now: Don’t wait until it’s too late

Full-scale AI adoption does not stop at helping customers buy the right IBM products for building models. Those models need to be brought to production (MLOps) and continually monitored for trust and transparency by properly trained staff.

Investing in data science and AI without establishing proper checks & balances is playing with fire.

Without proper safeguards, issues will eventually arise, and it will be too late to address them without causing significant damage such as negative press, government action, declining revenue, and more.

IBM is ready to take on the challenge and help the world prepare for the future of trustworthy AI—are you with us?
