AI Has a Trust Problem. Here’s How to Fix It. – SPONSOR CONTENT FROM FORRESTER



Artificial intelligence is at a pivotal moment. The rapid emergence of generative AI has unleashed an avalanche of predictions about AI’s growth and its impact on business and society. But one issue might keep your organization from scaling AI to its full potential: trust.

More than half of consumers think AI poses a serious threat to society. Without trust in AI, both inside your enterprise and beyond it, the technology will never scale to its potential.

The solution to this challenge is “trusted AI”: AI designed, developed, deployed, and governed to meet diverse stakeholder needs for accountability, competence, consistency, dependability, empathy, integrity, and transparency. Adopting trusted AI gives businesses a well-defined strategy for deploying AI in a way that reaps its benefits while minimizing risk and doubt.

The Trust Gap

In the context of AI, trust means confidence in a certain outcome. Users of AI, whether software developers building applications or consumers interfacing with chatbots, must have confidence that the outputs of the AI they’re using are accurate, unbiased, and useful. Inaccurate or unexpected outputs, from genAI hallucinations to errors in text-based results, along with embedded bias, are among the top concerns curtailing business executives’ trust in AI.

How big is the trust gap in the enterprise? In a recent Forrester survey, 25% of data and analytics decision-makers said that a lack of trust in AI systems is a major concern in using AI, and 21% cited a lack of transparency in AI/machine learning (ML) systems and models.

Consumers, however, are much more skeptical: they want to know where AI resides in their purchasing path and want more visibility into how the organizations they interact with use it. A mere 28% of online adults in the U.S. say they trust companies using AI models with their customers, while 46% say they don’t. And more than half (52%) say they feel “AI poses a serious threat to society.”

For the growing number of organizations that see AI as a key component of their growth strategy, closing this trust gap is essential.

Building Trusted AI

Despite so much doubt and mistrust in the market, pulling back on AI initiatives at this critical juncture might be the biggest mistake a business could make. Given AI’s enormous potential, the answer to these challenges is not to adopt less AI but to adopt more trusted AI.

Trusted AI, as defined above, rests on “seven levers of trust”:

1. Transparency: To many users, AI is a black box. Explainable AI techniques can improve model transparency and interpretability (a brief sketch of one such technique appears after this list).

2. Competence: AI is probabilistic. Machines learn from real-world data and thus reflect the uncertainty inherent in the world. Business leaders employing AI need to get comfortable with the fact that its predictions are probabilities, not certainties (the probability sketch after this list illustrates the difference).

3. Consistency: “Model drift” occurs when a model’s performance degrades over time because the data it sees in production shifts away from the data it was trained on. The best way to ensure AI’s consistency is to embrace ModelOps, the tools, technology, and practices that help organizations efficiently deploy, monitor, retrain, and govern AI models (a simple drift check is sketched after this list).

4. Accountability: AI will never be perfect. So if your organization’s AI does go awry (as when a chatbot for an eating-disorder support site recommended that visitors start counting calories), take responsibility, explain what went wrong and why, and take clear steps to avoid repeating the mistake.

5. Integrity: Appointing a chief ethics or trust officer can help guide your organization’s AI efforts and build trust both internally and externally. Even without such a position, an organization should clearly define which role is responsible for AI integrity.

6. Dependability: Trust in AI means having confidence in its results, and dependability breeds that confidence. The most effective way to bolster AI’s dependability is to stress-test models in simulated situations before they face the real world (see the edge-case sketch after this list).

7. Empathy: Involving a broad and diverse group of stakeholders to test models and incorporate their feedback can reduce bias and build empathy for users and customers into AI models (a per-group evaluation sketch follows this list).
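
To make the transparency lever concrete, here is a minimal sketch of one widely used explainable-AI technique, permutation importance, using scikit-learn. The dataset and model are illustrative stand-ins, not a recommendation of any particular stack.

```python
# A minimal explainability sketch: permutation importance shuffles each
# feature in turn and measures how much model accuracy drops. Large drops
# flag the features the model genuinely relies on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)  # stand-in dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])[:5]
for name, score in top:
    print(f"{name}: importance {score:.3f}")
```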
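
For the competence lever, the sketch below contrasts a model’s hard class label with the probability behind it. The synthetic data and logistic regression are assumptions chosen purely for illustration; the point is that the probability, not just the label, should inform business decisions.

```python
# A minimal sketch of probabilistic prediction: the same model that emits a
# hard label can also report how confident it actually is.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)  # synthetic data
model = LogisticRegression().fit(X, y)

label = model.predict(X[:1])[0]        # the single hard answer
proba = model.predict_proba(X[:1])[0]  # the uncertainty behind that answer

print(f"Predicted class: {label}")
print(f"Class probabilities: {proba}")  # e.g. ~[0.3, 0.7], not a certainty
```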
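
For the consistency lever, one common drift check compares the live distribution of an input feature against its training-time distribution. This sketch uses a two-sample Kolmogorov-Smirnov test from SciPy; the synthetic feature values and the 0.01 alert threshold are illustrative assumptions, and a real ModelOps pipeline would monitor many features and model outputs continuously.

```python
# A minimal drift check: compare one feature's training distribution with
# what the model sees in production using a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # at training time
live_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)   # in production

statistic, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:  # assumed alert threshold
    print(f"Possible drift (KS statistic {statistic:.3f}); consider retraining.")
```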
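
For the dependability lever, here is a sketch of the kind of simulated pre-deployment check described above: probe a model with synthetic edge cases before it ever sees real traffic. The model and the specific edge cases are illustrative assumptions.

```python
# A minimal pre-deployment simulation: feed a model deliberately extreme
# inputs and verify it still returns well-formed predictions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

edge_cases = np.array([
    [0.0, 0.0, 0.0, 0.0],      # all-zero input
    [1e6, 1e6, 1e6, 1e6],      # extreme magnitudes
    [-1e6, 1e6, -1e6, 1e6],    # mixed extremes
])
probs = model.predict_proba(edge_cases)
assert np.all(np.isfinite(probs)), "model produced non-finite probabilities"
print("All simulated edge cases returned well-formed predictions.")
```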
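
Finally, for the empathy lever, a simple per-group evaluation can surface bias that an aggregate accuracy number hides. The labels, predictions, and group assignments below are toy values; in practice the groups would come from the diverse stakeholders the item above calls for.

```python
# A minimal per-group check: compute accuracy separately for each subgroup.
# A large gap between groups is a signal to revisit the training data and
# gather feedback from the underrepresented group.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])  # toy segments

for g in np.unique(group):
    mask = group == g
    accuracy = np.mean(y_true[mask] == y_pred[mask])
    print(f"group {g}: accuracy {accuracy:.2f}")
```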

Of course, which of these levers your company focuses on will depend on factors such as your industry and your goals. But long-term success with AI won’t be measured by how many tools your organization deploys or how quickly you deploy them. The AI success stories will belong to the businesses that gain true value from AI, and that value depends on how much your customers and employees trust the technology.


Looking for the right partner to help you along your AI journey? Get to know Forrester.


