Can we trust AI if we don’t trust each other?

Wall Street Journal bestselling author Helen Yu shares reflections on the inner workings of AI, the causes of distrust, and the route organizations can take to nurture trust

July 13, 2021

30-second summary:

  • AI is only as effective and trustworthy as the quality of its data and the people handling it
  • Countries are now developing regulations around the use of AI to make it traceable, trustworthy, and equitable
  • A humanistic approach and appropriate education around security and ethical technology can help us cross this trust threshold

Speculation about whether or not to trust AI (artificial intelligence) is widespread, and it is often limited to a dystopian view. Some say AI heralds the end of life as we currently know it. That may be true, but with change comes new beginnings. Oh, but there's the most dreaded word of all: change.

Fear is perhaps one of the easiest emotions to get caught up in when faced with a changing world. And there's little doubt that change is afoot. Tech and its capabilities are advancing, and with them, businesses and markets. People are adjusting to technology in ways they never have before.

The fact is: if we put trust in AI, we will receive it. If we build secure AI that brings humanity and technology into focus, artificial intelligence will expand its ability to be more humane. How can we ever trust a machine if we cannot trust each other? How do we make humanistic and ethical technology unless that too is prioritized in our lives and businesses?

To err is human: Why we resist trusting AI

So, what stands in the way? Truthfully, it is ourselves.

Mostly, what puts AI and data at risk is human error. The data on file is inaccurate, or not as extensive as it should be. The input systems are outdated or irrelevant. AI is only ever as effective as the quality of its data, and it is susceptible to data bias and other misrepresentations during ideation and development, leading to undesired outcomes. This becomes a problem as models are built on top of those AI systems: it is like building a house on a weak foundation that later cracks and leans.
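To make the weak-foundation metaphor concrete, here is a minimal sketch of a pre-training data audit of the kind this paragraph implies. It is written in Python with pandas; the file name, column names, and checks are hypothetical illustrations, not anything prescribed in this article.

```python
# A minimal data-quality audit sketch (hypothetical names throughout).
import pandas as pd

def audit_training_data(df: pd.DataFrame, label_col: str) -> None:
    """Flag common data-quality problems before any model is trained."""
    # 1. Missing or incomplete records: outdated input systems often show up as gaps.
    missing = df.isna().mean().sort_values(ascending=False)
    print("Share of missing values per column:\n", missing[missing > 0])

    # 2. Duplicate rows over-weight some examples and skew what the model learns.
    print("Duplicate rows:", df.duplicated().sum())

    # 3. Label imbalance is one simple, visible form of data bias.
    print("Label distribution:\n", df[label_col].value_counts(normalize=True))

# Hypothetical usage:
# df = pd.read_csv("loan_applications.csv")  # made-up file name
# audit_training_data(df, label_col="approved")
```

Checks like these are cheap, and catching problems at the ideation stage is far easier than repairing a model built on top of them.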

Another issue arises when the data is accurate and reliable, but there are security and privacy oversights. Delegating mundane tasks and information to AI feels convenient, but the safety of the data itself becomes an afterthought. This is dangerous.

Then there are bad actors who play a more malicious role: intentionally stealing data, introducing corrupt processes, and ruining the integrity of the data, and with it company reputations and finances. This destroys the trustworthiness of artificial intelligence. The victims of data theft are not the only ones who suffer; the whole world watches, wondering how safe and secure the AI systems they depend on truly are. Yet it is rarely AI alone that is at fault. By making AI trust and risk management a cross-organizational effort, AI trustworthiness can be built steadily.

Governing AI: Maintaining trustworthy systems

While many companies recognize the value of AI and adopt it into their frameworks, building trustworthy AI is a somewhat newer science. As artificial intelligence becomes prevalent in all sectors of the economy, fairness and ethics are more important than ever.

Countries are developing more rules and regulations around the use of AI. Going beyond what is merely mandatory and expected is a responsibility that all of us share: we must also do what is equitable, sustainable, and responsible. If we create artificial intelligence that is trustworthy and founded on compassionate principles and premises, then the future before us is promising.

Everyone within a company should understand the promise AI holds for elevating human compassion, and even community. AI governance is part of maintaining and upholding that trustworthiness.

Training in AI concepts, security, and privacy is a necessity in the ever-evolving technological world. This is a significant step in preventing poor or misrepresented data. Accountability and ethics should be taught alongside AI education.

A humanistic approach means knowing the difference between what is valuable and what can lead to data bias. Analysis, security, and protection should be implemented from the ideation to the modeling stage of AI.

Cross-checking and investigating both the data and how the AI responds to and functions on that data leads to valuable insights, as the sketch below illustrates. Those insights hold keys to improving data, AI systems, customer satisfaction, innovation, and even revenue growth. There is great value in governing AI so that it is traceable, explainable, trustworthy, and equitable.
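As one hedged illustration of that cross-checking, the sketch below compares a model's accuracy across subgroups of the data, a simple way to surface uneven behavior. The `model` object, feature list, and column names are hypothetical placeholders; any fitted classifier with a `.predict()` method would do.

```python
# Per-group accuracy check (hypothetical column names and model).
import pandas as pd

def accuracy_by_group(df: pd.DataFrame, model, features, label_col, group_col):
    """Compare model accuracy across subgroups to surface uneven behavior."""
    df = df.copy()
    df["pred"] = model.predict(df[features])
    # Accuracy computed separately for each subgroup (e.g., region or age band).
    return (df["pred"] == df[label_col]).groupby(df[group_col]).mean()

# Hypothetical usage:
# print(accuracy_by_group(test_df, model, ["income", "tenure"], "churned", "region"))
```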

Explainable AI

Hesitation is a common experience when contemplating the adoption of artificial intelligence. Perhaps team members and employees fear that AI will replace them, or stakeholders are apprehensive. Explainable AI makes the inner workings, processes, and predictions of AI more coherent, and that explainability builds confidence across the organization when the time for AI adoption arrives.

Part of governing AI, and ensuring that it is simultaneously valuable and ethical, is to understand it and then explain it to those within the organization. An emphasis on transparency, privacy, and security allows us to better appreciate the role AI plays in our lives… and to begin to trust it.
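One common, concrete route to this kind of explanation is permutation importance: shuffle one feature at a time and measure how much the model's score drops. The sketch below uses scikit-learn and a built-in toy dataset; both are our illustrative choices, not something this article prescribes.

```python
# Permutation importance as a simple explainability sketch.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=5, random_state=0)

# Features whose shuffling hurts the score most are what the model leans on;
# that is a plain-language answer stakeholders can actually evaluate.
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda pair: -pair[1])[:5]:
    print(f"{name}: {score:.3f}")
```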

Protecting the Data: Lessons in privacy and security

In my many conversations with tech innovators at leading companies like IBM and Microsoft, the unanimous view is that AI is only as good as the purity, or quality, of its data. Yet if essential data is being fed into AI, how is that data then protected and secured? There are many ways to ensure that data and AI systems are as safe as possible, and security and privacy should sit at the core of AI governance.

Looking into the data itself and its purpose is essential. It is just as necessary to keep track of where the information originated or was gathered, and who has received it. This creates a comprehensive record of potential data issues and makes it possible to trace them back to their source.
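A minimal sketch of what such provenance tracking could look like in code, assuming a simple in-memory record; in practice, organizations often rely on dedicated data-catalog or lineage tools, and every name below is hypothetical.

```python
# Hypothetical provenance record: where data came from, and who received it.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DatasetProvenance:
    dataset_name: str
    source: str                  # where the information originated
    collected_at: datetime
    recipients: list[str] = field(default_factory=list)

    def share_with(self, recipient: str) -> None:
        """Log every handoff so data issues can be traced back to their source."""
        self.recipients.append(recipient)

# Hypothetical usage:
record = DatasetProvenance("customer_survey_q2", "web form export",
                           datetime.now(timezone.utc))
record.share_with("analytics-team")
print(record)
```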

Training those who develop AI in privacy and security is just as important as having effective AI. They must be knowledgeable about the risks of artificial intelligence: data breaches, and AI perpetuating bias through faulty algorithms and poor-quality data, are risks to take seriously.

Training around AI is key

Everyone in an organization should receive training on privacy and security, in addition to ethics; the latter is a motivator for keeping data safe from potentially unethical hackers and algorithms. Encrypting datasets, training materials, and processes is a best practice at every stage of the AI lifecycle. Making artificial intelligence safer and more secure will allow us to better trust and manage it.
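As a hedged example of what encrypting a dataset at rest can look like, the sketch below uses the Fernet recipe from Python's `cryptography` package, which provides symmetric, authenticated encryption. The file names are hypothetical, and in any real deployment the key belongs in a secrets manager, never in source code.

```python
# Encrypting a dataset at rest with Fernet (hypothetical file names).
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice: load from a secrets manager
fernet = Fernet(key)

with open("training_data.csv", "rb") as f:       # hypothetical dataset file
    ciphertext = fernet.encrypt(f.read())

with open("training_data.csv.enc", "wb") as f:   # encrypted copy on disk
    f.write(ciphertext)

# Decryption fails loudly (InvalidToken) if the ciphertext was tampered with,
# which is the integrity guarantee data governance calls for.
plaintext = fernet.decrypt(ciphertext)
```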

How can we trust AI if we can’t trust each other?

Ultimately, AI is as trustworthy as the people behind it. That is why a focus on humanity in tech is especially essential on the current world stage. We are beginning to "teach" AI what it will become, and adjusting to those changes ourselves.

Truly intelligent AI is far off, but it no longer feels like science fiction either. AI that brings compassion, ethics, accountability, and security into view is invaluable. It is our responsibility to govern AI beyond the rules and regulations expected of us, so that it is exceptionally fair. Recognizing its pitfalls, such as insufficient data or bad algorithms, and identifying AI's more vulnerable points helps us prepare for unpredicted or unwanted outcomes. Confirming that AI is cohesive, explainable, and easy to understand allows us to trust it more readily. Ensuring data is secure and accurate is a necessary part of making sure that it is, in turn, ethical.

We also must practice more kindness and compassion with our fellow humans. We can only trust a machine as much as we can trust ourselves. That concept can be both frightening and enlightening. Navigating a world where technology intersects with every aspect of our lives confronts our humanity, in a sense. We have more information available to us than ever before, and we are faced with the complexity of ourselves, our uniqueness and our similarities reflected back at us. Perhaps that is the true fear of AI: it will reveal more about ourselves than we wish to know.

I do not think this revelation will be something to fear. We can use AI to create a more humane world and future for all of us. In fact, I fervently believe that it is at the crossroads of technology and humanity where we find growth.

Lack of trust in building a better future stifles innovation. Looking forward with optimism and hope, and putting trust in the unknown, is a mindset that facilitates growth and compassion, allowing us to become better people. Recognizing where we have room to improve gives way to greater self-awareness, and that, too, leads to growth. Knowing where we refuse to trust others in our lives, and where we are most vulnerable, helps to cultivate greater empathy.

Trusting AI is the easier thing to do. Trusting each other is perhaps more challenging, but it is what we are called to do if we are to build a solid foundation for the future of work and life.


Helen Yu is a Wall Street Journal bestselling author and keynote speaker. She has been named a Top 10 Global Influencer in Digital Transformation by IBM, Top 50 Women in Tech by Awards Magazine, a Top 100 Women B2B Thought Leader in 2020 by Thinkers360, and Top 35 Women in Finance by Onalytica. You can find Helen Yu on Twitter @YuHelenYu.
