
AI is driving a new business function: Digital ethics

Big data and AI call for a new business competency: ethics. With cyberthreats and new regulation, 2018 has been a pivotal year, and leading organizations are rising to meet the moment.

Artificial intelligence (AI) and machines’ ability to “learn” mark a new chapter in digital transformation: they breathe new life into the potential of unstructured data and software, and signal a profound shift in interfaces and customer experience. They also introduce unprecedented risks and societal questions that enterprises have yet to confront.

2018 has marked a pivotal year in tech, a year in which such questions and risks have become mainstream. From revelations of fake news and mass manipulation, to ever-more-pernicious cyberthreats, to unprecedented regulatory moves, tech is in the crosshairs.

Despite a widespread crisis of confidence, enterprise preparation for AI has centered almost exclusively on data prep and data science talent. But true enterprise preparedness for AI must ready the broader organization, chiefly its people, processes and principles. Now more than ever, the mass automation enabled by big data and AI calls for a new business competency: a formalized and grounded approach to ethics.

[Figure: AI ethics categories: bias, transparency, organization]

1. Organizational Support: Scope the role and charter of the Ethics function

While virtually all enterprises have a compliance team, the past few years have brought about a realization that the scope of ethics is distinct.

While compliance asks if a company can act in a certain way, the role of an ethics function is to ask whether a company should act in a certain way.

More and more, companies are formalizing the role of Chief Ethics Officer or establishing internal or external ethics boards. Microsoft, Facebook and law enforcement weapons manufacturer Axon have all assembled teams dedicated to addressing AI ethics in the products they’re building. These roles are important for guiding executives and the organization through critical discussions and process development:

  • What could happen? Analyze scenarios across emerging technologies, identify risks and legal issues
  • What would ensue? Think through ethical issues and implications of products/services (including conflicts between professed values and underlying business model)
  • How can we support responsible stewardship? Develop ethical guidelines for companies, even a code of ethics
  • How can we support day-to-day decision-making and accountability? Institute programs such as training courses, design thinking groups, scenario or social systems analyses, or audits
  • What skills are we missing? Expand ethical awareness by diversifying teams. In medicine, for example, ethics teams don’t just include doctors and lawyers, but educators, philosophers, designers, psychologists, sociologists and artists 

2. Bias: Address biases within and without

Biases can and will exist across AI interactions: in the development of algorithms, services, form factors and machine “personalities,” as well as in the context of user experience. After all, machine logic is inherently discriminatory in that it intrinsically discerns, categorizes, separates and recommends. This is not a reason to reject the technology, but it is a reason to assess bias more deeply than most businesses are accustomed to doing.

When it comes to human biases, some are conscious preferences, while others are far more unconscious prejudices. Programmer bias can directly impact how models are built and which data sets are used for training, but all of us influence how bias plays out in AI, because we all contribute interactions to these systems.

When it comes to data and algorithmic biases, the data used to train machine learning systems can be inadvertently weighted to over- or under-represent certain data points or populations. Sometimes this is a function of data access; other times it reflects deeply entrenched societal biases; still other times, it is merely a bias toward revenue.

Here the Ethics function should assume a leading role, not only in driving awareness of these issues, including their impact on the bottom line, but also in developing processes to identify and document biases and to research mitigation techniques. Here are a few best practices companies can apply:

  • Conduct a “pre-mortem” or pre-design workshop or roundtable to identify how AI systems could be abused, hijacked or cause harm to users.
  • Audit training data. Transparency into training data is key, so that teams can see what biases are built in and account for them (a sketch follows this list).
  • Invest in techniques to mitigate over- or under-representation in data, such as testing different sampling, learning and anomaly detection methods, testing different algorithms for different groups, and identifying those most likely to be excluded.
  • Use tech to address issues with tech. Developing AI-based tools to recognize and measure bias has become a common tactic among tech companies like Microsoft, Accenture, IBM and Facebook, as well as numerous startups.
  • Some organizations deploy in-house teams to support training, best practice sharing and R&D with ethical design support. For example, Microsoft’s Inclusive Design Program offers principles, frameworks, education, activities, examples and suggested stress tests “to shift design thinking towards universal solutions.”
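As a concrete illustration of the auditing and per-group testing practices above, here is a minimal sketch, assuming a hypothetical tabular dataset (training_data.csv) with a demographic “group” column, a binary “label” column and feature columns; it uses pandas and scikit-learn to audit group representation and compare model behavior across groups:

```python
# A minimal sketch: audit group representation in training data, then
# compare a model's behavior per group. The dataset and column names
# ("group", "label") are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("training_data.csv")  # hypothetical dataset

# 1. Audit representation: group sizes and base rates of the positive label.
print(df.groupby("group")["label"].agg(["count", "mean"]))

# 2. Train a model, then measure accuracy and selection rate per group.
X = df.drop(columns=["label", "group"])
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, df["label"], df["group"], test_size=0.3, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

results = pd.DataFrame(
    {"group": g_te.values, "actual": y_te.values, "predicted": model.predict(X_te)}
)
# Large accuracy gaps between groups suggest some groups are under-served.
print(results.groupby("group").apply(
    lambda r: accuracy_score(r["actual"], r["predicted"])
))

# "Four-fifths rule" style check: ratio of lowest to highest selection rate.
selection = results.groupby("group")["predicted"].mean()
print("Disparate impact ratio:", selection.min() / selection.max())
```

None of these numbers proves or disproves bias on its own, but tracking them over time gives the Ethics function something concrete to document and interrogate.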

3. Transparency: Drive clarity in the face of opacity and complexity

While contractual agreements and audits have long been part and parcel of legal and compliance work, AI and automation introduce a new class of threats around which businesses are well advised to exercise much greater transparency.

From new interfaces like voice or biometrics to automated tagging, profiling and process execution, AI ushers in a universe of new questions and liabilities with little legal precedent. AI explainability (the ability to see “inside” AI systems and understand what factors, weighting and parameters determined a given outcome or decision) is a significant challenge, particularly for regulated industries. This opacity undermines accountability, complicates regulatory compliance and anti-discrimination efforts, weakens consumer protections, and makes errors in the model harder to detect.
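Explainability tooling is maturing. One widely used technique is permutation importance, which estimates how heavily a model leans on each input factor; the sketch below demonstrates it with scikit-learn and a bundled public dataset standing in for a real system, as an illustration of the technique rather than a prescription:

```python
# A minimal sketch of one explainability technique: permutation importance.
# The dataset and model here are stand-ins for a real production system.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature in turn and record the drop in held-out score:
# the bigger the drop, the more the model's decisions depend on that factor.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```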

Another area demanding increased transparency is AI’s impact on regulatory compliance, under both existing and new data protection regulations such as the EU’s General Data Protection Regulation (GDPR). The GDPR expressly limits automated profiling and requires that data controllers provide users with “meaningful information about the logic involved, as well as the significance and envisaged consequences of such processing for the data subject,” among a host of other new rules.
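What “meaningful information about the logic involved” looks like in practice is still being settled by regulators. One illustrative pattern (the field names below are hypothetical, and this is a sketch, not a vetted compliance recipe) is to log a plain-language decision record alongside every automated decision:

```python
# A sketch of a plain-language record logged alongside each automated
# decision. Field names are hypothetical; this is not legal advice.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AutomatedDecisionRecord:
    subject_id: str         # pseudonymous reference to the data subject
    decision: str           # outcome of the automated processing
    top_factors: list[str]  # plain-language factors that drove the outcome
    consequence: str        # envisaged consequence for the data subject
    model_version: str      # which model/weights produced the decision
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AutomatedDecisionRecord(
    subject_id="anon-4821",
    decision="loan_declined",
    top_factors=["debt-to-income ratio above threshold", "short credit history"],
    consequence="application declined; manual review available on request",
    model_version="credit-risk-2018.06",
)
print(record)
```

Such a record gives compliance teams and front-line employees something concrete to surface when a data subject exercises their rights.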

Ethics and compliance teams must work in tandem with IT, Product, Security, and beyond, not only to adhere to laws but also to align systems with organizational values. Instead of burying communications about data in the Terms of Service, organizations must address trade-offs head on, explain the value of data partnerships, develop processes for data quality assurance across partners, be transparent about the use of AI interfaces, and equip front-line employees with the relevant training, content and communications.

[Figure: quarterly news mentions of “AI” or “artificial intelligence” and “ethics,” 2014-2018]

AI is a long game, but now is the time to allocate resources to ethics

It is not a coincidence that the companies leading the charge in AI are the ones signaling their efforts to address AI ethics to the market. It was only in June 2018, after numerous failures and intense scrutiny, that Google, one of the world’s leading organizations in the development of AI for nearly a decade, published seven principles “that actively govern our research and product development and will impact our business decisions.” Organizations earlier in their AI journey can take advantage of these learnings by allocating resources to address ethics early on. Ethical preparedness for AI starts upstream: cultivating ethics in the culture of the organization, empowering people to do the right thing, and addressing issues with transparency, even when other corporate pressures run counter.

Jessica Groopman is Founding Partner & Industry Analyst at Kaleido Insights.
