In a world where algorithms make decisions and data fuels innovation, ethical considerations are more critical than ever. Businesses must balance using new technology for competitive advantage with preserving integrity and protecting customers.
In our podcast Insight Story, experts Tomoko Yokoi (Switzerland), senior business executive and researcher at the Global Centre for Digital Business Transformation, IMD Business School, and Andy Crouch (UK), consultant and co-founder of ethical-AI natural language processing company Akumen, outline how AI bias can impact businesses and what steps they can take to ensure fairness. Kaspersky Global Research and Analysis Team’s Dr. Amin Hasbini expands on the implications for privacy and responsible data use.
Not all AI is created ethically equal
Andy’s company Akumen found a problem that needed solving. “Scores out of five are useful, but we wanted insight from written responses like product reviews, and there was no way to do it. The team created an AI solution to identify meaning like topics, emotions and sentiment. Sentiment measures opinion – positive, negative or neutral – but emotions drive behavior. It works on text feedback anywhere, which might be about consumer goods, healthcare or anything else.”
Their approach uses AI differently from generative AI tools like ChatGPT. “Our AI is rule-based, human-created and human-curated. It’s completely transparent and there are no algorithms as with large language models. We can dive in and make rules more nuanced if we recognize bias. With large language models, that would be complex and expensive.”
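As a thumbnail of what “rule-based, human-created and human-curated” can mean, the sketch below matches feedback text against a small hand-written emotion lexicon. The emotions, keywords and scoring are illustrative assumptions, not Akumen’s actual rules; the point is that every result traces back to a named rule a human can inspect and edit.

```python
import re

# Illustrative, hand-curated rules (assumed for this sketch, not Akumen's).
# Each emotion is a named, inspectable rule a reviewer can refine if it
# looks biased or too crude.
RULES = {
    "frustration": {"keywords": {"waiting", "slow", "again", "useless"}, "sentiment": -1},
    "trust": {"keywords": {"reliable", "recommend", "consistent"}, "sentiment": 1},
    "relief": {"keywords": {"finally", "sorted", "resolved"}, "sentiment": 1},
}

def analyze(text: str) -> dict:
    """Return matched emotions and a net sentiment score, with the exact
    keywords that fired, so anyone questioning a result can see why."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    matched = {
        emotion: sorted(words & rule["keywords"])
        for emotion, rule in RULES.items()
        if words & rule["keywords"]
    }
    sentiment = sum(RULES[e]["sentiment"] for e in matched)
    return {"emotions": matched, "sentiment": sentiment}

print(analyze("Still waiting, the app is slow again"))
# 'frustration' fires on 'again', 'slow', 'waiting'; net sentiment -1
```

The trade-off is coverage: hand-written rules only catch what their authors anticipate, which is why ongoing human curation is part of the approach.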
Andy expands on generative AI’s limits for truly understanding people. “We asked ChatGPT how many emotions humans experience – it said 138,000. That doesn’t help us understand what drives behavior. Our platform has 22 emotions – enough to see what drives behavior. Through our partner, Civicom, we’re helping the UK’s National Health Service (NHS) to understand what patients and staff experience.”
And that understanding can improve lives:
Using AI to understand people’s emotions and what they’re talking about, you can quickly extract reliable insights. And if anyone questions things, you can show why the system’s highlighted something and, if needed, modify it.
Andy Crouch, consultant and co-founder, Akumen
Large language models use big data pools, but there are also more contained, enterprise-level tools, like ChatGPT Enterprise, that businesses can furnish with their own data while controlling how it’s used.
Tomoko sees enterprise-level tools as useful but notes they can’t do what big data can do. “Organizations are developing new functions around AI, like data annotators, who clean data before it goes into models. But is it foolproof? The beauty of using data from everywhere is it gives you insights you otherwise wouldn’t get.”
Choosing ethical suppliers
Luckily for companies developing and using AI ethically, more businesses are adopting digital responsibility policies and choosing ethics-first suppliers.
Tomoko gives an example. “Deutsche Telekom has been a pioneer in AI ethics. They’ve trained all employees to ensure AI ethics are distributed throughout the organization. At the same time, they have about 300 suppliers and ensure it’s in all their contracts. So it goes beyond the boundaries of the company.”
But many businesses don’t know where to start. Tomoko says, “Over 250 companies have committed to AI ethics, but codified mechanisms only help if they change behavior. How can we live these principles and ideals? External experts can help, and there’s a case for individuals taking responsibility, which will have a collective impact.”
She suggests that how companies frame AI ethics matters. “You can see AI ethics as a value or as compliance. If it’s compliance, it will be cost- or risk-driven. But AI ethics could also be a competitive advantage.”
Andy compares AI ethics to health and safety. “If you have a health and safety director, it’s only one person’s responsibility. Change won’t happen unless everyone understands health and safety’s importance, and especially that it drives productivity and revenue.”
The competitive advantage is real. McKinsey research found 72 percent of customers considered a business’s AI policy before making an AI-related purchase.
Tomoko highlights the importance of backing up policies with action.
Companies making public commitments must change as an organization, embedding new practices. Have a grand goal of committing to AI ethics and digital responsibility, but divide it into tangible, more easily executed sub-goals.
Tomoko Yokoi, senior business executive and researcher, Global Centre for Digital Business Transformation, IMD Business School
Which AI issues should companies care about?
Tomoko outlines three places to look. “First, consider the software development lifecycle. If you’re considering developing an AI product, think of how it’s designed. Look for bias in the data.
“Second, once it’s being developed, although many companies say they’re implementing AI ethics, people developing AI-driven products don’t know how to apply those principles. So, look at how people use ethical principles in day-to-day software development.
“Third, we test products in controlled environments, but once a product launches, ask who is monitoring it, how we ensure it doesn’t acquire bias, and whether people are using it correctly.”
Tomoko is part of IMD Business School and knows that what future executives learn about AI ethics will shape how tomorrow’s companies behave with AI. She says, “First, we say everyone has a responsibility to these issues that goes beyond the company. You need to be aware of this responsibility, but also be able to make others in your team aware.”
Second, “What type of organizations do we want to build? We coach people to be able to handle multiple goals – not only profit but also social, environmental and ethical goals. We want them to walk away thinking of the future.”
Andy drills down into the data AI is using. “Understand how the AI model is built. Is the data you’re analyzing through that AI model ethically sourced, and are you using it ethically? The lack of transparency around large language models is rife with ethical risk and bias.”
AI training data bias can have life-threatening impacts: poor AI translations have been found to jeopardize asylum claims. Andy sees retrieval-augmented generation (RAG), which grounds outputs in vetted, proofed datasets, as part of the solution.
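To make the pattern concrete, here is a minimal sketch of RAG: retrieve passages from a curated corpus and hand only those to the model as context. The corpus, the word-overlap scoring and the prompt template are illustrative assumptions; a production system would use embedding search and a real language model.

```python
import re

# Illustrative vetted corpus - stand-ins, not real NHS or legal content.
VETTED_DOCS = [
    "Appointments can be rebooked online or by calling the clinic.",
    "Interpreters are available for asylum interviews on request.",
    "Patient feedback is reviewed weekly by the quality team.",
]

def tokens(text: str) -> set[str]:
    """Lowercased word set, used for crude relevance scoring."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank vetted documents by word overlap with the query - a simple
    stand-in for the embedding search a production RAG pipeline uses."""
    q = tokens(query)
    return sorted(docs, key=lambda d: len(q & tokens(d)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble what the language model actually sees: the question plus
    only the retrieved, proofed context."""
    context = "\n".join(retrieve(query, VETTED_DOCS))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("When is patient feedback reviewed?"))
```

Constraining the model to a proofed context doesn’t remove bias entirely, but it makes an answer’s sources auditable.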
Can we have secure and well-regulated AI?
Dr. Amin Hasbini, Head of Research Centre, Middle East, Turkey and Africa for Kaspersky Global Research and Analysis Team, thinks AI ethical standards are needed. “AI won’t self-define its ethics. It must be programmed with ethical standards.”
Since there is almost no way the public can evaluate, critique or improve AI ethics, regulation must play a part, according to Amin. “We need security and safety by design, and continuous verification of it. That would require transparency, especially from big tech vendors, and letting the public influence how these technologies develop.”
He likens the challenge to that of regulating social media. “We’re asking people to adopt technologies that can do much damage without giving them ways to ensure that doesn’t happen. The same has happened before with social media, with it being used for data leaks and fake news. European Union regulation is moving fast around AI, but AI could be much more dangerous than social media – we need rules now.”
For improved ethical data use, Amin recommends asset management controls. “If well deployed, asset management controls allow data to be classified, including which is available to AI, which can be shared publicly and which needs to stay inside the organization.”
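As a sketch of what that could look like in practice, the snippet below labels assets with classifications and applies a deny-by-default check before any AI tool touches them. The class names, register entries and policy are illustrative assumptions, not a specific Kaspersky control.

```python
from enum import Enum

class DataClass(Enum):
    PUBLIC = "may be shared publicly"
    AI_ALLOWED = "may be processed by approved AI tools"
    INTERNAL_ONLY = "must stay inside the organization"

# A human-curated asset register (hypothetical filenames and labels).
ASSET_REGISTER = {
    "marketing_brochure.pdf": DataClass.PUBLIC,
    "anonymized_feedback.csv": DataClass.AI_ALLOWED,
    "customer_records.db": DataClass.INTERNAL_ONLY,
}

def ai_may_process(asset: str) -> bool:
    """Deny by default: unregistered or internal-only assets never
    reach an AI tool."""
    return ASSET_REGISTER.get(asset) in (DataClass.PUBLIC, DataClass.AI_ALLOWED)

for asset in ["marketing_brochure.pdf", "customer_records.db", "unknown.xlsx"]:
    print(asset, "->", "AI allowed" if ai_may_process(asset) else "blocked")
```

The deny-by-default check matters: anything missing from the register is blocked rather than silently exposed.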
Andy says regulation is hard in this fast-moving space because no one knows what’s coming next. “I question anyone saying they know what will happen in the next six months or beyond. But there’s a lot of fear and lobbying going on – so go slow. If your AI-driven capability can’t deliver because it’s non-compliant, ethically or otherwise, it will be damaging.”
However, he believes regulation is necessary. “It will be interesting to see how they regulate something that’s not easily defined and morphs quickly, but we must protect those who need protecting.”
Kaspersky has recently proposed six principles for ethical use of AI in the cybersecurity industry with transparency at the core.
Getting started with AI ethics
Our experts have straightforward advice for those business executives yet to approach AI ethics.
Tomoko says, “As a mindset, remember the analog and digital worlds are the same. Your analog-world values should extend into the digital world.”
Andy highlights the need for both widespread knowledge and deep expertise. “Get your whole team conversant with AI, but have a well-informed friend who lives and breathes this stuff to call when there are challenges.”
With headlines about AI taking our jobs and AI pioneers like Geoffrey Hinton sounding the alarm on the perils of unregulated AI, it’s easy to write off AI ethics as a problem too hard to fix. But these complex issues need to be a priority.
There are green shoots of change. In December 2023, the AI Alliance launched to focus on developing AI responsibly, including safety and security tools. Its 50 members include Meta, IBM, CERN and Cornell. The message may be, ‘Let’s not move so fast that we break things.’ With OpenAI, creators of ChatGPT, not invited to the party, could the tortoise of collective corporations beat the nimble hare of innovation?
AI gives businesses the potential for great gains, but it comes with serious risks to reputation, security and privacy. With strong ethical AI policies translated into action and widespread knowledge among employees, businesses can take advantage of AI’s many benefits with greater confidence.