Digital transformation

Can AI help mend trust in business damaged by trolling and fakes?

Bad actors spreading fakes and disinformation on social media have sent businesses on a quest for online authenticity. AI-based authentication tools could help.

In 2021, artificial intelligence (AI) might finally have become better than humans at being human: a study found that people rate AI-generated fake faces as more trustworthy than real people’s faces.

The researchers warn that their findings point to a threat to the public from widespread use of the technology, and they recommend stronger safeguards, such as restricting the code to those with a legitimate reason to use it.

As deepfakes and manipulated imagery increase, we’ve seen a counterbalance: businesses wanting to communicate with customers in a more authentic voice. That also means rooting out manipulated sentiment and trolling, so governments and brands are contracting tech firms to unmask those spreading disinformation on social media.

Could we get better at spotting fakes?

In a world where four in ten teens can’t tell fake news from real, rebuilding trust must be a business priority. But the pace at which AI technology has advanced makes that complicated.

It’s now easy to mass-produce synthetic images and videos using AI systems, such as generative adversarial networks (GANs), that analyze thousands of real images to learn how to make convincing fakes. In the past, that workload would have needed many trained people.
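To illustrate the mechanism, here is a minimal, hypothetical PyTorch sketch of the adversarial idea behind such generators: a tiny generator learns to mimic a toy one-dimensional distribution by fooling a discriminator. Real face generators like StyleGAN apply the same principle with far larger convolutional networks trained on tens of thousands of photographs.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy generator and discriminator; real systems use deep convolutional nets.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(3000):
    real = 3.0 + 0.5 * torch.randn(64, 1)  # "real" data: samples from N(3, 0.5)
    fake = G(torch.randn(64, 8))           # generator maps noise to samples

    # The discriminator learns to label real samples 1 and fakes 0...
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # ...while the generator updates to make its fakes score as "real".
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, generated samples cluster near the real mean of 3.0.
print(G(torch.randn(1000, 8)).mean().item())
```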

The FBI recently warned that HR managers have spotted applicants using deepfakes to interview for remote jobs. But why?

“Fake employment applicants may be hoping to access and hack the hiring company’s IT systems,” says Kaspersky senior data scientist Dmitry Anikin. “It’s happening, but today it’s hard to do well. Creating a high-quality deepfake requires a lot of data and computing power, especially for live online meetings. We should prepare for the threat to grow, because technology doesn’t stand still.”

Fighting to maintain online credibility

All business relies on trust. So how can businesses operate with the albatross of fakery around their necks?

Luckily, companies like Truepic offer solutions. Truepic’s software development kit (SDK) integrates with apps that rely on images, verifying media in real time.

It checks image metadata – like when, where and on which device an image was created – then cryptographically signs it, embedding the validation in the file itself. That lets businesses on the back end know whether content is real or synthetic.
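As a rough illustration of that signing step, here is a hypothetical Python sketch – not Truepic’s actual implementation, which follows the C2PA open standard and is considerably more involved. The idea is to hash the image bytes together with the capture metadata and sign the digest with a private key, so any later tampering with either breaks verification:

```python
import json
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical keypair; in practice the private key would live in secure hardware.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def sign_capture(image_bytes: bytes, metadata: dict) -> bytes:
    """Hash the image together with its capture metadata and sign the digest."""
    payload = image_bytes + json.dumps(metadata, sort_keys=True).encode()
    return private_key.sign(hashlib.sha256(payload).digest())

def verify_capture(image_bytes: bytes, metadata: dict, signature: bytes) -> bool:
    """Return True only if neither the pixels nor the metadata were altered."""
    payload = image_bytes + json.dumps(metadata, sort_keys=True).encode()
    try:
        public_key.verify(signature, hashlib.sha256(payload).digest())
        return True
    except InvalidSignature:
        return False

meta = {"time": "2022-06-01T12:00Z", "device": "Pixel 6"}
sig = sign_capture(b"...raw image bytes...", meta)
print(verify_capture(b"...raw image bytes...", meta, sig))  # True
print(verify_capture(b"...tampered bytes...", meta, sig))   # False
```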

When people can manipulate images and videos, it creates a zero-trust environment where we don’t know if what we’re looking at is real or fake.

Nick Brown, vice president of product, Truepic

Truepic’s clients include credit reporting company Equifax and car manufacturer Ford. Before Ford pays out on a warranty claim, it needs to know whether photographs of car parts or damage are legitimate, manipulated or simply pulled from the web.

Brown says tagging tools like Truepic’s arrive at a time of brand and consumer frustration with social media companies doing little to stem disinformation. “We’re in that sea change now. A consumer push for authenticity is starting to pressure social media platforms to take notice.”

But firms like Truepic aren’t sitting on their hands waiting for Facebook, Twitter and TikTok to act. Truepic belongs to Adobe’s Content Authenticity Initiative (CAI), a community of tech companies, media outlets, non-governmental organizations (NGOs) and others promoting an open industry standard for proving content authenticity. This year, Adobe released a three-part open-source toolkit to get trust-building technology into developers’ hands faster.

Rooting out social media trolls

As the unreal world nudges its way into everyday reality, brands and consumers want to see trolls and bots taken off social media.

ZeroFox, Astroscreen and Cyabra use AI to identify disinformation disseminators and ‘sock puppets’ – fake profiles that create an illusion of wider support for (or opposition to) an idea or company.

Cyabra calls itself a “social media search engine designed to track and measure the impact of fake accounts.” Companies like Cyabra recognize social media platforms aren’t doing this enough.

Rafi Mendelsohn, vice president of marketing at Cyabra, says, “We look at the ‘breadcrumbs’ to find out if an account is real or not.” Breadcrumbs might be, for example, a user posting 23 hours a day or multiple accounts tweeting identical text.
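To make the idea concrete, here is a naive Python sketch of what such breadcrumb checks might look like – an assumption-laden toy, not Cyabra’s actual detection pipeline – implementing the two heuristics Mendelsohn mentions: accounts active in nearly every hour of the day, and identical text pushed by many distinct accounts.

```python
from collections import defaultdict
from datetime import datetime

# Assumed input format: each post is a dict {"user": str, "ts": datetime, "text": str}.

def flag_always_on(posts, hour_threshold=23):
    """Flag accounts that post in almost every hour of the day: humans sleep."""
    active_hours = defaultdict(set)
    for post in posts:
        active_hours[post["user"]].add(post["ts"].hour)
    return {user for user, hours in active_hours.items() if len(hours) >= hour_threshold}

def flag_copy_paste(posts, min_accounts=5):
    """Flag identical text posted by many distinct accounts (coordinated posting)."""
    users_by_text = defaultdict(set)
    for post in posts:
        users_by_text[post["text"]].add(post["user"])
    return {text: users for text, users in users_by_text.items() if len(users) >= min_accounts}

# A round-the-clock poster trips the first heuristic.
posts = [{"user": "bot1", "ts": datetime(2022, 6, 1, h), "text": "Great product!"} for h in range(24)]
print(flag_always_on(posts))  # {'bot1'}
```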

Brands using Cyabra include Warner Media, which faced a deluge of Twitter trolling around the 2017 release of its female-led superhero film Wonder Woman. “We helped Warner recognize authentic enthusiasm for the film and what was false,” says Mendelsohn.

Cyabra works with Brazilian, Colombian and US state governments to track fake narratives leading up to major elections. Mendelsohn notes, “Human analysts have done this work for years, but we can do it at scale with AI.”

In the highly publicized battle between Elon Musk and Twitter, Musk contracted Cyabra to find out how many Twitter users were fake before confirming his purchase – the analysis found about 11 percent.

Towards a more authentic internet

Those studying distrust online have found language is important in identifying disinformation. Victoria Rubin, associate professor of information and media studies at the University of Western Ontario, has tracked disinformation spreaders for several years and noticed some hallmarks. “Those who lie don’t often reference themselves in posts, and prefer to use ‘they’ in their language.”
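A crude way to operationalize that hallmark – a hypothetical sketch only; real deception-detection research relies on far richer linguistic features than this – is to compare how often a post uses first-person versus third-person pronouns:

```python
import re

FIRST_PERSON = {"i", "me", "my", "mine", "we", "us", "our", "ours"}
THIRD_PERSON = {"they", "them", "their", "theirs"}

def pronoun_profile(text: str) -> dict:
    """Rate of self-reference vs. 'they'-language in a post: a naive signal."""
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    return {
        "first_person_rate": sum(w in FIRST_PERSON for w in words) / total,
        "third_person_rate": sum(w in THIRD_PERSON for w in words) / total,
    }

print(pronoun_profile("They don't want you to know what they are hiding from us all"))
```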

She says fake news and election lies spread fast when people don’t look critically at what they see. It’s common – haven’t we all reacted to a news item after just reading the headline?

Disinformation isn’t going anywhere soon, so attacking it requires greater vigilance. Those I interviewed said education and corporate responsibility must be part of confronting it. They agree we must teach people how bad actors use new technologies to fool us into believing something’s real when it’s not.

Platforms like Twitter and Facebook must also do as much as possible to rebuild trust, says Truepic’s Brown. “Our big-picture goal is an internet with built-in authenticity infrastructure, so everyone has the right data to make decisions that affect their lives.”

About the author

David Silverberg is a freelance journalist in Toronto who writes for BBC News, Business Insider, The Washington Post, Fodor's, Vice, and many more outlets. He specializes in writing about technology, digital media, business trends, startup culture and science. He is also a published poet and touring theatre artist.