Spectrum Labs raises $32M for AI-based content moderation that monitors billions of conversations daily for toxicity

Two years into the pandemic, online conversations remain the primary daily interactions for many of us, and collectively we are having billions of them. As many of us have discovered, not all of them are squeaky-clean, positive experiences. Today, a startup called Spectrum Labs, which provides artificial intelligence technology to platform providers to detect and shut down toxic exchanges in real time (specifically, in 20 milliseconds or less), is announcing $32 million in funding. It plans to use the money to keep investing in its technology, to double down on its growing consumer business, and to push into a new area: services for enterprises covering their internal and customer-facing conversations. There, it will offer not just a way to detect when toxicity is creeping into exchanges, but also an audit trail of that activity for wider trust and safety tracking and initiatives.

“We aspire to be the leaders in language where civility matters,” CEO Justin Davis said in an interview.

The round is being led by Intel Capital, with Munich Re Ventures, Gaingels, OurCrowd, Harris Barton, and previous backers Wing Venture Capital, Greycroft, Ridge Ventures, Super{set}, and Global Founders Capital also participating. Greycroft led Spectrum’s previous round of $10 million in September 2020, and the company has now raised $46 million in total.

Davis, who co-founded the company with Josh Newman (the CTO), said Spectrum Labs is not disclosing valuation, but the company’s business size today speaks to how it’s been doing.

Spectrum Labs today works with just over 20 big platforms, including social networking companies Pinterest and The Meet Group, dating app Grindr, Jimmy Wales’ entertainment wiki Fandom, Riot Games, and e-learning platform Udemy, which in turn have millions of customers sending billions of messages to one another every day, either in open chat rooms or in more direct, private conversations.

Its technology is built around natural language processing and works in real time on both text-based and audio interactions.

Davis notes that its audio work is “read” as audio, not transcribed to text first, which gives Spectrum’s customers a significant jump in responding to the activity and counteracting what he called “The Wild West nature of voice.” Without Spectrum’s technology, responses are typically slow: a platform has to wait for users to flag iffy content, then find that audio in its transcriptions, and only then can it take action, a process that can take days.
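To make that latency gap concrete, here is a minimal sketch in Python of the two pipelines described above. The function names (classify_audio_frame, transcribe, classify_text) and all the numbers are hypothetical stand-ins, not Spectrum Labs’ actual API; the point is only the structural difference between scoring audio as it streams in and waiting on user flags plus transcription.

```python
import time

# Hypothetical stubs standing in for real models; NOT Spectrum Labs' API.

def classify_audio_frame(frame: bytes) -> float:
    """Pretend acoustic toxicity model: scores raw audio in [0, 1]."""
    return 0.97 if b"slur" in frame else 0.02

def transcribe(frame: bytes) -> str:
    """Pretend speech-to-text step used by the slower pipeline."""
    return frame.decode(errors="ignore")

def classify_text(text: str) -> float:
    """Pretend text toxicity model applied to a transcript."""
    return 0.97 if "slur" in text else 0.02

def wait_for_user_flag() -> None:
    pass  # placeholder for a delay measured in hours or days

def moderate_direct(frame: bytes, threshold: float = 0.9) -> bool:
    """Pipeline A: score the audio itself as it streams in; the
    decision is available within the same conversational turn."""
    start = time.perf_counter()
    toxic = classify_audio_frame(frame) >= threshold
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"direct pipeline: toxic={toxic} in {elapsed_ms:.2f} ms")
    return toxic

def moderate_after_flag(frame: bytes, threshold: float = 0.9) -> bool:
    """Pipeline B: the flag-then-search flow Davis describes. Nothing
    runs until a user reports the content; that wait is days, not ms."""
    wait_for_user_flag()
    return classify_text(transcribe(frame)) >= threshold

if __name__ == "__main__":
    moderate_direct(b"...slur...")
```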

This is all the more important since voice-based services — with the rise not just of podcasting but services like Clubhouse and Spaces on Twitter — are growing in popularity.

Whether text or audio, Spectrum scans these exchanges for toxic content covering more than 40 behavior profiles, which it built initially in consultation with researchers and academics around the world and continues to hone as it ingests more data from across the web. The profiles cover parameters like harassment, hate speech, violent extremism, scams, grooming, illegal solicitation and doxxing. It currently supports scanning in nearly 40 languages, Davis tells me, and he says there is no hard limit on which languages it could cover.

“We can technically cover any language in a matter of weeks,” he said.
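As an illustration of what a per-message scan across behavior profiles might look like from an integrator’s side, here is a hedged sketch in Python. The profile names come from the article; score_message, the thresholds, and the return shape are hypothetical, not Spectrum Labs’ real interface.

```python
from typing import Dict, List

# Behavior profiles named in the article; the full catalog covers 40+.
PROFILES = ["harassment", "hate_speech", "violent_extremism",
            "scams", "grooming", "illegal_solicitation", "doxxing"]

# Hypothetical per-profile thresholds a trust and safety team might tune.
THRESHOLDS = {p: 0.85 for p in PROFILES}

def score_message(text: str) -> Dict[str, float]:
    """Stand-in for a real multi-label classifier: returns one score
    per behavior profile, each in [0, 1]."""
    return {p: (0.9 if "free crypto" in text and p == "scams" else 0.05)
            for p in PROFILES}

def moderate(text: str) -> List[str]:
    """Return the profiles whose threshold the message meets or exceeds."""
    scores = score_message(text)
    return [p for p, s in scores.items() if s >= THRESHOLDS[p]]

if __name__ == "__main__":
    print(moderate("click here for free crypto"))    # -> ['scams']
    print(moderate("see you at practice tomorrow"))  # -> []
```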

The most visible examples of online toxicity have been in the consumer sphere, where they have played out as open-forum and more private online bullying, hate speech and other illegal activity. That is an area where Spectrum Labs will continue to work and to invest in technology for detecting ever more complicated and sophisticated approaches from bad actors. One focus will be improving how users themselves can play a role in deciding what they do and definitely do not want to see, alongside controls and tools for a platform’s trust and safety team. This is a tricky area: arguably, one reason toxicity has gotten out of hand is that platforms have traditionally taken a hands-off, free-speech approach and avoided meddling in content, since the other side of the coin is that they can be accused of censoring, a debate that is still very much playing out today.

“There is a natural tension between what the policy implements and what users want and are willing to accept,” Davis said. His company’s view is that the job of a platform “is keeping the worst of the worst off, but also to provide consumer […]
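One way to read that division of labor: the platform enforces a hard floor against the “worst of the worst,” while individual users tighten their own filters above it. The sketch below, in Python, is a hypothetical illustration of that layering, not a description of Spectrum Labs’ product; here a lower threshold means more filtering, since content is flagged at a lower toxicity score.

```python
# Hypothetical layered policy: a platform-enforced floor for the worst
# categories, plus per-user preferences that may only tighten filtering.
PLATFORM_FLOOR = {"violent_extremism": 0.5, "grooming": 0.5}  # always on
DEFAULTS = {"harassment": 0.9, "hate_speech": 0.9, "profanity": 1.1}  # 1.1 = off

def effective_thresholds(user_prefs: dict) -> dict:
    """Merge user preferences over the defaults, then clamp the floored
    categories so no user setting can relax them below the floor."""
    merged = {**DEFAULTS, **user_prefs}
    for profile, floor in PLATFORM_FLOOR.items():
        merged[profile] = min(merged.get(profile, floor), floor)
    return merged

# A user who opts into stricter filtering of profanity:
print(effective_thresholds({"profanity": 0.6}))
```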
