Verity is Keeping the Media Honest by fighting fake news and disinformation

“Machines can’t tell the ‘accuracy’ of content. Our systems rely instead on signals we find align with relevancy of topic and authority.” — Danny Sullivan, Google Search liaison, September 29, 2019

Verity democratizes access to credible information by using AI and machine-learning algorithms for search and evaluation, because credible information is the foundation of shared understanding. Verity, a news search and content evaluation platform, today announced a sneak peek at its content evaluation technology with the release of its COVID-19 Credibility Tracker, which will produce a weekly round-up of COVID-19 articles with their associated Verity Scores, making it easier for users to know what’s credible and what isn’t.

What’s the background on Verity?

In March 2019, NTENT CEO Pat Condo and Credibility CEO Mark Young were introduced by a shared investor. Pat’s company was designing a new way to tackle search, one that relied on artificial intelligence and could leverage voice. Mark’s company was developing an automated and scalable platform to assess the accuracy and bias of news media, news outlets, and journalists using artificial intelligence and machine learning. Credibility’s technology is unbiased, scientific, and empowering.

Pat has founded several search technology companies over the last 25 years, including NTENT, Inc., Vertical Search Works, Convera Corporation, and Excalibur Technologies. He was CEO of both Convera and Excalibur, and both companies were listed on NASDAQ. Earlier in his career, he held senior executive positions with Digital Equipment Corporation, Harris Semiconductor, and Northrop Aircraft.

Mark formed Credibility after spending three decades in the U.S. military and in senior positions in the Intelligence Community. He served in several senior roles at U.S. Cyber Command, the National Security Agency, and the Permanent Select Committee on Intelligence at the U.S. House of Representatives, where he worked extensively on national security issues, cyberwarfare operations, and disinformation countermeasures. He has extensive experience in Information Operations: the integration of information-related capabilities to influence, disrupt, corrupt, or usurp the decision-making of adversaries.

Both executives recognized that the current state of digital content production and distribution was problematic. Throughout 2019, news outlets were filing stories every week about the targeted disinformation campaigns at work in the 2016 U.S. presidential election. How would the operatives behind those attacks target the 2020 elections? At the same time, refugees from Latin America were crossing the southern U.S. border, and who those people were was largely defined by which publication you read: they could be criminals or victims depending on your news outlet of choice.

The result is Verity, a joint venture that brings Credibility scoring to the NTENT platform.

How does it work?

First, the Verity platform uses natural language processing to map and highlight any indications of bias or error in the content; more than 4,000 news sources are indexed for scoring. Next, the proprietary artificial intelligence engine analyzes those indications and scores the content in real time. The engine uses 56 different algorithms to assess indicators covering a wide range of bias and error, including, among others (a simplified sketch of this kind of pipeline follows the list):

· Title Representativeness

· Hyperbolic Language

· Quotes from Outside Experts

· Citation of Organizations and Studies

· Reputation of Citations
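
To make this concrete, here is a minimal sketch of how an indicator-based scoring pass might be structured. Verity has not published its implementation, so the indicator functions, the lexicon, the weights, and the score_article interface below are hypothetical illustrations of the general approach, not Verity’s actual algorithms.

```python
import re
from dataclasses import dataclass

# Hypothetical indicator checks; Verity's 56 production algorithms are proprietary.
HYPERBOLIC_WORDS = {"shocking", "unbelievable", "destroys", "slams", "stunning"}

def hyperbolic_language(title: str, body: str) -> float:
    """Fraction of words drawn from a small hyperbole lexicon (0 = none)."""
    words = re.findall(r"[a-z']+", (title + " " + body).lower())
    return sum(w in HYPERBOLIC_WORDS for w in words) / len(words) if words else 0.0

def outside_expert_quotes(body: str) -> float:
    """Crude proxy for expert sourcing: quoted passages attributed with 'said'."""
    return min(1.0, len(re.findall(r'"[^"]+"\s*,?\s*said', body)) / 3.0)

@dataclass
class Article:
    title: str
    body: str
    source: str

def score_article(article: Article) -> float:
    """Combine indicator signals into a single 0-100 score.

    A real engine would weight dozens of indicators; two are combined
    here with made-up weights, purely for illustration.
    """
    penalty = hyperbolic_language(article.title, article.body)
    credit = outside_expert_quotes(article.body)
    raw = 0.5 + 0.5 * credit - 0.5 * penalty
    return round(max(0.0, min(1.0, raw)) * 100, 1)
```

Because every step is mechanical, a pipeline like this can run over each newly indexed article as it arrives, which is what makes machine scoring scalable in a way human review is not.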

Finally, the engine generates a score, which can be applied to the content itself, to the site on which it appears, and, when available, to the author of the content. Those scores are stored in the Insight Library, which can then be referenced for future queries. This assessment process allows users to evaluate a wide range of content areas with mathematically grounded objectivity and to use the Credibility Score to know how much they can trust what they’re reading.
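
The Insight Library can be pictured as a score store keyed at several levels of granularity, so a site- or author-level score can be served on a future query without re-reading any content. The sketch below assumes a simple key-value layout; the names (InsightLibrary, put, lookup) are illustrative assumptions, not Verity’s actual API.

```python
from collections import defaultdict

class InsightLibrary:
    """Toy score store: per-article scores, plus running averages for the
    site and (when known) the author, so future queries can be answered
    without re-scoring the content."""

    def __init__(self):
        self.article_scores = {}             # url -> score
        self.aggregates = defaultdict(list)  # ("site", domain) or ("author", name) -> scores

    def put(self, url: str, site: str, score: float, author: str | None = None):
        self.article_scores[url] = score
        self.aggregates[("site", site)].append(score)
        if author:
            self.aggregates[("author", author)].append(score)

    def lookup(self, kind: str, key: str) -> float | None:
        """Return the average score for a site or author, if any is stored."""
        scores = self.aggregates.get((kind, key))
        return sum(scores) / len(scores) if scores else None

library = InsightLibrary()
library.put("https://example.com/covid-study", "example.com", 82.5, author="J. Doe")
print(library.lookup("site", "example.com"))  # 82.5
```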

While other companies provide assessments of content based on political leaning or factual accuracy, their systems all share the same flaw: a reliance on people. Because groups such as PolitiFact, Snopes, and FactCheck.org use people to score content, those people’s biases, known or unknown, become part of the evaluation process. Since you don’t know who is scoring what, or what that person’s biases may be, those biases can’t be accounted for. The same piece of content could receive two different scores because two different people read it differently; that’s just human nature. Moreover, because people aren’t inherently scalable, the amount of content that can be scored is limited by how much people can read and analyze.

Verity adds credibility scoring to the widest range of search results so you know whether to trust what you read.