AI You Can Use Today


Imagga, Cloudsight and Chooch talk about AI for media licensors

Photo by Christian Wiediger on Unsplash

As a follow-up to a great introductory session on AI at its last conference, the Digital Media Licensing Association hosted a members-only webinar Thursday on Artificial Intelligence You Can Use Today.

We were joined by an expert panel from the visual AI and media fields:

• Georgi Kadrev, CEO, Imagga

• Brad Folkens, CEO, Cloudsight

• Jeffrey Goldsmith, VP of Marketing, Chooch

• Jonathan Wells, Bureau Chief, SIPA USA

• Thomas Smith, CEO, Gado Images

Georgi talked about Imagga’s platform for automatic tagging, custom modeling, identifying colors and other visual aspects of images, content moderation and more. In the past year, he said, the world has generated more photos than in the whole history of analog photography. AI and automatic processing are more important than ever.
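For readers who want to experiment beyond the demo page, the sketch below shows roughly what a call to Imagga's auto-tagging endpoint looks like from Python. The credentials are placeholders, and the endpoint path and response fields should be verified against Imagga's current API documentation.

```python
import requests

# Placeholder credentials -- obtain real ones from an Imagga account.
API_KEY = "your_imagga_api_key"
API_SECRET = "your_imagga_api_secret"

def tag_image(image_url, min_confidence=30):
    """Request automatic tags for a publicly accessible image URL."""
    response = requests.get(
        "https://api.imagga.com/v2/tags",
        params={"image_url": image_url},
        auth=(API_KEY, API_SECRET),
        timeout=30,
    )
    response.raise_for_status()
    tags = response.json()["result"]["tags"]
    # Keep only tags above a confidence threshold.
    return [
        (t["tag"]["en"], t["confidence"])
        for t in tags
        if t["confidence"] >= min_confidence
    ]

if __name__ == "__main__":
    for tag, confidence in tag_image("https://example.com/photo.jpg"):
        print(f"{tag}: {confidence:.1f}")
```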

Brad talked about Cloudsight’s caption-writing capabilities. Nearly unique in the AI industry, Cloudsight’s solution automatically generates a full, sentence-length caption for each image. It also uses a hybrid intelligence model, in which humans work in conjunction with AI to caption images more accurately.
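Because a human may review a caption before it is returned, Cloudsight-style captioning typically follows a submit-then-poll pattern. The sketch below illustrates that flow; the endpoint paths, authorization header, and response field names are assumptions and should be confirmed against Cloudsight's API documentation.

```python
import time
import requests

# Assumed endpoint and auth scheme -- verify against Cloudsight's current API docs.
API_BASE = "https://api.cloudsight.ai/v1"
API_KEY = "your_cloudsight_api_key"
HEADERS = {"Authorization": f"CloudSight {API_KEY}"}

def caption_image(image_url, poll_interval=2, max_polls=15):
    """Submit an image and poll until the (possibly human-assisted) caption is ready."""
    # Step 1: submit the image for captioning.
    submit = requests.post(
        f"{API_BASE}/images",
        headers=HEADERS,
        data={"remote_image_url": image_url, "locale": "en-US"},
        timeout=30,
    )
    submit.raise_for_status()
    token = submit.json()["token"]

    # Step 2: poll for the result; hybrid human/AI review means it is not instant.
    for _ in range(max_polls):
        result = requests.get(f"{API_BASE}/images/{token}", headers=HEADERS, timeout=30)
        result.raise_for_status()
        body = result.json()
        if body.get("status") == "completed":
            return body.get("name")  # the sentence-length caption
        time.sleep(poll_interval)
    raise TimeoutError("Caption was not ready in time")
```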

Jeff described how Chooch builds custom AI solutions to tag images for media clients and clients in other industries. He explained how Chooch’s solution allows companies to tag people in images, add descriptions and answer questions about the specific concepts and actions depicted.

Jonathan shared how SIPA uses Chooch to automatically tag people in images from events, along with other automated processing capabilities.

Tom shared how Gado Images uses Imagga to suggest tags for the company’s research team to review, and Cloudsight to create finding aids for large historical collections.

The presentations led to an active Q&A session:

Custom Training

Q: Can an AI solution be trained to provide output good enough for use on stock media marketplaces?

A: The presenters felt this is possible, either through robust development of custom models or through the inclusion of more human quality assurance.

Q: Can these technologies be applied to video?

A: Yes. The presenters are applying their solutions to video, either by analyzing sampled frames or by looking at actions across multiple frames to understand a video’s semantic content. Video analysis is currently offered on a custom basis but is likely to be available more broadly down the line.
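One common way to apply image models to video, along the lines the presenters described, is to sample frames at a fixed interval and tag each frame individually. The sketch below uses OpenCV for the sampling step only; the tagging call itself would be whichever image API you already use, and the aggregation strategy is an assumption for illustration.

```python
import cv2  # pip install opencv-python

def sample_frames(video_path, every_n_seconds=1.0):
    """Yield (timestamp_seconds, JPEG bytes) sampled from a video at a fixed interval."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(round(fps * every_n_seconds)))
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0:
            encoded, jpeg = cv2.imencode(".jpg", frame)
            if encoded:
                yield index / fps, jpeg.tobytes()
        index += 1
    capture.release()

# Each sampled frame can then be sent to an image-tagging endpoint, and the
# per-frame tags merged (for example, by counting how often each tag appears)
# to approximate the video's overall semantic content.
```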

Q: Can location data, existing captions, custom terms and more be woven into the AI responses?

A: All the presenters shared that this can be done. In fact, location data is already used to help their models produce the best possible output.
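As a rough illustration of how location data, existing captions and custom terms might be merged with AI output, the sketch below post-processes a list of AI-suggested tags using a house vocabulary. All names and fields here are hypothetical; in practice each vendor handles this enrichment inside its own platform.

```python
def enrich_metadata(ai_tags, location=None, existing_caption=None, custom_terms=None):
    """Merge AI-suggested tags with location data, an existing caption, and custom terms.

    ai_tags: list of (tag, confidence) pairs from a tagging API.
    location: optional human-readable place name (e.g. from EXIF GPS reverse-geocoding).
    existing_caption: optional caption already attached to the asset.
    custom_terms: optional mapping of generic AI tags to preferred house vocabulary.
    """
    custom_terms = custom_terms or {}
    keywords = []
    for tag, _confidence in ai_tags:
        # Translate generic AI tags into the agency's preferred terminology.
        keywords.append(custom_terms.get(tag, tag))
    if location:
        keywords.append(location)
    return {
        "keywords": sorted(set(keywords)),
        "caption": existing_caption,
        "location": location,
    }

# Example: map the generic tag "football" to the house term "soccer" and
# append the shoot location to the keyword list.
record = enrich_metadata(
    [("football", 92.1), ("stadium", 88.4)],
    location="London, United Kingdom",
    existing_caption="Fans arrive ahead of the match.",
    custom_terms={"football": "soccer"},
)
print(record["keywords"])
```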

Legal issues were another concern. The presenters shared that they have robust tracking systems in place to audit the sources of their data. They obtain training data either directly from clients (where it is kept segregated from other users’ data) or from cleared providers. Others in the industry who are less conscientious may be scraping data from unauthorized sources such as LinkedIn, a risk our industry must avoid.

The presenters shared demos that participants can use for free to test their services on their own images:

• Imagga: https://imagga.com/auto-tagging-demo

• Cloudsight: http://www.cloudsight.ai

• Chooch: https://chooch.ai/demo/

A recording of the webinar will be made available to DMLA members. Join today to receive access if you are not already a member.

For an introduction to any of the presenters, email tom@gadoimages.com.

The DMLA serves visual media licensing professionals who share the common goal of building a stronger and more profitable industry through education, advocacy and connection. For more information about the organization and how it can help your creative business, contact Executive Director Elaine Vitt at elaine@digitalmedialicensing.org or visit the DMLA website to join.