Artificial Intelligence Makes Live Captioning Better


Anyone who has ever belted out “Hold me closer, Tony Danza,” instead of Elton John’s actual lyric, “Hold me closer, tiny dancer,” knows the importance of hearing and understanding things clearly.

Poor acoustics in a conference room, a soft-spoken presenter in a lecture hall, or a lot of background noise during a church service can all make it difficult to hear clearly. In those cases, live captioning is a way to make sure everyone knows what is being said, whether they can hear it or not.  

In this upcoming webinar, PSNI partner and assistive listening leader Williams AV will share their expertise about captioning types, the markets that use captioning, and the end-user benefits of live captioning. The webinar will also include a live demonstration of Williams AV’s new AI-based live captioning tool, Caption Assist.

Captioning Helps You Understand Even When You Can’t Hear

Simply put, captioning is displaying a text version of spoken words and other sounds. Types of captioning include:

  • Closed captions, which the viewer can control by turning them on and off
  • Open captions, which are a permanent fixture on a video or other program so everyone can see them
  • Live captions, which are created through typing, stenocaptioning, or respeaking as an event or program is taking place

Captions can serve as an alternative assistive listening solution for individuals who are hearing impaired, but they also have broader applications in educational institutions, houses of worship, corporate offices, and courtrooms, where hearing clearly is critical but sound reinforcement is not always possible.


Williams AV Uses Artificial Intelligence to Improve Live Captioning

But, as anyone who has watched a live captioned program such as a newscast knows, it is not a perfect system. Live captions can be filled with typos, errors, and “Hold me closer, Tony Danza”-style misinterpretations. In other words, they are subject to human error. But what if it wasn’t humans doing the captioning?

Artificial Intelligence (AI) makes it possible for machines to learn from data and experience and then initiate actions or responses that mimic what a human would do. Using this capability, Williams AV has enhanced its captioning offering with an AI-based live captioning tool called Caption Assist.

Caption Assist is a real-time open captioning system that can translate up to 27 languages and more than 70 dialects with up to 94 percent accuracy. Powered by the Google Cloud platform, Caption Assist can be set up and controlled through a smartphone app.
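
For readers curious about what cloud-based live transcription looks like under the hood, below is a minimal Python sketch using the Google Cloud Speech-to-Text streaming client. This is only a general illustration, not Williams AV’s Caption Assist implementation; the audio file name, chunk size, and language settings are assumptions made for the example.

```python
# A minimal sketch of streaming speech recognition with the
# google-cloud-speech Python client. Illustrative only; not
# Williams AV's Caption Assist implementation.
from google.cloud import speech


def audio_chunks(path, chunk_size=4096):
    """Yield raw audio bytes in small chunks, standing in for a live microphone feed."""
    with open(path, "rb") as audio_file:
        while True:
            chunk = audio_file.read(chunk_size)
            if not chunk:
                break
            yield chunk


def stream_captions(path):
    client = speech.SpeechClient()

    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    streaming_config = speech.StreamingRecognitionConfig(
        config=config,
        interim_results=True,  # partial captions while a sentence is still being spoken
    )

    requests = (
        speech.StreamingRecognizeRequest(audio_content=chunk)
        for chunk in audio_chunks(path)
    )
    responses = client.streaming_recognize(config=streaming_config, requests=requests)

    # Print finalized caption lines as the recognizer commits them.
    for response in responses:
        for result in response.results:
            if result.is_final:
                print(result.alternatives[0].transcript)


if __name__ == "__main__":
    stream_captions("meeting_audio.wav")  # hypothetical audio file
```

In a live captioning product, the finalized text would be pushed to a display or overlay rather than printed, but the basic flow of streaming audio in and receiving text back is the same.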

From video presentations to online training, and webinars to video calls, Caption Assist can enhance learning and engagement in schools, offices, houses of worship, courtrooms and everywhere in between.

For a live demonstration of Caption Assist, and to learn more about how captioning can support your business or organization, register today for our upcoming webinar with Williams AV.


Diego Perez

Chairperson

Country Manager at Newtech

Diego José Pérez has over 30 years of experience designing and implementing corporate video conferencing networks and services on Microsoft platforms at top companies and with the most important players in the market. Since 2016, Diego has served as LATAM General Manager for Newtech Solutions Multimedia SA, a unified communications multimedia technology company. Diego has experience in leadership, planning, marketing, and sales, with excellent skills in negotiation, management control, strategy, and people management.