New YouTube Auto Captioning Aids Search, Usability, and the Hearing Impaired

By Daisy Whitney on 12/17/2009 10:40 PM @daisywhitney

MOUNTAIN VIEW, CA — YouTube recently added automatic captioning to its videos, a move that has far-reaching implications for the deaf and hard-of-hearing, international users, and publishers who seek to improve search optimization.

I traveled down to Google’s headquarters yesterday to meet with Ken Harrenstien, a software engineer at Google who helped develop the system.

While YouTube has offered captioning capabilities since 2006, the new features make the process of adding captions easier for some videos and also let creators sync the captions with the spoken words in a video, Harrenstien explained through an interpreter. (His wife Lori served as his interpreter during our visit.)

In addition, the captions can be translated into 51 languages, making many English-language videos accessible around the world.

But there’s also a strong business case to be made for captions because they improve the searchability of videos. “Because captions are text — guess what? You can search them. And at Google we can use that to find out where exactly in a video there is a short snippet,” he explained.
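Because captions are just timestamped text, a search index over them is straightforward to build. The sketch below is purely illustrative, not YouTube's implementation: it assumes captions in the common SRT format and shows how a keyword lookup can return the moment in the video to jump to.

```python
# Illustrative sketch: captions are timestamped text, so searching them
# can return the exact second in the video where a phrase is spoken.

SRT = """\
1
00:00:01,000 --> 00:00:04,000
Welcome to the physics lecture.

2
00:00:04,500 --> 00:00:09,000
Today we discuss conservation of momentum.
"""

def parse_srt(srt_text):
    """Parse an SRT string into (start_seconds, text) cues."""
    cues = []
    for block in srt_text.strip().split("\n\n"):
        lines = block.splitlines()
        start = lines[1].split(" --> ")[0]           # e.g. "00:00:04,500"
        h, m, s = start.replace(",", ".").split(":")
        seconds = int(h) * 3600 + int(m) * 60 + float(s)
        cues.append((seconds, " ".join(lines[2:])))
    return cues

def search_captions(cues, query):
    """Return the start time of the first cue containing the query."""
    q = query.lower()
    for seconds, text in cues:
        if q in text.lower():
            return seconds
    return None

cues = parse_srt(SRT)
print(search_captions(cues, "momentum"))  # 4.5
```

A real system would of course index millions of caption tracks rather than scan one file, but the principle is the same: text plus timestamps yields deep links into video.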

Adding captions can help grow views for publishers, he said. “Hopefully it will make it easier to add captions and I think very soon we will start to notice that it really makes a difference in how many people watch their videos.”

The technology works by linking Google’s automatic speech recognition technology with the YouTube caption system, Harrenstien explained. The automatic captioning is only available on a few channels, including PBS, National Geographic, and Duke University. In time, the goal is to expand that capability to all videos, we are told.

But any YouTube creator can use the automatic timing tool, which lets a user upload text when they post a video. YouTube then matches the spoken words to the written ones, creating captions, Harrenstien explained.
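The automatic timing idea can be sketched in a few lines. In this hypothetical example, the word-level timestamps stand in for the output of a speech recognizer, and the grouping rule is an invented simplification, not YouTube's actual alignment algorithm: the creator supplies only the transcript, and the recognizer's timings turn it into timed captions.

```python
# Hedged sketch of "automatic timing": the creator types only the
# transcript; a speech recognizer (simulated here as precomputed
# word timings) supplies when each word was spoken, and the two are
# merged into timed captions.

transcript = "hello and welcome to the show today we talk about captions"

# Assumed recognizer output: (word, start_time_in_seconds)
word_times = [
    ("hello", 0.0), ("and", 0.4), ("welcome", 0.7), ("to", 1.1),
    ("the", 1.3), ("show", 1.5), ("today", 2.2), ("we", 2.6),
    ("talk", 2.8), ("about", 3.1), ("captions", 3.5),
]

def time_transcript(transcript, word_times, words_per_caption=5):
    """Group transcript words into captions, timed by the recognizer."""
    words = transcript.split()
    captions = []
    for i in range(0, len(words), words_per_caption):
        chunk = words[i:i + words_per_caption]
        start = word_times[i][1]   # time the chunk's first word was spoken
        captions.append((start, " ".join(chunk)))
    return captions

for start, text in time_transcript(transcript, word_times):
    print(f"{start:4.1f}s  {text}")
```

The hard part in practice is the alignment itself (matching recognized speech against the uploaded text despite recognition errors); once word times exist, producing caption cues is mechanical.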

From a personal standpoint, Harrenstien said he has dreamed about this technology for a long time. Because he has three young children, he doesn’t have much time to watch videos, but now he said he can understand the ones they are watching.

Here’s a story on the new captioning program by Miguel Helft of The New York Times.

Daisy Whitney, Senior Producer

Update 12/21:  We created the transcript below and uploaded it to YouTube, which generated the automated captions in the YouTube video posted here. It came out really well.

Video Transcript

Daisy Whitney:  Hey there, I’m Daisy Whitney reporting for Beet.TV and NBC Bay Area here at Google’s headquarters. Google recently introduced automatic captioning for its videos on YouTube and we’re going to talk about what that means and what it means for users with a software engineer.

So Ken, you have some interesting updates with what’s happening with YouTube and captions. Can you walk us through some of the new products, updates that you have?

Ken Harrenstien:  Sure. Two things we just announced a few weeks ago: 1) automatic captioning and 2) automatic timing. And to me it’s some of the most important things that we’ve done so far. Obviously the captions themselves are very important for many reasons. Would you like me to explain the reasons?

Daisy Whitney: Yes I would.

Ken Harrenstien:  Sure. Okay. Well obviously we need captions for someone like myself when I watch video. Two, there’s other people who speak a different language but can understand English if they read it or another language, and the captions allow us to translate the language to theirs. So it’s not just access for me, a deaf person, but it’s access for the world. The third, we have people who, for whatever reason, like to have the sound off. Maybe they’re busy, maybe they’re on a train, on a bus, maybe it’s at night and they don’t want to wake up somebody.

And fourth, because captions are text, guess what, you can search on that text. And at Google, we can use that, for example, to find exactly where in the video you have a special snippet and jump straight to that part of the video. And for a short video maybe it doesn’t matter, but for a movie, ooh wow, it really helps a lot. And if you’re searching for a class lecture on physics, maybe you want to skip the boring parts and go straight to what you need to know.

Daisy Whitney:  So it improves searchability on YouTube?

Ken Harrenstien:  Yes. Now, I need to follow up. Two things I want to add. Basically it makes it easier to create captions for videos because we have so much content now. The number is, every minute we have 20 hours more of video being uploaded to YouTube. That’s a lot. I mean, a lot of videos are already captioned, but it’s just a very small amount compared to what’s coming in.

So I figured, fortunately the timing is right to hook up a lot of important things at Google and hook them together to make this happen. We linked speech recognition to the technology. Suppose you have a video; it doesn’t yet have captions. The captions are uploaded by the people who own the video. I think it’s easy, but for many people it’s not easy. They have to type the words, they have to find the time, they have to upload the file, so some people don’t bother. But now, if we can recognize the speech in the video, we can generate captions for you. That’s early–it’s only available on a few educational channels and government channels and we’re trying to roll it out as fast as we can.

But the second thing, automatic timing. Suppose okay you’re willing to type in the words for the transcript–that’s all you have to do. You upload that file and we will use speech recognition to find out when those words were spoken and change them into captions. We actually use that technique ourselves at Google for a lot of YouTube videos.

Daisy Whitney:  So if I upload this video to YouTube, what do I need to do as a user? Do I need to create a transcript and upload that transcript so that there can be captions for this video, for instance?

Ken Harrenstien:  Okay, there’s two things you can do. First, if you were one of the partners for whom this is enabled, then we will do speech recognition for you. It won’t be perfect, but you can take that, fix it up, and upload it as real captions and it will be perfect. Or the second thing we can do. Often you can write down the transcript of what people say in the video, upload that file, and we will do the rest. And I hope you do!

Daisy Whitney:  Who are your partners right now for whom you do the automatic captioning? And is your goal at some point to do automatic captioning for every video? Is that even possible?

Ken Harrenstien:  Obviously because of our, I mean Google’s, mission statement, we want to make everything accessible to everyone. And we will get there someday, maybe. But for now, the list of partners, we have that on our blog post. We just recently added a lot more. Sorry I forget the list.

Daisy Whitney:  We’ll get it. We’ll look it up. What do you see as the value for publishers and content owners?

Ken Harrenstien:  Hopefully we’ll make it so much easier for them to add captions and then they will. And if they can’t, for whatever reason, we will do the best we can. And I think very soon we will start to notice that it really makes a difference in how many people watch their videos.

Daisy Whitney:  More views. What does it mean for you personally?

Ken Harrenstien: Well I’ve dreamed about this for many years of course. It’s a funny thing, I don’t have a lot of time to watch videos myself because I have three small children, but if they want to watch something, I like to understand what they’re watching too. So it’s great.

Daisy Whitney:  That is great. Thank you so much.
