New YouTube Auto Captioning Aids Search, Usability, and the Hearing Impaired

By Daisy Whitney on 12/17/2009 10:40 PM @daisywhitney

MOUNTAIN VIEW, CA — YouTube recently added automatic captioning to its videos, a move with far-reaching implications for the deaf and hard-of-hearing, international users, and publishers seeking to increase search optimization.

I traveled down to Google’s headquarters yesterday to meet with Ken Harrenstien, a software engineer at Google who helped develop the system.

While YouTube has offered captioning capabilities since 2006, the new features make the process of adding captions easier on some videos and let creators synchronize captions with the spoken words in a video, Harrenstien explained through an interpreter. (His wife Lori served as his interpreter during our visit.)

In addition, the captions can be translated into 51 languages, making many English-language videos accessible around the world.

But there’s also a strong business case to be made for captions because they improve the searchability of videos. “Because captions are text — guess what? You can search them. And at Google we can use that to find out where exactly in a video there is a short snippet,” he explained.
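The search mechanics Harrenstien describes can be sketched in a few lines. The snippet below is an illustrative example only, assuming captions stored in the standard SubRip (.srt) format, which is one of the formats YouTube accepts; the function and sample names are hypothetical:

```python
import re

def parse_srt(srt_text):
    """Parse SRT caption cues into (start_seconds, text) pairs."""
    cues = []
    for block in srt_text.strip().split("\n\n"):
        lines = block.strip().splitlines()
        if len(lines) < 3:
            continue
        # The second line holds the timing, e.g. "00:01:30,500 --> 00:01:34,000"
        m = re.match(r"(\d+):(\d+):(\d+),(\d+)", lines[1])
        h, mi, s, ms = (int(x) for x in m.groups())
        start = h * 3600 + mi * 60 + s + ms / 1000
        cues.append((start, " ".join(lines[2:])))
    return cues

def find_in_captions(cues, query):
    """Return the start time of the first cue containing the query."""
    q = query.lower()
    for start, text in cues:
        if q in text.lower():
            return start
    return None

SAMPLE = """\
1
00:00:05,000 --> 00:00:08,000
Welcome to the physics lecture.

2
00:01:30,500 --> 00:01:34,000
Now we turn to conservation of momentum.
"""

cues = parse_srt(SAMPLE)
print(find_in_captions(cues, "momentum"))  # 90.5
```

Because each cue carries a timestamp, a match can be turned directly into a "jump to this point in the video" link — the feature Harrenstien highlights for long lectures and movies.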

Adding captions can help grow views for publishers, he said. “Hopefully it will make it easier to add captions and I think very soon we will start to notice that it really makes a difference in how many people watch their videos.”

The technology works by linking Google’s automatic speech recognition technology with the YouTube caption system, Harrenstien explained. The automatic captioning is only available on a few channels, including PBS, National Geographic, and Duke University. In time, the goal is to expand that capability to all videos, we are told.

But any YouTube creator can use the automatic timing tool, which lets a user upload text when they post a video. YouTube then matches the spoken words to the written ones creating captions, Harrenstien explained.
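The automatic timing step amounts to a forced alignment: the speech recognizer supplies word-level timestamps, and each line of the uploaded transcript is assigned the time its first word was spoken. The toy sketch below illustrates the idea under a simplifying assumption that the transcript words match the recognized words one-to-one; the names are hypothetical and this is not YouTube's actual implementation:

```python
def align_transcript(lines, recognized):
    """Toy forced alignment.

    lines: transcript lines as uploaded by the creator.
    recognized: list of (word, start_seconds) from a speech
    recognizer, in spoken order. Each line becomes a caption cue
    starting at the time of its first word.
    """
    cues, i = [], 0
    for line in lines:
        start = recognized[i][1]
        cues.append((start, line))
        i += len(line.split())  # consume this line's words
    return cues

recognized = [("hello", 0.0), ("world", 0.4),
              ("this", 1.2), ("is", 1.5), ("a", 1.7), ("test", 1.9)]
print(align_transcript(["hello world", "this is a test"], recognized))
# [(0.0, 'hello world'), (1.2, 'this is a test')]
```

A real aligner must also tolerate recognition errors and wording mismatches, which is why the creator supplies the authoritative text and the recognizer supplies only the timing.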

From a personal standpoint, Harrenstien said he has dreamed about this technology for a long time. Because he has three young children, he doesn’t have much time to watch videos, but now he said he can understand the ones they are watching.

Here’s a story on the new captioning program by Miguel Helft of The New York Times.

Daisy Whitney, Senior Producer

Update 12/21: We created the transcript below and uploaded it to YouTube, which generated the automatic captions in the video posted here. It came out really well.

Video Transcript

Daisy Whitney:  Hey there, I’m Daisy Whitney reporting for Beet.TV and NBC Bay Area here at Google’s headquarters. Google recently introduced automatic captioning for its videos on YouTube and we’re going to talk about what that means and what it means for users with a software engineer.

So Ken, you have some interesting updates with what’s happening with YouTube and captions. Can you walk us through some of the new products, updates that you have?

Ken Harrenstien:  Sure. Two things we just announced a few weeks ago: 1) automatic captioning and 2) automatic timing. And to me it’s some of the most important things that we’ve done so far. Obviously the captions themselves are very important for many reasons. Would you like me to explain the reasons?

Daisy Whitney: Yes I would.

Ken Harrenstien:
  Sure. Okay. Well obviously we need captions for someone like myself when I watch video. Two, there’s other people who speak a different language but can understand English if they read it or another language, and the captions allow us to translate the language to theirs. So it’s not just access to me, a deaf person, but it’s access for the world. The third, we have people who, for whatever reason, like to have the sound off. Maybe they’re busy, maybe they’re on a train, on a bus, maybe it’s at night and they don’t want to wake up somebody.

And fourth, because captions are text, guess what, you can search on that text. And at Google, we can use that, for example, to find exactly where in the video you have a special snippet and jump straight to that part of the video. And for a short video maybe it doesn’t matter, but for a movie, ooh wow, it really helps a lot. And if you’re searching for a class lecture on physics, maybe you want to skip the boring parts and go straight to what you need to know.

Daisy Whitney:  So it improves searchability on YouTube?

Ken Harrenstien:  Yes. Now, I need to follow up. Two things I want to add. Basically it makes it easier to create captions for videos because we have so much content now. The number is every minute we have 20 hours of video being uploaded to YouTube. That’s a lot. I mean, a lot of videos are already captioned, but it’s just a very small amount compared to what’s coming in.

So I figured, fortunately the timing is right to hook up a lot of important things at Google and hook them together to make this happen. We linked speech recognition to the technology. Suppose you have a video; it doesn’t yet have captions. The captions are uploaded by the people who own the video. I think it’s easy, but for many people it’s not easy. They have to type the words, they have to find the time, they have to upload the file, so some people don’t bother. But now, if we can recognize the speech in the video, we can generate captions for you. That’s early–it’s only available on a few educational channels and government channels and we’re trying to roll it out as fast as we can.

But the second thing, automatic timing. Suppose okay you’re willing to type in the words for the transcript–that’s all you have to do. You upload that file and we will use speech recognition to find out when those words were spoken and change them into captions. We actually use that technique ourselves at Google for a lot of YouTube videos.

Daisy Whitney:
  So if I upload this video to YouTube, what do I need to do as a user? Do I need to create a transcript and upload that transcript so that there can be captions for this video for instance?

Ken Harrenstien:
  Okay, there’s two things you can do. First, if you were one of the partners for whom this is enabled, then we will do speech recognition for you. It won’t be perfect, but you can take that, fix it up, and upload it as real captions and it will be perfect. Or the second thing we can do. Often you can write down the transcript of what people say in the video, upload that file, and we will do the rest. And I hope you do!

Daisy Whitney:
  Who are your partners right now for whom you do the automatic captioning? And is your goal at some point to do automatic captioning for every video? Is that even possible?

Ken Harrenstien:  Obviously because of our, I mean Google’s, mission statement, we want to make everything accessible to everyone. And we will get there someday, maybe. But for now, the list of partners, we have that on our blog post. We just recently added a lot more. Sorry I forget the list.

Daisy Whitney:
We’ll get it. We’ll look it up. What do you see as the value for publishers and content owners?

Ken Harrenstien:  Hopefully we’ll make it so much easier for them to add captions and then they will. And if they can’t, for whatever reason, we will do the best we can. And I think very soon we will start to notice that it really makes a difference in how many people watch their videos.

Daisy Whitney:  More views. What does it mean for you personally?

Ken Harrenstien: Well I’ve dreamed about this for many years of course. It’s a funny thing, I don’t have a lot of time to watch videos myself because I have three small children, but if they want to watch something, I like to understand what they’re watching too. So it’s great.

Daisy Whitney:
That is great. Thank you so much.
