New YouTube Auto Captioning Aids Search, Usability, and the Hearing Impaired

By Daisy Whitney on 12/17/2009 10:40 PM @daisywhitney

MOUNTAIN VIEW, CA — YouTube recently added automatic captioning to its videos, a move with far-reaching implications for deaf and hard-of-hearing viewers, international users, and publishers seeking better search optimization.

I traveled down to Google’s headquarters yesterday to meet with Ken Harrenstien, a software engineer at Google who helped develop the system.

While YouTube has offered captioning capabilities since 2006, the new features make the process of adding captions easier on some videos and also let creators sync the captions to the spoken words in a video, Harrenstien explained through an interpreter. (His wife Lori served as his interpreter during our visit.)

In addition, the captions can be translated into 51 languages, making many English-language videos accessible around the world.

But there’s also a strong business case to be made for captions because they improve the searchability of videos. “Because captions are text — guess what? You can search them. And at Google we can use that to find out where exactly in a video there is a short snippet,” he explained.
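
To make the search idea concrete, here is a minimal sketch, assuming a simple list of (start-time, text) caption cues, of how caption text could be scanned to find the moment a phrase is spoken. The cue data and function name are hypothetical illustrations, not YouTube’s actual code.

```python
# Toy illustration: searching timed caption cues to find where a phrase
# is spoken, so a player could jump straight to that point in the video.
# The (start_seconds, text) cue format is an assumption for this sketch.

captions = [
    (0.0, "Hey there, I'm Daisy Whitney reporting for Beet.TV"),
    (12.5, "Two things we just announced: automatic captioning and automatic timing"),
    (47.0, "Because captions are text, you can search on that text"),
]

def find_in_video(cues, query):
    """Return the start time of the first caption cue containing the query."""
    query = query.lower()
    for start, text in cues:
        if query in text.lower():
            return start
    return None

offset = find_in_video(captions, "captions are text")
if offset is not None:
    print(f"Jump to {offset:.1f} seconds")  # e.g. seek the player to this offset
```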

Adding captions can help grow views for publishers, he said. “Hopefully it will make it easier to add captions and I think very soon we will start to notice that it really makes a difference in how many people watch their videos.”

The technology works by linking Google’s automatic speech recognition technology with the YouTube caption system, Harrenstien explained. The automatic captioning is only available on a few channels, including PBS, National Geographic, and Duke University. In time, the goal is to expand that capability to all videos, we are told.

But any YouTube creator can use the automatic timing tool, which lets a user upload a text transcript when they post a video. YouTube then matches the spoken words to the written ones, creating captions, Harrenstien explained.
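
As a rough sketch of that automatic-timing idea (not YouTube’s implementation), the snippet below pairs a plain uploaded transcript with hypothetical word-level timestamps of the kind a speech recognizer might produce, and emits SRT-style caption cues. The recognizer output, helper names, and the naive line-by-line matching are all assumptions for illustration.

```python
# Toy sketch of "automatic timing": align a plain transcript with word-level
# timestamps from a speech recognizer and emit SRT-style caption cues.
# The recognizer output below is made up for illustration; a real system
# would use proper forced alignment rather than simple word counting.

transcript_lines = [
    "Hey there, I'm Daisy Whitney reporting for Beet.TV",
    "Google recently introduced automatic captioning for YouTube videos",
]

# Hypothetical recognizer output: (word, start_seconds, end_seconds), in order.
recognized_words = [
    ("hey", 0.0, 0.3), ("there", 0.3, 0.6), ("i'm", 0.7, 0.9),
    ("daisy", 0.9, 1.3), ("whitney", 1.3, 1.8), ("reporting", 1.9, 2.4),
    ("for", 2.4, 2.6), ("beet.tv", 2.6, 3.2),
    ("google", 4.0, 4.4), ("recently", 4.4, 4.9), ("introduced", 4.9, 5.5),
    ("automatic", 5.5, 6.1), ("captioning", 6.1, 6.7), ("for", 6.7, 6.9),
    ("youtube", 6.9, 7.4), ("videos", 7.4, 7.9),
]

def srt_time(seconds):
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

cues = []
word_index = 0
for line in transcript_lines:
    n_words = len(line.split())
    chunk = recognized_words[word_index:word_index + n_words]
    word_index += n_words
    # The cue spans from the first to the last matched word of the line.
    cues.append((chunk[0][1], chunk[-1][2], line))

for i, (start, end, text) in enumerate(cues, 1):
    print(f"{i}\n{srt_time(start)} --> {srt_time(end)}\n{text}\n")
```

The point of the sketch is the division of labor: the creator supplies only the words, and the system supplies the timing.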

From a personal standpoint, Harrenstien said he has dreamed about this technology for a long time. Because he has three young children, he doesn’t have much time to watch videos, but now he said he can understand the ones they are watching.

Here’s a story on the new captioning program by Miguel Helft of The New York Times.

Daisy Whitney, Senior Producer

Update 12/21:  We created the transcript below and uploaded it to YouTube, which generated the automatic captions in the video posted here. It came out really well.

Video Transcript

Daisy Whitney:  Hey there, I’m Daisy Whitney reporting for Beet.TV and NBC Bay Area here at Google’s headquarters. Google recently introduced automatic captioning for its videos on YouTube and we’re going to talk about what that means and what it means for users with a software engineer.

So Ken, you have some interesting updates with what’s happening with YouTube and captions. Can you walk us through some of the new products and updates that you have?

Ken Harrenstien:  Sure. Two things we just announced a few weeks ago: 1) automatic captioning and 2) automatic timing. And to me these are some of the most important things that we’ve done so far. Obviously the captions themselves are very important for many reasons. Would you like me to explain the reasons?

Daisy Whitney: Yes I would.

Ken Harrenstien:  Sure. Okay. Well obviously we need captions for someone like myself when I watch video. Two, there are other people who speak a different language but can understand English if they read it, or another language, and the captions allow us to translate the language to theirs. So it’s not just access for me, a deaf person, but it’s access for the world. The third, we have people who, for whatever reason, like to have the sound off. Maybe they’re busy, maybe they’re on a train, on a bus, maybe it’s at night and they don’t want to wake up somebody.

And fourth, because captions are text, guess what, you can search on that text. And at Google, we can use that, for example, to find exactly where in the video you have a special snippet and jump straight to that part of the video. And for a short video maybe it doesn’t matter, but for a movie, ooh wow, it really helps a lot. And if you’re searching for a class lecture on physics, maybe you want to skip the boring parts and go straight to what you need to know.

Daisy Whitney:  So it improves searchability on YouTube?

Ken Harrenstien:  Yes. Now, I need to follow up. Two things I want to add. Basically it makes it easier to create captions for videos because we have so much content now. The number is, every minute we have 20 hours of video being uploaded to YouTube. That’s a lot. I mean, a lot of videos are already captioned, but it’s just a very small amount compared to what’s coming in.

So I figured, fortunately the timing is right to take a lot of important pieces at Google and hook them together to make this happen. We linked speech recognition to the caption system. Suppose you have a video; it doesn’t yet have captions. The captions are uploaded by the people who own the video. I think it’s easy, but for many people it’s not easy. They have to type the words, they have to find the time, they have to upload the file, so some people don’t bother. But now, if we can recognize the speech in the video, we can generate captions for you. That’s early; it’s only available on a few educational and government channels, and we’re trying to roll it out as fast as we can.

But the second thing is automatic timing. Suppose you’re willing to type in the words for the transcript; that’s all you have to do. You upload that file and we will use speech recognition to find out when those words were spoken and turn them into captions. We actually use that technique ourselves at Google for a lot of YouTube videos.

Daisy Whitney:  So if I upload this video to YouTube, what do I need to do as a user? Do I need to create a transcript and upload that transcript so that there can be captions for this video, for instance?

Ken Harrenstien:  Okay, there are two things you can do. First, if you are one of the partners for whom this is enabled, then we will do speech recognition for you. It won’t be perfect, but you can take that, fix it up, and upload it as real captions and it will be perfect. Or the second thing you can do: write down the transcript of what people say in the video, upload that file, and we will do the rest. And I hope you do!

Daisy Whitney:  Who are your partners right now for whom you do the automatic captioning? And is your goal at some point to do automatic captioning for every video? Is that even possible?

Ken Harrenstien:  Obviously because of our, I mean Google’s, mission statement, we want to make everything accessible to everyone. And we will get there someday, maybe. But for now, the list of partners, we have that on our blog post. We just recently added a lot more. Sorry I forget the list.

Daisy Whitney:  We’ll get it. We’ll look it up. What do you see as the value for publishers and content owners?

Ken Harrenstien:  Hopefully we’ll make it so much easier for them to add captions and then they will. And if they can’t, for whatever reason, we will do the best we can. And I think very soon we will start to notice that it really makes a difference in how many people watch their videos.

Daisy Whitney:  More views. What does it mean for you personally?

Ken Harrenstien: Well I’ve dreamed about this for many years of course. It’s a funny thing, I don’t have a lot of time to watch videos myself because I have three small children, but if they want to watch something, I like to understand what they’re watching too. So it’s great.

Daisy Whitney:  That is great. Thank you so much.
