Captioning and Subtitles

Image: front page of the video, a deep yellow background with white text and an Auslan interpreter standing ready.

What is the difference between captions and subtitles? This is a common question. Captioning can be done live as people speak, or it can be added to recorded video. Subtitles are translations from another language. What’s interesting is that most people can read captions and subtitles quite quickly.

The Australian Government has produced an interesting video showing how captioning is done. It is a behind-the-scenes look in which captioners explain how they do it. You can see them in action at their desks. One point of interest is that programs made overseas often have captions, but the captions don’t always come with the program when a network buys it.

Intellectual property rights become problematic, and in the end it is often quicker and cheaper to re-do the captions here in Australia. That might account for why SBS is more likely to have uncaptioned programs than some other networks – unless they are subtitled, of course. There is a second video showing how to turn captions on.

Image: a desk with two computer screens and subtitles at the bottom.

Subtitles

So how fast should subtitles be shown? It seems most of us can read subtitles more quickly than first thought. Recent research revealed that the gold standard of the six-second rule doesn’t have any traceable evidence to back it up. Now that we know people watch audio-visual material with subtitles and captions more frequently, the optimum speed is an important question. A study from Europe found it isn’t one-size-fits-all: viewers can keep up with fast subtitles, and slow speeds can actually be annoying. However, future research needs to include a wider range of people with different levels of reading skill. The title of the paper is, Viewers can keep up with fast subtitles: Evidence from eye movements.
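Subtitle speed is usually measured in characters per second (CPS): the number of visible characters divided by the time the subtitle stays on screen. A minimal sketch of that arithmetic (the function name and the example subtitle are illustrative, not taken from the paper):

```python
def chars_per_second(text: str, start: float, end: float) -> float:
    """Reading speed of one subtitle: visible characters / seconds on screen."""
    duration = end - start
    if duration <= 0:
        raise ValueError("subtitle must be displayed for a positive duration")
    # Line breaks are layout, not reading load, so they are not counted.
    return len(text.replace("\n", "")) / duration

# A two-line subtitle shown for six seconds (the traditional "six-second rule"):
line = "What is the difference\nbetween captions and subtitles?"
print(round(chars_per_second(line, 0.0, 6.0), 1))  # → 8.8
```

Under the six-second rule this example reads at roughly 9 CPS; the study suggests many viewers comfortably handle considerably faster rates.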

Editor’s Note: Some set-ups allow you to increase the speed of the video and still read the captions. I can get all the content of a ten-minute video in five to seven minutes this way. 

AI for captioning

Image: a speaker stands at a lectern with a captioning screen behind his right shoulder.

Artificial Intelligence (AI) can take captioning to another level, claims Microsoft. AI for automatic speech recognition removes the need for a human captioner for lectures in universities and elsewhere. The Microsoft AI blog article and video below focus on deaf students, but as more people take to captioning on their phones, it could make life easier for everyone. We already know that captioning helps all students by adding another layer of communication, a point the article also makes. The captioning is turned into transcripts, so students have a reference to read after the lecture. They can also have the lecture automatically translated into several languages. This detailed article covers automatic speech recognition, translation, and the growing demand for accessibility. The technology is not expected to take over from Auslan or ASL, as they are languages in their own right. However, it is another example of technology taking over a task from humans and bringing the advantages to more people.
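The caption-to-transcript step described above amounts to collecting the timed caption segments and joining their text in order. A minimal sketch of the idea, assuming a simple invented data format (this is not Microsoft’s API):

```python
from dataclasses import dataclass

@dataclass
class Caption:
    start: float  # seconds into the lecture
    end: float
    text: str

def to_transcript(captions: list[Caption]) -> str:
    """Join timed captions, in time order, into a plain readable transcript."""
    ordered = sorted(captions, key=lambda c: c.start)
    return " ".join(c.text.strip() for c in ordered if c.text.strip())

captions = [
    Caption(0.0, 2.5, "Welcome to today's lecture."),
    Caption(2.5, 5.0, "We'll start with speech recognition."),
]
print(to_transcript(captions))
# → Welcome to today's lecture. We'll start with speech recognition.
```

Real systems keep the timestamps as well, so students can jump from a sentence in the transcript back to that moment in the recording.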