Libras (Língua Brasileira de Sinais) is the acronym for Brazilian Sign Language, the language of Brazil's deaf community. Libras is an official language of Brazil, used by a segment of the population estimated at 5%. The various technology tools that fulfil sign language requirements are part of the evolving accessibility landscape. In this case, as often happens, an entrepreneur devising a cell phone app was first to market.
The option of using cell phones seems like a logical choice at first glance, but there are several problems with their use in a dark cinema theatre. They have never been found acceptable for other in-theatre uses, and this use case is no exception. The light they emit is not designed to be restricted to just that one audience member (as the device to the right does), so it isn't just a bother for the people in the immediate vicinity – it actually decreases perceived screen contrast for anyone getting a dose of the phone's light. Cell phones also don't handle the script securely, which is a requirement of the studios, which are obligated to protect the copyrights of the artists whose work they distribute. And, of course, cell phones all have a camera pointing at the screen – a big no-no for the same reason, but in spades.
But, the fact is, there are problems with all the various accessibility equipment offerings.
Accessibility equipment users generally don't give 5 stars to the choices they've been given, for many and varied reasons. Some of the technology – such as the tool to the right – requires constant re-focusing back and forth between the distant screen and the close foreground words illuminated in a special box mounted on a bendable stem that sits in the seat's cupholder. Another choice – somewhat better – is a pair of specialized glasses that presents the words seemingly in mid-air at a choice of distances. While these are easier on the eyes if one holds one's head in a single position, the words move around as one moves one's head. Laughter makes the words bounce. Words go sideways and in front of the action if you rest your head on your neighbor's shoulder.
[That is just the start of a litany of credible issues, perhaps to be reviewed in another article. It isn't only a one-sided issue – the equipment is expensive to buy, losses are often disproportionate for exhibitors, and the income derived doesn't support manufacturers' continuous development of new ideas.]
These (and other) technology solutions are often considered attempts to avoid the simplest alternative: putting the words on the screen, in what is called "Open Caption". OC is the absolute favorite of the accessibility audience – secure, pristine, on the same focal plane, and, importantly, all audiences are treated the same, with no one dragging around special equipment. But since words on screen haven't been widely used since shortly after 'talkies' became common, the general audience isn't used to them, and many fear audiences would vehemently object. Attempts to schedule special open-caption screening times haven't worked in the past for various reasons.
And while open caption might be the first choice for many, it isn't necessarily the best choice for a child, for example. Imagine a child who has probably been trained in sign language longer than s/he has been learning to read, and who certainly can't read as fast as the words of the new Incredibles movie go by. But signing? Probably better.
Sign language has been performed for years on stage, alongside public servants during announcements, and on screen, so the cinema is the next logical step. And just in time, as the studios and manufacturers' technology teams are able to jump on the project now that many new enabling components are available, tested, and able to be integrated into new solutions.
These include recently designed and documented synchronization tools that have gone through the SMPTE and ISO processes, which work well with the newly refined SMPTE-compliant DCP (now shipping, nearly worldwide – yet another story to be written). These help make the security and packaging concerns of a new data stream more easily addressable within the existing standardized workflows. The question started as 'how to get a new video stream into the package?', and the choice was made to include that stream as a portion of the audio stream.
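To see why carrying the stream inside the audio makes synchronization nearly free, consider the arithmetic: digital cinema audio runs at 48 kHz, and the most common picture rate is 24 fps, so each picture frame spans exactly 2,000 audio samples. A minimal sketch of that mapping (the function name is mine, purely illustrative):

```python
# Illustrative sketch: mapping picture frames to audio samples in a DCP.
# Assumes the common 48 kHz audio / 24 fps picture case; other picture
# rates divide the sample clock the same way.

AUDIO_SAMPLE_RATE = 48_000   # Hz, standard for digital cinema audio
PICTURE_FRAME_RATE = 24      # fps, the most common DCP picture rate

SAMPLES_PER_FRAME = AUDIO_SAMPLE_RATE // PICTURE_FRAME_RATE  # 2000

def audio_sample_range(frame_index: int) -> range:
    """Audio samples that play during a given picture frame."""
    start = frame_index * SAMPLES_PER_FRAME
    return range(start, start + SAMPLES_PER_FRAME)

print(audio_sample_range(0))    # range(0, 2000)
print(audio_sample_range(100))  # range(200000, 202000)
```

Any data carried in those samples is implicitly locked to the picture frame it plays against, which is exactly the property a sign-language video stream needs.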
There is history in using some of the 8 AES pairs for non-audio purposes (motion seating data, for example), and there are several good reasons for using an available, heretofore unused channel of a partly filled audio pair. Although the enforcement date has been moved back by the Brazilian normalization group, the technology has progressed such that the main facilitator of movies for the studios, Deluxe, has announced its capability of handling this solution. The ISDCF has a Technical Document in development and under consideration which should help others and smooth the introduction worldwide. [See: ISDCF Document 13 – Sign Language Video Encoding for Digital Cinema (a document under development) on the ISDCF Technical Documents web page.]
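As an illustration of the basic idea – and only that, since the actual encoding is specified by the ISDCF document still in development – a compressed sign-language video bitstream could be framed into the 24-bit sample words of one otherwise unused audio channel, along these hypothetical lines:

```python
# Hypothetical sketch: framing an arbitrary byte stream into the 24-bit
# sample words of an unused audio channel. The real mapping (framing,
# sync markers, codec) is defined by ISDCF Document 13, still in
# development; this only shows the general packing idea.

def pack_into_samples(payload: bytes) -> list[int]:
    """Pack bytes, 3 per sample, into 24-bit-sized sample words."""
    padded = payload + b"\x00" * (-len(payload) % 3)  # pad to 3-byte groups
    return [int.from_bytes(padded[i:i + 3], "big")
            for i in range(0, len(padded), 3)]

def unpack_from_samples(samples: list[int]) -> bytes:
    """Recover the byte stream at the playback end (padding included)."""
    return b"".join(s.to_bytes(3, "big") for s in samples)

# A real framing would carry length and sync fields; this round-trip
# just demonstrates that the channel can transport arbitrary data.
video_chunk = b"example compressed video bytes..."
samples = pack_into_samples(video_chunk)
assert unpack_from_samples(samples).rstrip(b"\x00") == video_chunk
```

The appeal of this design choice is that the data rides through existing, standardized DCP audio workflows – encryption, packaging, and playback timing come along without inventing a new essence track.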
One major question remains: where is the picture derived from? The choices are 1) to have a person do the signing, or 2) to use the cute, emoticon-style, computer-derived avatar.
The degree of nuance in signing is very well explained, with interesting and excellent analogy to music and other art, by the artist Christine Sun Kim in the following TED talk. She shows, as do the other links, that a lot of nuance is conveyed by the entire signing body to get ideas across. This shouldn't be surprising, since we know that very similar nuance is delivered and received in the spoken word through tone, emphasis, and inflection – nuance which isn't transmitted well in written language. And, as we witness with Siri and Alexa, avatars transmit a very limited set of these nuances.
The realities of post-production budgets, movie release times, and other delivery issues get involved. The worst case is the day-and-date release, which doesn't get locked product until days before the release. This compresses the time available for translations and captioning to 'beyond belief' short. Signing in a translated language like Brazilian Portuguese would then rely upon those translations. Fortunately, some of these packages can be sent after the main package and joined at the cinema, but either way the potential points of failure increase. The point being, getting a translation and letting an automated avatar program do the work may be the only way to get the product completed in a short amount of time, or within the budget of a documentary or other small-budget project.
So, the workflow is sorting itself out. Delivery mechanisms are still a work in progress. Whether there will be more pushes for this technology from other countries is a complete unknown. There are approximately 300 different sign languages in use around the world, including International Sign, which is used at international gatherings. There are a lot of kids who can't read subtitles, open or closed. Would they be better off seeing movies with their friends, or waiting until the streaming release at home?