I am doing a PhD researching sign-language-accessible sports-related TV programming. There are two main types of such programming. The first is sign-presented programming: sign-presented programmes are usually created, directed and presented by deaf people, and sign language is used throughout. The second is mainstream sports programming, created for the general population; in the few cases where this is made accessible via sign language, it is done by adding an in-vision sign language interpreter. It is this second type that I am particularly interested in. For example: How much of the commentary should be interpreted? If too much is interpreted, does this make the programme harder to access rather than easier? What are the norms for selecting the source text to interpret? How can coherence best be maintained between the video footage and the interpreted information?
I am also interested in exploring new technology and its application to interpreted sports programmes, for example the use of Augmented Reality (AR) glasses to position the interpreter in the space in front of the screen. I would like to understand whether this improves the experience for the viewer. Also, in an environment where AR is used to position information all around the screen or in front of the viewer, and the viewer is looking around within that environment, is the best experience achieved by keeping the interpreter in a static position, or by moving them in accordance with the viewer's direction of gaze?