Imagine meeting someone whose language you don’t share, and understanding each other anyway, in real time, through a pair of glasses.
UMCS — Understanding Multilingual Communication Spaces — is a five-year UK–Japan research collaboration funded by UKRI (EPSRC) and JST, bringing together teams from Heriot-Watt University, the University of Surrey, UCL, the National Institute of Informatics, the University of Tokyo, and Kyoto University.
The goal: real-time AI translation between British Sign Language (BSL), Japanese Sign Language (JSL), English, and Japanese, delivered through lightweight augmented reality (AR) glasses that people can wear in everyday conversation. AI is already being developed for sign languages: avatars, translation tools, recognition systems. But real-time, natural conversation across signed and spoken languages remains one of the hardest unsolved problems in the field.
The project will record and analyse natural BSL and JSL conversations, model how signers take turns and adapt across languages, develop AI that can both understand and produce signed language in conversation, and build AR interfaces so that translation appears within the natural flow of talk, as the sketch below illustrates.
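To make that pipeline concrete, here is a minimal sketch of the capture-recognise-translate-display loop such a system implies. UMCS has not published an architecture, so every name and interface below (SignSegment, recognise_signs, translate_glosses, render_caption) is a hypothetical illustration, not project code.

```python
from dataclasses import dataclass

# Hypothetical pipeline stages; all names and interfaces here are
# illustrative assumptions, not UMCS code.

@dataclass
class SignSegment:
    """A recognised stretch of signing with a gloss-level transcription."""
    glosses: list[str]  # e.g. ["HELLO", "YOU", "NAME", "WHAT"]
    start_ms: int
    end_ms: int

def recognise_signs(frames: list[bytes]) -> list[SignSegment]:
    """Stub for continuous sign recognition (video frames -> glosses)."""
    # A real system would run a vision model over the frame stream;
    # this stub returns one canned segment.
    return [SignSegment(["HELLO", "YOU", "NAME", "WHAT"], 0, 1800)]

def translate_glosses(segment: SignSegment, target_lang: str) -> str:
    """Stub for gloss-to-text translation (e.g. BSL glosses -> English)."""
    lookup = {("HELLO", "YOU", "NAME", "WHAT"): "Hello, what is your name?"}
    return lookup.get(tuple(segment.glosses), " ".join(segment.glosses).lower())

def render_caption(text: str, anchor: str = "below_interlocutor") -> None:
    """Stub for the AR layer: place the translation in the wearer's view."""
    print(f"[AR caption @ {anchor}] {text}")

def run_pipeline(frames: list[bytes], target_lang: str = "en") -> None:
    """Capture -> recognise -> translate -> display, per conversational turn."""
    for segment in recognise_signs(frames):
        render_caption(translate_glosses(segment, target_lang))

if __name__ == "__main__":
    run_pipeline(frames=[])  # no real video; the stubs return canned output
```

One caveat the sketch glosses over, so to speak: reducing signed language to gloss sequences is a known simplification, and handling natural conversation, including turn-taking, overlap, and production of signing rather than only captions, is exactly the hard part the project is tackling.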

Prof. Annelies Kusters brings expertise in semiotic repertoire and calibration — how multilingual signers adapt in real time — and studies how AR changes the way people communicate in deaf–hearing interaction.
Dr Robert Adam leads cross-linguistic modelling and studies how AR supports BSL and JSL learning and education.
The outputs will be open: datasets, AI models, educational tools, and AR frameworks, all freely available to researchers and educators worldwide.
