Martín Curzio, co-founder and CEO of ELdeS, explores how AI can democratize sign language education and bridge communication gaps for the deaf community worldwide. He envisions a future of human-machine symbiosis in which AI experts, linguists, and motion-detection specialists collaborate to enhance inclusivity, potentially ending centuries of exclusion of deaf people from society.
In this report, we explore how artificial intelligence (AI) can be used to massify the teaching of, and access to, sign language (SL), breaking down the communication barriers faced by deaf people and enabling them to participate fully in society. There are 466 million deaf people in the world, and 13,200 deaf children are born each day (about one in every 29 births). From that moment on, each of them is at risk of social exclusion for lack of access to SL and of communication with hearing society.
Given this significant lack of inclusion, technology presents an unprecedented opportunity: massifying the teaching of, and access to, sign language through the use of AI. Platforms such as ELdeS are beginning to emerge in response, using AI to teach SL at scale, interactively, and at each student's own pace, giving learners real-time feedback on their signing as they follow tutorial videos. Localization is always important, but for SL it is indispensable: each country has its own sign language, rooted in the local culture.
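ELdeS has not published its feedback mechanism, but the general idea can be sketched: extract keypoints from the learner's camera, then compare that sequence against a reference sequence from the tutorial video with a speed-tolerant alignment such as dynamic time warping (DTW). Everything below, from the feature layout to the pass threshold, is an illustrative assumption, not the platform's actual implementation.

```python
# A minimal sketch (not ELdeS's actual system) of scoring a learner's signing:
# align the attempt's keypoint sequence to the tutorial's reference sequence
# with dynamic time warping, which tolerates differences in signing speed.
import numpy as np

def dtw_distance(ref: np.ndarray, attempt: np.ndarray) -> float:
    """DTW distance between two (frames, features) keypoint sequences."""
    n, m = len(ref), len(attempt)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(ref[i - 1] - attempt[j - 1])  # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m] / (n + m)  # normalize by an upper bound on path length

def feedback(ref: np.ndarray, attempt: np.ndarray, threshold: float = 0.15) -> str:
    """Map the DTW score to a simple message (threshold is purely illustrative)."""
    score = dtw_distance(ref, attempt)
    return "Well signed!" if score < threshold else "Close: watch the hand shape and try again."

# Demo with synthetic data: a smooth 63-feature trajectory (21 hand landmarks x 3 coords)
# stands in for a recorded sign; the attempt is the same motion, faster and noisier.
rng = np.random.default_rng(0)
reference = np.cumsum(rng.normal(0, 0.01, (40, 63)), axis=0)
attempt = reference[::2] + rng.normal(0, 0.01, (20, 63))
print(feedback(reference, attempt))  # -> "Well signed!"
```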
What will happen to traditional educational systems?
Collaborative Work vs. Competition
Sign language teachers will still be able to teach in the traditional way; these systems are here not to replace them but to reinforce and complement their work. Some teachers, however, will go further, teaching not only people but also AI systems, since SL is a living language in constant change, adaptation, and development. In this way, the technology creates new job opportunities for the deaf people who currently teach SL.
The impact of these technologies goes beyond education, benefiting society as a whole. In Uruguay, for example, ELdeS has driven exponential growth in the adoption of SL, tripling the number of people with access to it nationwide in a single year. This is due to the technology's ability to deliver teaching at scale and interactively, in both educational and business settings, raising awareness and appreciation of cultural and linguistic diversity. That, ultimately, is what makes us human: diversity and the ability to appreciate it.
Teaching with AI as the first step to interpretation with AI
Technology also opens the possibility of developing an AI engine capable of identifying complex sentences through Sign Language Processing (SLP), and thus of creating a real-time interpreter accessible to anyone with a cell phone, for example.
One strategy for getting there is to use SL educational platforms, already able to detect individual signs and basic sentences in various countries, as the foundation on which information about many sign languages (videos and photos labeled for visual-gestural modality, spatial coherence, and simultaneity) is collected and stored, creating the first database (DB) of sign languages from around the world.
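To make this concrete, here is a sketch of what a single record in such a DB might look like. The schema and field names are assumptions for illustration, not a published standard; they simply encode the annotations mentioned above (visual-gestural parameters, non-manual markers, simultaneity) alongside the video itself.

```python
# An illustrative record schema for a multi-country sign language database.
# Field names are assumptions; handshape/location/movement follow the classic
# phonological parameters used to describe signs.
from dataclasses import dataclass, field

@dataclass
class SignRecord:
    sign_language: str          # e.g. "LSU" (Uruguayan SL), "ASL", "LIS"
    gloss: str                  # conventional written label for the sign
    video_uri: str              # link to the recorded clip
    country: str
    handshape: str              # visual-gestural annotation
    location: str               # where in signing space the sign is articulated
    movement: str               # trajectory of the hands
    non_manual_markers: list[str] = field(default_factory=list)  # facial expression, body lean
    simultaneous_signs: list[str] = field(default_factory=list)  # signs articulated at the same time

# Example entry (hypothetical URI and annotations):
record = SignRecord(
    sign_language="LSU",
    gloss="VISITAR",
    video_uri="https://example.org/clips/lsu/visitar_001.mp4",
    country="Uruguay",
    handshape="flat-hand",
    location="neutral space",
    movement="arc toward addressee",
    non_manual_markers=["raised eyebrows"],
)
```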
However, the effective implementation of these technologies faces significant technical challenges: adapting to linguistic and cultural variation in SL, and achieving the precise understanding of context and non-verbal expression that accurate interpretation requires.
It must also be taken into account that the grammatical structure of a sign language can differ from that of the spoken language of the same country. In Uruguayan Sign Language (LSU), for example, the grammatical order of a sentence is "Pronoun + Object + Verb," while in Spanish it is "Pronoun + Verb + Object." Likewise, information such as verb tense or other morphemes is not carried by the sign for the verb or the object itself; it is conveyed through context and body position.
If we translated the sentence "Yesterday I visited my sister" without identifying these factors, the interpretation system would output "I sister visit." Furthermore, verbs like "to be" or "to use" are not signed within the grammatical structure of an LSU sentence, because they are already contained in the expression.
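The reordering and tense-recovery steps just described can be illustrated with a toy example. The gloss lexicon and conjugation table below are deliberately tiny and invented for illustration; a real system would learn these mappings from data rather than hard-code them.

```python
# Toy illustration (not a production translator) of two steps the text describes:
# 1) reorder LSU glosses (Pronoun + Object + Verb) into Spanish order
#    (Pronoun + Verb + Object), and
# 2) recover tense from a separate time marker such as AYER ("yesterday"),
#    since the verb sign itself carries no tense morpheme.
TIME_MARKERS = {"AYER": "past", "HOY": "present", "MAÑANA": "future"}  # illustrative
CONJUGATIONS = {("VISITAR", "past"): "visité"}  # tiny illustrative lexicon

def lsu_gloss_to_spanish(glosses: list[str]) -> str:
    tense = "present"
    content = []
    for g in glosses:
        if g in TIME_MARKERS:
            tense = TIME_MARKERS[g]  # tense comes from context, not the verb sign
        else:
            content.append(g)
    pronoun, obj, verb = content  # LSU order: Pronoun + Object + Verb
    conjugated = CONJUGATIONS.get((verb, tense), verb.lower())
    # Spanish order: Pronoun + Verb + Object (adverb realization omitted in this toy).
    return f"{pronoun.capitalize()} {conjugated} {obj.lower()}."

# "Yesterday I visited my sister" as LSU glosses:
print(lsu_gloss_to_spanish(["AYER", "YO", "MI-HERMANA", "VISITAR"]))
# -> "Yo visité mi-hermana."
```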
This means that, to work correctly, an AI interpreter based on SLP must be able both to recognize signs and to understand contextual variables, body position, and facial expressions as a whole.
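In code, "as a whole" means fusing several channels of keypoints per frame rather than tracking hands alone. The sketch below uses MediaPipe's Holistic landmark extractor (a real, commonly used library) to build one combined feature vector per frame; the sequence model that would consume these vectors, and everything downstream, is left as an assumption.

```python
# Sketch of multimodal feature extraction for SLP: hands (manual channel),
# body pose, and face (non-manual grammar) concatenated per video frame.
# Requires: pip install mediapipe numpy
import numpy as np
import mediapipe as mp

holistic = mp.solutions.holistic.Holistic(static_image_mode=False)

def frame_features(rgb_frame: np.ndarray) -> np.ndarray:
    """Concatenate hand, pose, and face landmarks for one RGB video frame."""
    results = holistic.process(rgb_frame)
    parts = []
    for landmarks, n_points in [
        (results.left_hand_landmarks, 21),   # manual channel
        (results.right_hand_landmarks, 21),  # manual channel
        (results.pose_landmarks, 33),        # body position and lean
        (results.face_landmarks, 468),       # facial expression (grammar carrier)
    ]:
        if landmarks is None:
            parts.append(np.zeros(n_points * 3))  # channel not visible in this frame
        else:
            parts.append(np.array([[p.x, p.y, p.z] for p in landmarks.landmark]).ravel())
    # One vector per frame; a sequence of these would feed a sequence model
    # (e.g., a transformer) trained to recognize signs in context.
    return np.concatenate(parts)
```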
Could SLP, Large Language Models (LLMs), and Machine Translation (MT) work together?
Continuing with the philosophy of collaborative work to achieve disruptive results, SLP could be combined with Machine Translation (MT) based on Large Language Models (LLMs), so that a deaf person not only stops being excluded from their own society but can also communicate with people anywhere in the world. Automatic translation would allow the signing of an American Sign Language user from the USA, for example, to be interpreted and instantly translated into Italian while they are traveling in Rome.
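The resulting pipeline is simply two stages chained together, as in the sketch below. Both functions are hypothetical placeholders with canned outputs; they stand in for the SLP engine and the LLM-based MT system discussed in this report, neither of which exists as a finished product today.

```python
# High-level sketch of the imagined pipeline: signed input -> text in the
# signer's spoken-language counterpart -> machine translation to any language.
# Both stages are placeholders, not real APIs.
def recognize_sign_language(video_frames, source_sl: str = "ASL") -> str:
    """Hypothetical SLP stage: signing -> text (here, a canned example)."""
    return "Where is the train station?"

def translate(text: str, source: str, target: str) -> str:
    """Hypothetical LLM-based MT stage (canned output for the demo)."""
    return {"it": "Dov'è la stazione dei treni?"}.get(target, text)

def sign_to_text(video_frames, source_sl: str, target_lang: str) -> str:
    english = recognize_sign_language(video_frames, source_sl=source_sl)  # stage 1: SLP
    return translate(english, source="en", target=target_lang)            # stage 2: MT

# A US traveler signing in Rome: ASL video in, Italian text out.
print(sign_to_text(video_frames=[], source_sl="ASL", target_lang="it"))
```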
This achievement would only be possible through interdisciplinary collaboration between AI experts, motion-detection specialists, MT engineers, and linguists.
While this joint, coordinated task would require several specialized actors or teams, let us not forget that it is something a sign language interpreter with knowledge of several languages can do alone. This highlights the capacity of humans, many of whom currently see themselves as "competing" against AI, when in reality, if efforts are combined, a human + machine (learning) symbiosis could achieve these results at scale, from any SL to any spoken language and vice versa.
If the factors above were taken into account and AI-based SL interpretation were achieved, not only could sign language be translated in real time into any language; sign language could also be used to enrich current LLM systems, and vice versa.
The symbiosis between humans and machines, through AI, has the potential to transform the lives of deaf people. Collaborative efforts among different actors are required to achieve a significant impact and build a more inclusive future for all.
In this way, thousands of years of excluding deaf people from society can come to an end, enabling them to be part of the globalized world and not just of their own community.
Photo Credit: Dynamic Wang – Unsplash
Martín Curzio
TEDx Speaker | Co-founder & CEO of ELdeS | Strategic Alliances
Martín Curzio, a 36-year-old architect, is the co-founder and CEO of ELdeS, which he founded in 2021 with his brother Fabián: the world's first platform to teach sign language at scale and interactively thanks to AI. Both brothers have low vision, so they know firsthand what it is like to live with daily limitations that not everyone faces. ELdeS launched in Uruguay in March 2023 and has already been declared of educational and ministerial interest by ANEP, MEC, and MTSS. It launched in Argentina in October 2023, endorsed by the CUI, UBA, UTN.BA, and Señas de Comunicación. ELdeS aims to scale across the rest of LATAM and Europe in 2024-2025.