- My background is in physics and mathematics, but I have been programming for over 5 years, specializing in Python. Over the past year and a half, I have been getting into data science and machine learning through self-study and third-party courses such as Udacity’s Nanodegrees.
I want to research how to apply natural language processing to American Sign Language (ASL) and other sign languages. Drawing on my knowledge of ASL and machine learning, I have started my own research in this area (www.github.com/MrGeislinger/UdacityMLND_Capstone) and discovered that very little research has been done on this subject.
ASL is not a one-to-one translation of English; because of its visual nature, it has its own syntax and subtleties, such as grammar and vocabulary that depend on the spatial position of signs. I have done work with still images, but applying deep learning to video clips and using NLP techniques to better translate this visual language into a written language, such as English, would be extremely valuable.
This topic is highly applicable to the UN’s Sustainable Development Goals: this research can give Deaf individuals and communities a better way to interact with the general population without relying on an interpreter, a limited resource, especially in impoverished areas. It can give Deaf people around the world greater access to education and other essential services in their native language.