“Handsfree interfaces for low-powered devices using monocular RGB webcams.” A fallback would be “Emotion-based web analytics”.
Unfortunately, many people with disabilities are also low-income and can’t afford devices capable of running inference on live feeds (actually, I suspect this is the case for most of humanity). This means they must rely on specialized software and hardware, and need a human assistant to set them up.
I’ve built a no-download, no-install, no-setup solution (https://browsehandsfree.com) around a web proxy. Because all web traffic goes through my site, and because we can collect inference data (all opt-in), we can do things no other company — including Google — is currently able to do:
– Emotion-based analytics (what a visitor’s facial expressions were as they browsed the site)
– Predictive, gesture-less controls
– So much more… we essentially have labeled video paired with every interaction a user takes on every website they visit
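To give a sense of what gesture-less control involves: below is a minimal sketch of mapping a tracked face landmark (e.g. the nose tip, as reported in normalized coordinates by in-browser models such as MediaPipe Face Mesh) to a smoothed on-screen cursor. This is purely illustrative — the `HeadCursor` class, its smoothing and gain parameters, and the landmark source are my assumptions, not browsehandsfree.com’s actual implementation, which runs in the browser.

```python
from dataclasses import dataclass

@dataclass
class HeadCursor:
    """Maps a normalized (x, y) face landmark in [0, 1] to screen pixels.

    Hypothetical sketch: assumes landmarks come from an in-browser face
    tracker. Parameter values are illustrative, not tuned.
    """
    screen_w: int
    screen_h: int
    alpha: float = 0.3   # EMA smoothing factor: lower = smoother but laggier
    gain: float = 2.0    # amplifies small head movements around frame center
    _sx: float = 0.5     # smoothed landmark state, starts at frame center
    _sy: float = 0.5

    def update(self, lx: float, ly: float) -> tuple[int, int]:
        # Exponential moving average suppresses per-frame landmark jitter.
        self._sx += self.alpha * (lx - self._sx)
        self._sy += self.alpha * (ly - self._sy)
        # Amplify displacement from the frame center, then clamp to [0, 1].
        x = min(max(0.5 + self.gain * (self._sx - 0.5), 0.0), 1.0)
        y = min(max(0.5 + self.gain * (self._sy - 0.5), 0.0), 1.0)
        return round(x * (self.screen_w - 1)), round(y * (self.screen_h - 1))
```

The smoothing matters in practice: raw webcam landmarks jitter by a few pixels per frame, which would make a cursor unusable without filtering, and the gain lets small, comfortable head movements cover the whole screen.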
It’s hard to express just how transformative handsfree interfaces can be. Not only would they enable tens of millions of people with disabilities around the world to access the web FULLY (both consuming and creating content), but they would also open up novel touchless applications that can be built using standard web technologies, including:
– School and museum exhibits
– Industry (using the web while wearing heavy gloves)
– Using your phone when it’s cold out and you have gloves on
– Cooking, gardening, and cleaning without getting your device dirty
– …essentially anything else you can do on a phone with one finger