Cornell University in the United States has created a pair of AI-enabled glasses that can recognize commands a user silently mouths to their smartphone, simply by tracking the wearer's lip and mouth movements. The device, developed by Cornell's SciFi (Smart Computer Interfaces for Future Interactions) lab, is designed to let people unlock and control a smartphone in any situation, whether the environment is too noisy (a stadium or a nightclub) or demands silence (a library).
There is no need to speak a command aloud: the wearer simply mouths the phrase, and the glasses act as a relay to the smartphone. In practice, that means silently mouthing the phone's access code, or playlist controls such as "louder," "forward," or "stop."
As built, the prototype is compact and, crucially, draws very little power. The EchoSpeech glasses work by emitting sound waves across the face and receiving their echoes, detecting even the slightest movement of the lips and mouth.
From the shape of the echo profile that comes back, the artificial intelligence can identify the command being issued. After only a few minutes of user training, it can recognize roughly 30 commands and digits and carry them out on the smartphone.
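To make the idea concrete, here is a minimal sketch of how echo profiles could be matched to commands. Everything in it is a simplifying assumption: the profiles are invented toy feature vectors, the command names are placeholders, and a nearest-centroid rule stands in for the deep-learning model the EchoSpeech researchers actually trained.

```python
import math

def distance(a, b):
    """Euclidean distance between two echo-profile feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train(samples):
    """Average each command's example profiles into a centroid.

    samples: dict mapping a command name to a list of profile vectors,
    e.g. collected during a few minutes of user training.
    """
    centroids = {}
    for command, profiles in samples.items():
        n = len(profiles)
        centroids[command] = [sum(col) / n for col in zip(*profiles)]
    return centroids

def recognize(profile, centroids):
    """Return the command whose centroid is closest to the new profile."""
    return min(centroids, key=lambda c: distance(profile, centroids[c]))

# Toy training data: two hypothetical commands with made-up echo profiles.
training = {
    "louder": [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]],
    "stop":   [[0.1, 0.9, 0.8], [0.2, 0.8, 0.9]],
}
model = train(training)
print(recognize([0.85, 0.15, 0.15], model))  # closest to "louder"
```

The per-user training step in the sketch mirrors why only a few minutes of examples are needed: the system is adapting a small set of reference profiles to one wearer's mouth movements, not learning speech from scratch.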
Although still a prototype, the project could eventually be commercialized. It could help people with speech impairments and, paired with a voice synthesizer, could give a voice to those who cannot otherwise express themselves.