CES 2020

OrCam Unveils Wearable Personal Assistant, New Technologies for People With Disabilities

With new natural language processing abilities and other AI technologies, OrCam’s devices help vision- and hearing-impaired people navigate day-to-day life with greater ease

Lilach Baumer, Las Vegas, 13:17, 09.01.20
Humans have already trained artificial intelligence technologies to see, hear, speak, and think. Israeli company OrCam Technologies Ltd. is leveraging AI to help vision- and hearing-impaired people navigate their daily lives with greater ease. On Tuesday, at this year’s Consumer Electronics Show (CES) in Las Vegas, OrCam presented some of its newest advances: a wearable device that uses rhythmic beeping to help people orient themselves in their surroundings without relying on eyesight; an artificial intelligence-based personal assistant that can help visually impaired people order from a menu, sort through documents, or pull a number out of an entire page of text; and a device that integrates with a standard hearing aid to help hearing-impaired people follow a conversation in a loud, packed room.

OrCam’s flagship device is the MyEye 2, a finger-sized device fitted with a camera and a microphone that, when clipped onto glasses, reads printed and digital text aloud to the person wearing it. The device, which can also recognize faces and differentiate between banknotes, among other functions, was recently included in Time Magazine’s list of the 100 best inventions of 2019.

Mobileye co-founder Amnon Shashua. Photo: Mobileye PR

On Tuesday, OrCam unveiled new AI capabilities for the device, which effectively turn it into a personal assistant. More than just a camera that reacts to movement, the new and improved MyEye can interact with the user via voice commands. “Much like they would with Apple’s Siri, the new abilities allow users to communicate with the system to extract information from their surroundings,” OrCam co-founder Amnon Shashua told Calcalist in a Tuesday interview.

“If I’m holding the phone bill in my hand, I’m not interested in the device reading out the entire thing—I want to know the bottom line,” Shashua said. “I can command the device to read out the highlighted segment of a certain text, or scan it for a specific word. If I open the menu at a restaurant, I can command it to only read out the appetizer list, or just the meat dishes,” he explained.
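
To make the idea concrete, here is a hypothetical sketch of query-driven reading over OCR’d text, along the lines Shashua describes: return only the lines matching a spoken keyword, or only the items under a requested menu section. None of this is OrCam’s actual implementation; the function names and the all-caps heading convention are invented for the example:

```python
# Hypothetical sketch of query-driven reading over OCR'd text. This is an
# illustration of the concept only, not OrCam's implementation; the
# function names and the heading convention are invented for the example.

def read_matching_lines(ocr_text: str, keyword: str) -> list[str]:
    """Return only the OCR'd lines containing the requested keyword."""
    return [line for line in ocr_text.splitlines()
            if keyword.lower() in line.lower()]

def read_section(ocr_text: str, heading: str) -> list[str]:
    """Return the lines under a heading, stopping at the next heading
    (headings are assumed to be all-caps lines in this toy format)."""
    out, in_section = [], False
    for line in ocr_text.splitlines():
        stripped = line.strip()
        if stripped.lower() == heading.lower():
            in_section = True
        elif in_section and stripped.isupper():
            break
        elif in_section and stripped:
            out.append(stripped)
    return out

menu = """APPETIZERS
Soup of the day 6
Bruschetta 8
MAINS
Grilled salmon 21
Rib-eye steak 29"""

print(read_section(menu, "appetizers"))    # ['Soup of the day 6', 'Bruschetta 8']
print(read_matching_lines(menu, "steak"))  # ['Rib-eye steak 29']
```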

Right now, the device’s new functions are only available in English, but the company intends to add support for additional languages by the end of the year.

The most significant technological leap, according to Shashua, is the addition of natural language processing on top of the device’s existing machine vision capabilities. The second is orientation in space: OrCam’s most recent MyEye model beeps faster when it recognizes that the wearer is approaching a door, or emits rhythmic beeps that can guide the wearer toward an object, say, a glass of water on a table. Right now, the device has a library of 15 objects it can recognize, and Shashua said it is expected to grow.
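
The article does not detail how the beeping maps to distance; as a purely illustrative sketch, this kind of proximity feedback can be as simple as shortening the pause between beeps as the estimated distance to the target shrinks. All thresholds and numbers below are invented for the example, not OrCam’s:

```python
# Illustrative proximity-to-beep-rate mapping; the thresholds and the
# linear interpolation are invented for this sketch, not OrCam's values.

def beep_interval_s(distance_m: float,
                    near_m: float = 0.3, far_m: float = 3.0,
                    fastest_s: float = 0.1, slowest_s: float = 1.0) -> float:
    """Map an estimated distance to the pause between beeps: the closer
    the target (a door, a glass on a table), the faster the beeping."""
    # Clamp the distance into the [near, far] guidance range.
    d = max(near_m, min(far_m, distance_m))
    # Linearly interpolate: near -> fastest beeping, far -> slowest.
    t = (d - near_m) / (far_m - near_m)
    return fastest_s + t * (slowest_s - fastest_s)

for d in (3.0, 1.5, 0.5, 0.3):
    print(f"{d:.1f} m -> beep every {beep_interval_s(d):.2f} s")
```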

“We took all of the device’s existing abilities: facial recognition, barcode reading, object identification, and added verbal interaction with the device,” Shashua said.

The device, which increases the independence of vision- and hearing-impaired individuals, does not require an internet connection or cloud computing, Shashua said. “It has to work in real time. Camera, computing, and a lot of AI algorithms make this a reality.”

But it is the new product OrCam unveiled at the event, OrCam Hear, that is the real star of this show. OrCam Hear aims to solve a conundrum that has perplexed scientists for decades: the cocktail party problem. According to Shashua, this is the situation in which you are talking to someone in a noisy room where countless other conversations are taking place simultaneously. Our brain is very sophisticated, Shashua said: it allows us to focus on the conversation and tune out the surrounding noise, but hearing aids cannot do that. Instead, they amplify all the voices indiscriminately, and the effect is sheer cacophony.

According to him, the device represents a real technological breakthrough. “The ability to combine video, the acoustic wave, see the lip movements—we’ve been working on it for two years now, and we’ve been able to deliver a product that will be released to the market by mid-2020.”

The small device hangs like a locket around the user’s neck. When the user looks at their conversation partner, the device communicates with the earpiece via Bluetooth, isolating that person’s voice based on their lip movements. The device is far from complete, as voice is still transmitted with a delay, but the hardware works and only the final tuning remains, according to Shashua. Before it hits the market, the company intends to add two more levels of voice isolation: one based on sound signature samples, and another that simply reduces background noise.
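
As a loose sketch of the underlying idea, and assuming an upstream vision model that scores how actively the watched speaker’s lips are moving in each short audio frame, the target voice can be favored by gating the audio with that score. This toy illustration is not OrCam Hear’s actual signal processing:

```python
import numpy as np

# Toy illustration of gating audio by a visual lip-activity score.
# Assumes some upstream vision model supplies lip_activity in [0, 1]
# per 20 ms audio frame; everything here is invented for the sketch.

def gate_by_lip_activity(frames: np.ndarray,
                         lip_activity: np.ndarray,
                         floor: float = 0.1) -> np.ndarray:
    """Attenuate audio frames when the watched speaker's lips are still.

    frames: (n_frames, frame_len) audio samples
    lip_activity: (n_frames,) score in [0, 1] from the camera
    floor: residual gain so the scene never goes fully silent
    """
    gains = floor + (1.0 - floor) * np.clip(lip_activity, 0.0, 1.0)
    return frames * gains[:, None]

# Example: 5 frames of noise; the speaker "talks" only in frames 1-3.
rng = np.random.default_rng(0)
audio = rng.normal(size=(5, 320))           # 5 x 20 ms frames at 16 kHz
lips = np.array([0.0, 0.9, 1.0, 0.8, 0.0])  # per-frame lip activity
out = gate_by_lip_activity(audio, lips)
print(np.abs(out).mean(axis=1).round(3))    # quiet, loud, loud, loud, quiet
```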

The device was named a Best of Innovation honoree in the CES 2020 Innovation Awards. “The fact that it has been getting attention shows that there is a real need,” Shashua said.

OrCam came to CES to establish partnerships, Shashua said. As a first stage, the company wants to connect with hearing aid companies that might offer its technology as an add-on.

From day one, OrCam’s goal has been AI as a companion, Shashua said. “In the end, every person would want a little device that can hear and see and think.” After tackling aids for vision and hearing impairments, the company is now working on an aid for professionals whose everyday work requires writing reports after interacting with people, such as physicians working in a hospital.

“Doctors talk to patients and then have to write a report, and research shows that about 40% of physicians’ time is spent in front of the computer,” Shashua said. “If there is a device that can recognize the people who are interacting, what the doctor is saying and what the patient is saying, and communicates it to a server on my computer, then most of the work is already done,” he said, adding that a process like this could save as much as 90% of the time invested in writing reports. “It leaves doctors more time to care for patients, and there is a very large need for that.”
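
As a back-of-the-envelope sketch of what the output stage of such a pipeline might look like, assuming some upstream system has already separated who said what, a diarized transcript can be grouped into a rough draft for the physician to edit. The turn format and section rules below are hypothetical, not OrCam’s design:

```python
# Hypothetical sketch: turning diarized doctor-patient turns into a draft
# note a physician could then edit. The turn format and section rules are
# invented; the article does not describe OrCam's actual pipeline.

def draft_note(turns: list[tuple[str, str]]) -> str:
    """Group a diarized transcript into a rough two-section draft."""
    complaints = [text for speaker, text in turns if speaker == "patient"]
    remarks = [text for speaker, text in turns if speaker == "doctor"]
    return ("Patient reported:\n- " + "\n- ".join(complaints) +
            "\n\nClinician noted:\n- " + "\n- ".join(remarks))

turns = [
    ("patient", "Persistent cough for two weeks"),
    ("doctor", "Lungs clear on auscultation"),
    ("patient", "Mild fever in the evenings"),
    ("doctor", "Ordered a chest X-ray"),
]
print(draft_note(turns))
```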

The author was a guest of Intel subsidiary Mobileye at CES. OrCam was founded by Mobileye co-founders Ziv Aviram and Amnon Shashua.