Assistive Wearable Bone Conduction Headset: Providing Solutions to People with Visual Impairments

AISee is a device that assists people with visual impairments, for whom even simple tasks, such as grocery shopping, can pose real difficulty.

Existing tools only go so far: image recognition engines, for example, require both a clear shot of the targeted object and a contextual understanding of the information the user needs.

Through iterative design, the AISee team tailored a truly wearable assistive device, in the shape of a bone-conduction headset, to meet the target group’s needs. In a controlled evaluation, the prototype achieved an 80% success rate at recognizing grocery items for users.

Encouragingly, AISee has potential implications not just for people with visual impairments (PVI) but also for other populations who face challenges navigating their surroundings.

How does AISee work?

AISee is a prototype designed to make the world more accessible for people who are blind or have low vision. It combines:

  • a micro-camera;
  • cloud AI algorithms;
  • and bone-conduction headphone technology to help people with visual impairments perceive what they would normally be unable to see.

The prototype was developed by a team from the Augmented Human Lab with $100,000 in funding from Google’s Launchpad Accelerator program, which helps early-stage startups accelerate their growth.

AISee’s micro-camera captures whatever the wearer points at, while bone-conduction technology delivers audio feedback without blocking their ears. Cloud AI algorithms, trained on images like those the camera captures, tell users what they are looking at; because the ears stay uncovered, users can still hear their surroundings whenever necessary or preferred.
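The end-to-end path can be pictured as a short capture–recognize–speak pipeline. The sketch below is illustrative only: the endpoint URL, the JSON request format, and the `recognize`/`speak` helpers are assumptions, since AISee’s actual cloud API is not public.

```python
import base64
import json
from urllib import request

CLOUD_ENDPOINT = "https://example.invalid/recognize"  # placeholder, not a real API


def recognize(image_jpeg: bytes) -> str:
    """Send one camera frame to a hypothetical cloud recognition service."""
    payload = json.dumps(
        {"image": base64.b64encode(image_jpeg).decode("ascii")}
    ).encode("utf-8")
    req = request.Request(
        CLOUD_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["label"]


def speak(text: str) -> None:
    """Deliver the label through the bone-conduction headphones (stubbed)."""
    print(f"[audio] {text}")
```

Offloading recognition to the cloud keeps the on-device workload small, which matters for a wearable that spends most of its time in low-power standby.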

An architectural overview

The overall interaction flow is similar to that of other image recognition systems, such as phone applications or wearable devices. The system stays in standby mode to save power and awaits the user’s initiation of recognition. Once activated, it captures an image, determines the user’s region of interest based on where they point, and describes the features in that region through audio to help the user find what they are looking for.
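As a rough illustration of that flow, here is a minimal event loop in Python. Every `device` method (`wait_for_trigger`, `capture_image`, `crop_to_pointing_region`, `recognize_in_cloud`, `speak`) is a hypothetical stand-in for hardware and cloud calls not described here.

```python
from enum import Enum, auto


class State(Enum):
    STANDBY = auto()  # low-power idle, waiting for the user
    ACTIVE = auto()   # one capture-recognize-speak cycle


def run_loop(device) -> None:
    """Event loop matching the flow above: idle cheaply, then capture,
    crop to the pointed-at region, recognize, and speak the result."""
    state = State.STANDBY
    while True:
        if state is State.STANDBY:
            # Remain in standby to save power until the user initiates recognition.
            if device.wait_for_trigger():
                state = State.ACTIVE
        else:
            frame = device.capture_image()
            # Narrow the frame to the region the user is pointing at.
            roi = device.crop_to_pointing_region(frame)
            label = device.recognize_in_cloud(roi)  # cloud AI inference
            device.speak(label)                     # bone-conduction audio out
            state = State.STANDBY
```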
