Spatial Mapping Research at Tufts IDEA Lab

Selected to present at the US Volpe Center’s 2024 NEC HFES Conference

My Roles:

Human Factors Engineering Lead

Research Assistant

Skills:

Assistive Technology

Engineering Management

Agile Development

Tools:

SOLIDWORKS (3D CAD)

Qualtrics XM

Usability & Concept Testing

User Interviews

Timeline:

Sept 2023 - Present

Enhancing Indoor Navigation for the Visually Impaired with Sensor-Based Assistive Technology

Abstract: Assistive technology for people with low vision or blindness primarily focuses on outdoor navigation, relying on tools such as service animals, white canes, or sensory perception. Recent research integrates motion and distance sensors such as IMUs and LiDAR into digital assistive devices. Our research aims to develop a sensor-based assistive device for indoor navigation, enhancing independence and safety for individuals with visual impairments. Initial interviews identified indoor use cases and pain points. We propose generating 3D distance data from 2D LiDAR, ultrasonic, and IMU sensor readings in Phase 1, then translating that data into user-friendly signals in Phase 2, with a focus on spatialized audio and haptic feedback. We have developed three prototypes with different sensor placements: gloves, paddles, and a pin. Initial tests demonstrate the device's ability to detect objects and provide real-time feedback. Although still in the prototype phase, our work paves the way for a user-centric, adaptable assistive device that can improve the lives of individuals with visual impairments.

The Challenge

  1. 77 million people globally live with untreatable visual impairments

  2. Assistive technologies, including service animals and white canes, focus largely on detecting obstacles at foot level

  3. Up to 30% of blind individuals prefer echolocation for navigating their environments

“How might we improve the independence and safety of individuals with visual impairments as they navigate indoor environments?”

Discovery

We conducted discovery interviews with individuals who have visual impairments and with professionals working in the audio accessibility space. These conversations led us to explore Microsoft Soundscape and Blind Match Racing as models for improving audio-based assistive navigation.

Live-updating LiDAR feedback responding to changes in the indoor environment

Next Steps:

  • Seeking IRB approval for a usability study with the Perkins School for the Blind

  • Collaborating with music engineers to define a library of spatialized audio cues

  • Reducing end-to-end system latency to 75 milliseconds

Reflections:

Importance of Co-Creation

Throughout this design process, I’ve learned that making assumptions about users' needs without their direct input can lead to solutions that fall short of their true requirements. Engaging users through co-creation has been a transformative experience, revealing critical insights and uncovering new solution spaces that we might otherwise have missed, such as indoor navigation.

Balancing Interdisciplinary Collaboration

Balancing input from audio engineers, accessibility professionals, and our visually impaired users required careful evaluation to determine which feedback elements would most effectively enhance our design. This process has sharpened my ability to synthesize diverse perspectives, applying the most relevant and impactful insights to create a cohesive, effective solution.

Our Approach

Given our users' preference for audio-based information, we developed a framework that aligns the device's capabilities with the natural ways visually impaired individuals perceive and interact with their surroundings. By centering audio as the primary mode of information delivery, we created a responsive, intuitive experience that supports real-time spatial decision-making and navigation, boosting users' confidence and independence, especially in indoor environments where other aids fall short of providing spatial guidance.
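One simple way to render an obstacle's bearing as audio is constant-power stereo panning. This is only an illustrative sketch, not our actual cue design (the spatialized audio library is still being defined with music engineers); the function name and bearing convention here are our own for the example.

```python
import math

def stereo_gains(bearing_deg):
    """Constant-power stereo pan for an obstacle bearing.

    bearing_deg: -90 (hard left) .. 0 (dead ahead) .. +90 (hard right).
    Returns (left_gain, right_gain); the squared gains sum to 1, so
    perceived loudness stays constant as the cue moves across the field.
    """
    b = max(-90.0, min(90.0, bearing_deg))      # clamp to the front hemisphere
    theta = (b + 90.0) / 180.0 * (math.pi / 2)  # map to 0..pi/2
    return math.cos(theta), math.sin(theta)

# An obstacle dead ahead plays equally in both ears:
# stereo_gains(0) -> (~0.707, ~0.707)
```

Constant-power (rather than linear) panning is the standard choice because it avoids the loudness dip at center that linear crossfades produce.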

Progress to Date

We have successfully completed Phase 1, where we focused on creating accurate 3D distance data by integrating spatial information from 2D LiDAR, ultrasonic, and IMU sensors. This achievement allowed us to not only refine the spatial mapping process but also to develop a functional prototype for our wearable device.
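The core geometric idea behind Phase 1 can be sketched as follows. This is a minimal illustration under assumed conventions, not our production pipeline: it supposes the 2D LiDAR reports (range, in-plane angle) pairs and the IMU supplies the scan plane's pitch, and projects each return into 3D by rotating the scan plane about the device's lateral axis.

```python
import math

def lidar_return_to_3d(range_m, scan_angle_rad, pitch_rad):
    """Project one 2D LiDAR return into 3D using IMU-reported pitch.

    A 2D LiDAR only measures points within its scan plane; as the wearer
    tilts the device, the IMU-tracked pitch sweeps that plane through
    space, yielding 3D distance data from 2D scans.
    """
    # Point in the scan plane: x forward, y to the left (assumed frame).
    x = range_m * math.cos(scan_angle_rad)
    y = range_m * math.sin(scan_angle_rad)
    # Rotate the scan plane about the lateral (y) axis by the pitch angle.
    return (x * math.cos(pitch_rad),   # forward
            y,                         # lateral
            -x * math.sin(pitch_rad))  # vertical (positive pitch tilts down)

# A level scan (pitch = 0) keeps every point in the horizontal plane:
# lidar_return_to_3d(2.0, 0.0, 0.0) -> (2.0, 0.0, 0.0)
```

In practice ultrasonic readings and full IMU orientation (roll and yaw as well as pitch) would be fused on top of this, but the single-axis rotation captures how 2D sensors combine into 3D spatial data.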

Next

TEDxTufts Graphic Design