A GPU-Accelerated Real-Time Contextual Awareness Application for the Visually Impaired on Google’s Project Tango Device

Rabia Jafri, The Journal of Supercomputing, Springer, 2017, Volume 73, Issue 2, pp. 887-899.

Abstract: An application is presented for the recently introduced Google Project Tango Tablet Development Kit that assists visually impaired (VI) users in understanding their environmental context by identifying and locating multiple faces and objects in their vicinity in real time. CUDA-based GPU-accelerated algorithms would detect and recognize faces and objects in the visual data, while the locations of these entities relative to the user would be estimated from the depth data acquired by the tablet. Interaction would be speech-based, with the user offered several options for requesting information about the identities and/or relative locations of faces and objects. The aim is to create a portable, affordable, power-efficient, standalone assistive application that runs in real time on the device itself and increases the autonomy of VI users.
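
The abstract only outlines the pipeline at a high level. As a rough illustration of the kind of processing it describes, the C++ sketch below combines GPU face detection via OpenCV's cuda::CascadeClassifier with a per-pixel depth lookup to estimate how far each detected face is from the user. The cascade file, the input frame, and the constant-valued depth map are assumptions for illustration only, not artifacts of the paper; on the Tango device the frame and depth data would come from the tablet's camera and depth sensor.

```cpp
// Sketch only: GPU-accelerated face detection (OpenCV CUDA cascade) paired with a
// depth lookup to estimate each face's distance from the user.
// The cascade path, input frame, and depth map below are placeholder assumptions.
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/cudaobjdetect.hpp>
#include <cstdio>
#include <vector>

int main() {
    // Load a Haar cascade onto the GPU (file path is an assumption).
    cv::Ptr<cv::cuda::CascadeClassifier> detector =
        cv::cuda::CascadeClassifier::create("haarcascade_frontalface_default.xml");

    // Placeholder inputs: a grayscale camera frame and a depth map in meters.
    cv::Mat grayFrame = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
    cv::Mat depthMap(grayFrame.size(), CV_32F, cv::Scalar(2.0f));  // constant depth for illustration

    // Upload the frame and run detection entirely on the GPU.
    cv::cuda::GpuMat gpuFrame(grayFrame), gpuFaces;
    detector->detectMultiScale(gpuFrame, gpuFaces);

    // Download the detections and pair each face with the depth at its center.
    std::vector<cv::Rect> faces;
    detector->convert(gpuFaces, faces);
    for (const cv::Rect& r : faces) {
        cv::Point center(r.x + r.width / 2, r.y + r.height / 2);
        float distance = depthMap.at<float>(center);
        std::printf("Face at (%d, %d), approx. %.1f m away\n", center.x, center.y, distance);
    }
    return 0;
}
```

In a full application, the reported pixel location and distance would then be mapped to a user-relative direction (e.g., left/right and distance) and conveyed through the speech interface described in the abstract.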