Friday 13 June 2008

NavTap: a navigational text-entry model for blind users

Mobile devices play an important role in modern society. Their functionality goes beyond basic communication, encompassing a large set of productivity and leisure applications. Interaction with these devices is highly visually demanding, which prevents blind users from operating them effectively. In particular, text entry, a task common to many mobile applications, is difficult to accomplish as it relies on visual feedback from both the keypad and the screen. Although there are specialized solutions to overcome this problem, they fall short: hardware solutions are unsuitable for a mobile context, and software approaches are adaptations that remain ineffective, hard to learn and error-prone.

The main obstacle for a blind user operating a regular mobile device is the need to memorize the position of each letter. To circumvent the lack of visual feedback, both output and input must be offered through the channels that remain available. It is important to note that these channels, such as touch and hearing, are often highly developed in blind users, who are likely to perform as well as, or even better than, fully sighted users when the interaction is based on those senses. By adapting the interaction process we minimize stressful scenarios and encourage learning.

The NavTap text-entry method allows the user to navigate through the alphabet using the mobile phone keypad. The alphabet was divided into five rows, each starting with a different vowel, as vowels are easy to remember. Using the tactile mark on key ‘5’ as a reference point, keys ‘2’, ‘4’, ‘6’ and ‘8’ form a directional cursor on the keypad. Keys ‘4’ and ‘6’ let the user navigate horizontally through the letters, while keys ‘2’ and ‘8’ let the user jump between the vowels, turning them into anchor points in the alphabet. Both navigations (vertical and horizontal) are cyclical, which means that the user can go, for instance, from the letter 'z' to the letter 'a', and from the vowel 'u' to 'a'.
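As a minimal sketch of the navigation model just described (not the original implementation), the following Python snippet keeps a cursor over the alphabet and moves it with the four navigation keys; the class and method names, and the assumption that the cursor starts at 'a', are illustrative only.

    ALPHABET = "abcdefghijklmnopqrstuvwxyz"
    VOWELS = "aeiou"

    class NavTapCursor:
        def __init__(self):
            self.pos = 0  # assumed starting letter: 'a'

        def move_horizontal(self, step):
            """Keys '4' (step=-1) and '6' (step=+1): previous/next letter, wrapping from 'z' to 'a'."""
            self.pos = (self.pos + step) % len(ALPHABET)
            return ALPHABET[self.pos]

        def move_vertical(self, step):
            """Keys '2' (step=-1) and '8' (step=+1): previous/next vowel, wrapping from 'u' to 'a'."""
            # Find the vowel that starts the current row, then jump to the neighbouring vowel.
            row = max(i for i, v in enumerate(VOWELS) if ALPHABET[self.pos] >= v)
            self.pos = ALPHABET.index(VOWELS[(row + step) % len(VOWELS)])
            return ALPHABET[self.pos]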


Navigation scenarios for the letter 't'

Key ‘5’ enters a space or other special characters and key ‘7’ erases the last character entered. This method drastically reduces the memorization requirements and, therefore, the cognitive load. In the worst-case scenario, where the user does not have a good mental map of the alphabet, they can simply navigate forward until they hear the desired letter. There are no wrong buttons, just shorter paths. Blind users can rely on audio feedback before accepting any letter, increasing text-entry success and the motivation to improve writing skills.
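To make the "shorter paths" idea concrete (and mirroring the navigation scenarios for the letter 't' illustrated above), here is a hypothetical use of the NavTapCursor sketch showing two possible paths to reach 't'; the starting position 'a' is an assumption.

    c = NavTapCursor()
    for _ in range(3):
        c.move_vertical(+1)         # key '8' three times: a -> e -> i -> o
    for _ in range(5):
        letter = c.move_horizontal(+1)  # key '6' five times: o -> p -> q -> r -> s -> t
    print(letter)  # 't'

    # A shorter path: jump up once (wrapping from 'a' to 'u') and step back one letter.
    c2 = NavTapCursor()
    c2.move_vertical(-1)            # key '2': a wraps to u
    print(c2.move_horizontal(-1))   # key '4': u -> t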




(See a blind user operating the system [in Portuguese])

Text-entry interfaces that consider the users’ needs and capabilities are likely to ease the first contact and allow performance to improve. Regarding text input for blind users, our results showed that, when the cognitive load is removed and the users are presented with simpler, user-centered interfaces, success is achieved: the first contact has a small error rate and the learning curve is steep. It is therefore possible to offer blind users effective interfaces that require no extra hardware and that can be used by a wide range of users, even those with no previous experience with mobile devices.

Blind user testing the system

If you are interested in more detail, particularly about the user studies, you can take a look at our publications on NavTap:

Tiago Guerreiro, Paulo Lagoá, Pedro Santana, Daniel Gonçalves, Joaquim Jorge, NavTap and BrailleTap: Non-visual Input Interfaces, RESNA 2008 - Rehabilitation Engineering and Assistive Technology Society of North America Conference

Paulo Lagoá, Pedro Santana, Tiago Guerreiro, Daniel Gonçalves, Joaquim Jorge, Blono: a New Mobile Text-entry Interface for the Visually Impaired, Springer Lecture Notes in Computer Science, Universal Access in HCI Part II, HCII 2007, LNCS 4555, pp. 908–917, Beijing, China, July 2007


Check the presentation I gave in China, at HCII 2007, on this new text-entry method:



(This presentation was featured on the SlideShare main page, which makes me proud. Thank you, Garr Reynolds, for your insights.)


Credits:
Paulo Lagoá (Developer)
Pedro Santana (Developer)
Tiago Guerreiro (Developer Team Leader)
Joaquim Jorge (Adviser)

Check back soon for updates on text entry for blind users on touch-screen-based mobile devices.

Any questions or comments are welcome.
