... well. No, son! We still have a long way to go. Indeed, accessibility and assistive technologies are really just newborns. With no disregard for the excellent advances made in the last decades, they still face several restrictions.
A few months ago I gave a presentation on Assistive Technologies for Spinal Cord Injuries. There are several solutions available for different impairments, each with advantages and some disadvantages as well. See it below:
Two years ago I would look at these approaches, be marvelled, and innocently ask: are we there yet? What is missing for a tetraplegic person to operate a computer (obviously this question does not consider the other losses tetraplegic users experience)?
Well, after being around several persons with tetraplegia, I came to find that we are still at the beginning and almost everything is still missing. The approaches, although valuable and suitable for particular situations, are rarely or never complete. They are fragmentary. As an example, consider a scenario where a tetraplegic person uses an eye-tracker to operate a computer. We have seen eye-trackers, and they are amazing. The user can sit in front of the computer and control the mouse pointer and, with a little training, control any other application. Often, an integrated solution can also offer access to household appliances and enable the user to control the surrounding environment. So, once again, why do I argue that we are so far away?
The answer relates to the ability to achieve the required interaction conditions. Looking at the same example, the user needs to be in front of the computer, and to be somehow restricted to that position. Well, I am a friend of some tetraplegic persons and they spend more than half their daily hours in bed... and those who have an assistive technology that assists them while in the chair are not able to operate it in bed. We are talking about part-time accessibility. Moreover, we are also talking about conditioned accessibility. The users still require a great deal of help from a third party... and I am not just talking about the initial setup. I am also not aiming at total independence (although that would be great), but a little liberty to choose is required.
This is my research focus and what I aim to achieve. Nowadays mobile devices play a weak role in assistive technologies for motor-impaired users, but I believe that these devices' characteristics make them candidate solutions to some of the aforementioned issues... I will come back to this subject in another post.
Here you can find news on mobile accessibility and usability: accessibility to mobile devices and through mobile devices. I research and develop interfaces that enable disabled users to operate mobile devices, but also interfaces where the mobile device is used to empower users to control their environment.
Thursday, 14 August 2008
Saturday, 9 August 2008
NavTouch: Making Touch Screens Accessible to Blind Users
Touch screens have shown to be a successful and engaging form of human-computer interaction. Due to their fast learning curve, novice users benefit most from the directness of touch screen displays. The ability to directly touch and manipulate data on the screen, without any intermediary device, is a very strong appeal. However, these devices face several interaction issues, once again amplified in text-input scenarios. While they also restrict the interaction of fully capable users, blind individuals are unable to interact at all, as no feedback is offered. The problem in this scenario is even more drastic than when a physical keypad is available, since physical keys give the user the cues required to select targets (although obliging them to memorize the associations).
Although pointing or selecting may be impossible, performing a gesture is not. We present an approach similar to NavTap, called NavTouch, that uses the user's capacity to perform a directional gesture and, through it, navigate the alphabet (similarly to the keypad-based approach). Once again, the user is not forced to memorize or guess any location on the screen, as the interaction is limited to directional strokes.
Special actions are linked to the screen corners, as those are easily identified. After performing a gesture, if the user keeps pressing the screen, the navigation continues automatically in the last direction. The bottom-right corner of the screen erases the last character entered, and the bottom-left corner enters a space or other special characters. In contrast to a keypad, where the user has to find the right key to press, with these gestures that extra cognitive load does not exist.
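To make the scheme concrete, here is a minimal sketch of how directional-gesture text entry in the spirit of NavTouch could be modelled. The exact gesture-to-action mapping is an assumption for illustration (it is not taken from the published system): left/right step one letter back or forward, up/down jump to the previous or next vowel as navigation anchors, and separate events commit, erase, or insert a space, mirroring the corner actions described above.

```python
# Hypothetical sketch of directional-gesture alphabet navigation.
# Assumed mapping (for illustration only): left/right move one letter,
# up/down jump between vowels; erase/space mirror the corner actions.

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
VOWELS = set("aeiou")

class NavState:
    def __init__(self):
        self.index = 0   # current position in the alphabet
        self.text = []   # committed characters

    def gesture(self, direction):
        """Apply a directional stroke and return the letter now in focus
        (in a real system this letter would be spoken aloud)."""
        if direction == "right":
            self.index = (self.index + 1) % len(ALPHABET)
        elif direction == "left":
            self.index = (self.index - 1) % len(ALPHABET)
        elif direction in ("up", "down"):
            step = 1 if direction == "down" else -1
            i = self.index
            while True:  # scan until the next vowel anchor
                i = (i + step) % len(ALPHABET)
                if ALPHABET[i] in VOWELS:
                    break
            self.index = i
        return ALPHABET[self.index]

    def select(self):
        self.text.append(ALPHABET[self.index])

    def erase(self):   # bottom-right corner in the description above
        if self.text:
            self.text.pop()

    def space(self):   # bottom-left corner in the description above
        self.text.append(" ")
```

Note that the user never has to locate an on-screen target: every action is either a stroke, which can start almost anywhere, or a corner press, which is findable by touch alone.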
NavTouch outperforms NavTap because of the additional effort NavTap requires to find the appropriate directional key. Indeed, we found that users are able to quickly navigate in all four directions with NavTouch, as gestures can start at almost any point on the screen with no extra associated load. Furthermore, users are able to write sentences faster with navigational approaches, improving their performance across sessions. Overall, experimental results show that navigational approaches are far easier to learn, and users are able to improve their performance without further training. Moreover, NavTouch was more effective than NavTap because of its more fluid mapping of gestures to actions.
See the video of a blind user testing the system...
Credits:
Hugo Nicolau
Paulo Lagoá
Tiago Guerreiro
Joaquim Jorge