Tuesday, 16 September 2008
Hello there,
I am writing to you from Funchal, in Madeira, Portugal, where I am attending ECCE 2008, the European Conference on Cognitive Ergonomics. Tomorrow I will present the first of our two papers: Mnemonical Body Shortcuts: Improving Mobile Interaction.
I have been preparing my presentation, and it is already available in my SlideShare space. I will try to add a slidecast if I have the time, and I will describe the project on this blog in the near future. For now, you can check the publications:
Tiago Guerreiro, Pedro Santana, Joaquim Jorge, Mobile Text-Entry Models for People with Disabilities, ECCE 2008 - Proceedings of the European Conference on Cognitive Ergonomics, ACM DL
Thursday, 14 August 2008
Universal Accessibility: Are we there yet?
... well. No, son! We still have a long way to go. Indeed, Accessibility and Assistive Technologies are really just newborns. With no disregard for the excellent advances made in the last decades, they still face several restrictions.
A few months ago I made a presentation on Assistive Technologies for Spinal Cord Injuries. There are several solutions available for different impairments, each with advantages and some disadvantages as well. See it below:
Two years ago I would look at these approaches, marvel, and innocently ask: are we there yet? What is missing for a tetraplegic person to operate a computer (obviously, this question does not consider the other losses tetraplegic users experience)?
Well, after being around several persons with tetraplegia, I came to find that we are still at the beginning and almost everything is still missing. The approaches, although valuable and suitable for particular situations, are rarely or never complete. They are fragmentary. As an example, consider a scenario where a tetraplegic person uses an eye-tracker to operate a computer. We have seen eye-trackers, and they are amazing. The user can sit in front of the computer, control the mouse pointer and, with a little training, control any other application. Often, an integrated solution can offer access to household appliances and enable the user to control the surrounding environment. So, once again, why do I argue that we are so far away?
The answer relates to the ability to achieve the required interaction conditions. Looking at the same example, the user needs to be in front of the computer, and to be somehow restricted to that position. Well, I am a friend of some tetraplegic persons, and they spend more than half their daily hours in bed. Those who have an assistive technology that assists them while in the wheelchair are not able to operate it in bed. We are talking about part-time accessibility. Moreover, we are also talking about conditioned accessibility: the users still require a great deal of help from a third party, and I am not just talking about the initial setup. I am not aiming at total independence (although that would be great), but a little liberty to choose is required.
This is my research focus and what I aim to achieve. Nowadays, mobile devices play a weak role in assistive technologies for motor-impaired users, but I believe these devices' characteristics make them a candidate solution to some of the aforementioned issues. I will return to this subject in another post.
Saturday, 9 August 2008
NavTouch: Making Touch Screens Accessible to Blind Users
Touch screens have proven to be a successful and engaging form of human-computer interaction. Due to their fast learning curve, novice users benefit most from the directness of touch screen displays. The ability to directly touch and manipulate data on the screen, without using any intermediary devices, is a very strong appeal. However, these devices face several interaction issues, once again amplified in text-input scenarios. While they also restrict the interaction of fully capable users, blind individuals are prevented from interacting at all, as no feedback is offered. The problem in this scenario is even more drastic than when a physical keypad is available, since physical keys give the user the cues required to select targets (although they oblige the user to memorize the associations).
Although pointing or selecting may be impossible, performing a gesture is not. We present NavTouch, an approach similar to NavTap, which uses the user's ability to perform directional gestures to navigate the alphabet (similarly to the keypad-based approach). Once again, the user is not forced to memorize or guess any location on the screen, as the interaction is limited to directional strokes.
Special actions are linked to the screen corners, as those are easily identified. After performing a gesture, if the user keeps pressing the screen, the navigation continues automatically in the last direction. The bottom-right corner of the screen erases the last character entered, and the bottom-left corner enters a space or other special characters. In contrast to the keypad, where the user has to find the right key to press, with these gestures that extra cognitive load does not exist.
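To make the gesture handling concrete, here is a minimal sketch, in Python for readability, of how touches might be classified into NavTouch actions. The stroke directions and corner actions follow the description above; the screen geometry, corner size, and minimum stroke length are illustrative assumptions, not values from the actual prototype.

```python
# A sketch of classifying a touch (press at (x0, y0), release at (x1, y1))
# into a NavTouch action. Geometry values below are assumptions.

WIDTH, HEIGHT = 240, 320        # assumed screen size in pixels
CORNER = 40                     # assumed size of the corner hot zones
MIN_STROKE = 15                 # assumed minimum movement to count as a stroke

def classify_touch(x0, y0, x1, y1):
    """Map a touch to a NavTouch action, or None for an idle tap."""
    dx, dy = x1 - x0, y1 - y0
    if max(abs(dx), abs(dy)) < MIN_STROKE:          # barely moved: a tap
        if y0 > HEIGHT - CORNER and x0 > WIDTH - CORNER:
            return "erase"                          # bottom-right corner
        if y0 > HEIGHT - CORNER and x0 < CORNER:
            return "space"                          # bottom-left corner
        return None                                 # taps elsewhere do nothing
    if abs(dx) >= abs(dy):                          # mostly horizontal stroke
        return "next-letter" if dx > 0 else "previous-letter"
    return "next-vowel" if dy > 0 else "previous-vowel"

print(classify_touch(100, 100, 180, 110))   # -> next-letter (rightward stroke)
print(classify_touch(230, 310, 232, 312))   # -> erase (bottom-right tap)
```

Note that a stroke can start anywhere outside the corner zones: only its direction matters, which is what removes the need to locate targets on the screen.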
NavTouch outperformed NavTap, both because NavTap requires the additional effort of finding the appropriate directional key and because of NavTouch's more fluid mapping of gestures to actions. Indeed, we found that users are able to quickly navigate in all four directions with NavTouch, as gestures can start at almost any point on the screen with no extra associated load. Furthermore, users are able to write sentences faster with navigational approaches, improving their performance across sessions. Overall, experimental results show that navigational approaches are far easier to learn, and users improve their performance without further training.
See the video of a blind user testing the system...
Credits:
Hugo Nicolau
Paulo Lagoá
Tiago Guerreiro
Joaquim Jorge
Labels: Accessibility, Blind, Mobile Device, Text-Entry, Touch Screen
Wednesday, 2 July 2008
NavTap and BrailleTap (Mobile Text-Entry Interfaces for the Blind) presented at RESNA 2008
In the last few days I have been in Arlington, Virginia, attending RESNA 2008, the Rehabilitation Engineering and Assistive Technology Society of North America Annual Conference. I came to the conference as a presenter, to show the community the work we have been doing on mobile text-entry interfaces for blind users. The related materials can be accessed below:
The presentation went really well. The scientific papers were presented in Interactive Poster Sessions and, moreover, I got the chance to present our prototypes in the Developer's Forum. Attendees ranging from practitioners and suppliers to end users showed interest in using our system.
Tuesday, 17 June 2008
BrailleTap: a mobile device text-entry method focused on the users
Regular mobile device text-entry methods are designed for visually capable individuals and seek to improve their performance. Hence, someone with no experience who doesn't remember the location of a letter can easily look at the keypad and recognize the key where that letter is. Those methods imply that the fingers dance across the keypad, choosing letters and special characters among ten or more keys; once again, we easily overcome this issue by appealing to vision. A blind user is not able to do so. The tactile mark on key '5' gives blind users a notion of the keypad layout, but no feedback on the selected letter and, although users can make an effort to memorize each letter's placement, feedback is essential. Even SMS experts occasionally need to look at the words being written to ensure the message is correct. Moreover, expertise is acquired by using the method and receiving feedback: only after extensive and successful use of the writing mechanism do users get used to it and, in some cases, no longer need constant visual feedback. Screen readers overcome some of these issues, as they offer the user feedback on the screen's progress. However, keypad feedback is still nonexistent, which often leads to mistakes and sometimes to giving up. Although users can make sense of the message's progress, they still have to know where to press to get the desired letter or action.
We can only offer visually impaired individuals mobile device accessibility if the devices are easily available and usable. Therefore, based on user needs, capabilities, and available devices, we decided that, as with the NavTap method, our solution should be compatible with regular mobile phones and, therefore, with the regular 12-key keypad layout, requiring no extra hardware (e.g., expensive and heavy Braille keyboards). With this in mind, we looked at the regular mobile phone keypad to find a way of permitting Braille input without the need for additional hardware.
BrailleTap builds on knowledge common to many blind users: the Braille alphabet. Again, transforming the keypad's functionality is the basis of this new text-entry method. In the Braille alphabet, letters are formed by combinations of six dots arranged in a 3x2 cell.
Considering the keypad of a mobile phone, we can map that cell onto keys '2', '3', '5', '6', '8' and '9'. Each press on one of these keys fills or blanks the respective dot. Key '4' enters the letter or, if all dots are blank, a space. For example, to enter the letter 'b', the user presses keys '2' and '5' followed by key '4'. Finally, key '7' erases the last character entered.
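As an illustration, here is a minimal sketch of this input loop in Python. The key-to-dot mapping is the one described above (so 'b' is keys '2' and '5', then '4'); the small letter table is an illustrative subset of the standard Braille alphabet rather than the prototype's full table.

```python
# A sketch of the BrailleTap input loop. The 3x2 Braille cell sits on the
# middle and right keypad columns; the letter table below is a tiny subset.

KEY_TO_DOT = {"2": 1, "5": 2, "8": 3,   # middle keypad column = left Braille column
              "3": 4, "6": 5, "9": 6}   # right keypad column = right Braille column

BRAILLE = {                             # standard dot patterns for a few letters
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
}

def braille_tap(keys):
    """Consume a sequence of keypresses and return the text entered."""
    text, dots = [], set()
    for key in keys:
        if key in KEY_TO_DOT:
            dots ^= {KEY_TO_DOT[key]}   # toggle the corresponding dot
        elif key == "4":                # accept: a blank cell enters a space
            text.append(" " if not dots else BRAILLE.get(frozenset(dots), "?"))
            dots.clear()
        elif key == "7":                # erase the last character entered
            text = text[:-1]
    return "".join(text)

# 'b' is Braille dots 1 and 2: press keys '2' and '5', then accept with '4'.
print(braille_tap(["2", "5", "4"]))     # -> b
```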
Although capitalized letters were not considered in these studies, it would be possible to add that functionality, as some keys remain available.
This method focuses on the user, replacing the hard-to-memorize keypad layout with knowledge that is already common within this user group.
Take a look at this video for a demonstration:
If you are interested in more detail, particularly about the user studies, you can take a look at our publications on BrailleTap:
Tiago Guerreiro, Paulo Lagoá, Pedro Santana, Daniel Gonçalves, Joaquim Jorge, Navtap and Brailletap: Non-visual input interfaces, RESNA 2008 - Rehabilitation Engineering and Assistive Technology Society of North America Conference, Arlington, USA, June 2008
Pedro Santana, Tiago Guerreiro, Joaquim Jorge, Braille Matrix, Proceedings of the International Conference on Software Development for Enhancing Accessibility and Fighting Info-exclusion, Vila Real, Portugal, November 2007
Check the presentation I made in Vila Real, at DSAI 2007, on this new text-entry method:
Credits:
Paulo Lagoá (Developer)
Pedro Santana (Developer)
Tiago Guerreiro (Developer Team Leader)
Joaquim Jorge (Adviser)
Friday, 13 June 2008
NavTap: a navigational text-entry model for blind users
Mobile devices play an important role in modern society. Their functionality goes beyond basic communication, gathering a large set of productivity and leisure applications. Interaction with these devices is highly visually demanding, preventing blind users from taking control. In particular, text entry, a task transversal to several mobile applications, is difficult to accomplish, as it relies on visual feedback from both the keypad and the screen. Although there are specialized solutions to overcome this problem, they fall short: hardware solutions are unsuitable for a mobile context, and software approaches are adaptations that remain ineffective, hard to learn, and error-prone.
The main obstacle for a blind user operating a regular mobile device is the need to memorize the position of each letter. To circumvent the lack of visual feedback, both output and input information must be offered through the available channels. It is important to note that the remaining communication channels, such as touch and hearing, are often highly developed, and blind users are likely to perform better than fully sighted users when interaction is based on those senses. By adapting the interaction process, we minimize stress scenarios and encourage learning.
The NavTap text-entry method allows the user to navigate through the alphabet using the mobile phone keypad. The alphabet was divided into five rows, each starting with a different vowel, as these are easy to remember. Using the tactile mark on key '5' as a reference, we can map a cursor onto keys '2', '4', '6' and '8'. Keys '4' and '6' allow the user to navigate horizontally through the letters, while keys '2' and '8' allow the user to jump between the vowels, turning them into key points in the alphabet. Both navigations (vertical and horizontal) are cyclical, meaning that the user can go, for instance, from the letter 'z' to the letter 'a', and from the vowel 'u' to 'a'.
Navigation scenarios for the letter 't'
Key '5' enters a space or other special characters, and key '7' erases the last character entered. This method drastically reduces memorization requirements, thereby reducing cognitive load. In the worst-case scenario, where the user does not have a good mental map of the alphabet, they can simply navigate forward until they hear the desired letter. There are no wrong buttons, just shorter paths. Blind users can rely on audio feedback before accepting any letter, increasing the success of the text-entry task and the motivation to improve writing skills.
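As a concrete illustration of the scheme above, here is a minimal Python sketch of the navigation logic, including one way to reach the letter 't' as in the figure caption. The cyclic horizontal and vertical behaviour follows the description; exactly which vowel a vertical jump lands on from the middle of a row is my assumption, not necessarily the prototype's rule.

```python
# A sketch of NavTap cursor navigation over the five vowel rows
# (a..d, e..h, i..n, o..t, u..z), driven by keys '2', '4', '6' and '8'.

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
VOWELS = [ALPHABET.index(v) for v in "aeiou"]   # row anchors: a, e, i, o, u

def row_of(cursor):
    """Index (0-4) of the vowel row that contains the cursor position."""
    return max(r for r, v in enumerate(VOWELS) if v <= cursor)

def navtap(key, cursor):
    """Return the new cursor position after one keypress."""
    if key == "6":                      # right: next letter ('z' wraps to 'a')
        return (cursor + 1) % len(ALPHABET)
    if key == "4":                      # left: previous letter ('a' wraps to 'z')
        return (cursor - 1) % len(ALPHABET)
    if key == "8":                      # down: vowel of the next row ('u' wraps to 'a')
        return VOWELS[(row_of(cursor) + 1) % len(VOWELS)]
    if key == "2":                      # up: vowel of the previous row
        return VOWELS[(row_of(cursor) - 1) % len(VOWELS)]
    return cursor

# Reaching 't' from 'a': jump down three rows to 'o', then right five times.
cursor = 0
for key in ["8", "8", "8", "6", "6", "6", "6", "6"]:
    cursor = navtap(key, cursor)
print(ALPHABET[cursor])                 # -> t
```

Note how there are no wrong keys in this scheme: any sequence of presses still lands on a letter, and only the length of the path changes.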
(See a blind user operating the system [in Portuguese])
Text-entry interfaces that consider the users' needs and capabilities are likely to ease first contact and allow performance to improve. Considering text input for blind users, results showed that if the cognitive load is removed and users are presented with easier, user-centered interfaces, success is achieved: first contact has a small error rate and the learning curve is steep. It is therefore possible to offer blind users effective interfaces that require no extra hardware and can be used by a wide set of users, even those with no previous acquaintance with mobile devices.
If you are interested in more detail, particularly about the user studies, you can take a look at our publications on NavTap:
Tiago Guerreiro, Paulo Lagoá, Pedro Santana, Daniel Gonçalves, Joaquim Jorge, Navtap and Brailletap: Non-visual input interfaces, RESNA 2008 - Rehabilitation Engineering and Assistive Technology Society of North America Conference
Paulo Lagoá, Pedro Santana, Tiago Guerreiro, Daniel Gonçalves, Joaquim Jorge, Blono: a New Mobile Text-entry Interface for the Visually Impaired, Springer Lecture Notes in Computer Science, Universal Access in HCI Part II, HCII 2007, LNCS 4555, pp. 908–917, Beijing, China, July 2007
Check the presentation I made in China, at HCII 2007, on this new text-entry method:
(This presentation has been featured on the SlideShare main page, which makes me proud. Thank you, Garr Reynolds, for your insights.)
Credits:
Paulo Lagoá (Developer)
Pedro Santana (Developer)
Tiago Guerreiro (Developer Team Leader)
Joaquim Jorge (Adviser)
Check back soon for updates on text entry for blind users on touch-screen-based mobile devices.
Any questions or comments are welcome...
Monday, 9 June 2008
Extending Accessibility to Mobile Devices
Hello There!
My name is Tiago Guerreiro and I am a PhD student at Instituto Superior Técnico, Technical University of Lisbon, Portugal, as well as a researcher at INESC-ID, in the Visualization and Intelligent Multimodal Interfaces group (VIMMI), under the supervision of Prof. Joaquim Jorge.
My research interests are in Accessibility, Usability and Multimodal Interfaces, particularly in the usability and accessibility of mobile devices. At my research group (VIMMI), I am able to lead projects on mobile accessibility. We have already accomplished some good results with blind and tetraplegic persons, making it possible, and easier, for those users to operate a mobile device and, through it, control other devices.
In this blog I will regularly present our results, as well as relevant innovations in accessibility and assistive technologies, particularly mobile accessibility.
Feel free to check regularly for news and to comment on my posts.