Three versions of the same interface, optimised to suit different users’ needs. The middle panel is the default. The panel on the left was optimised for a person with cerebral palsy who makes large, spastic movements. The panel on the right is optimised for someone with muscular dystrophy who has difficulty moving the mouse quickly or over long distances. Image: University of Washington
Insert your key in the ignition of a luxury car and the seat and steering wheel will automatically adjust to preprogrammed body proportions. Stroll through the rooms of Bill Gates’ mansion and each room will adjust its lighting, temperature and music to accommodate your personal preferences. But open any computer program and you’re largely subject to a design team’s ideas about button sizes, fonts and layouts.
Off-the-shelf designs are especially frustrating for the disabled, the elderly and anybody who has trouble controlling a mouse. A new approach to design, developed at the University of Washington, would put each person through a brief skills test and then generate a mathematically based version of the user interface optimised for his or her vision and motor abilities. A paper describing the system, which for the first time offers an instantly customisable approach to user interfaces, was presented today in Chicago at a meeting of the Association for the Advancement of Artificial Intelligence.
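The paper's actual optimisation is more sophisticated than any short example can show, but the core idea, measuring a user's motor abilities and then choosing the interface variant that minimises their predicted effort, can be sketched in a few lines. The sketch below is purely illustrative and assumes a simple cost model of our own devising: a Fitts's-law pointing time plus a hypothetical tremor penalty that grows as targets shrink. The coefficients `a`, `b` and `t`, the layouts and the function names are all invented for the example, not taken from the UW system.

```python
import math

def target_cost(distance, width, profile):
    """Estimated cost of hitting one target for a given user.

    Fitts's-law pointing time plus a tremor penalty that grows as
    targets shrink. Coefficients a, b, t would come from a brief
    skills test; the model itself is an illustrative assumption.
    """
    fitts = profile["a"] + profile["b"] * math.log2(distance / width + 1)
    tremor = profile["t"] / width
    return fitts + tremor

def layout_cost(layout, profile):
    # Total estimated cost over all (distance, width) targets in a layout.
    return sum(target_cost(d, w, profile) for d, w in layout)

def best_layout(candidates, profile):
    # Choose the candidate layout that minimises the user's estimated cost.
    return min(candidates, key=lambda name: layout_cost(candidates[name], profile))

# Two toy candidate layouts as (distance_to_target, target_width) in pixels:
# "compact" packs small targets close together; "large" spreads big targets out.
candidates = {
    "compact": [(50, 20), (90, 20), (130, 20)],
    "large":   [(180, 60), (300, 60), (420, 60)],
}

steady = {"a": 0.1, "b": 0.3, "t": 0.0}   # precise pointing, no tremor
tremor = {"a": 0.1, "b": 0.3, "t": 3.0}   # large, hard-to-control movements

print(best_layout(candidates, steady))  # compact
print(best_layout(candidates, tremor))  # large
```

With these made-up numbers, the steady user gets the compact layout (shorter travel distances outweigh the small targets), while the user with large, hard-to-control movements gets the layout with big targets, echoing the left and right panels in the image above.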
“Assistive technologies are built on the assumption that it’s the people who have to adapt to the technology. We tried to reverse this assumption, and make the software adapt to people,” said lead author Krzysztof Gajos, a UW doctoral student in computer science and engineering. Co-authors are Dan Weld, a UW professor of computer science and engineering, and Jacob Wobbrock, an assistant professor in the UW’s Information School.