
dc.contributor.advisor  Hammond, Tracy
dc.creator  Rajanna, Vijay Dandur
dc.date.accessioned  2019-01-23T20:06:45Z
dc.date.available  2020-12-01T07:32:20Z
dc.date.created  2018-12
dc.date.issued  2018-11-06
dc.date.submitted  December 2018
dc.identifier.uri  https://hdl.handle.net/1969.1/174462
dc.description.abstract  Every day we encounter a variety of scenarios that lead to situationally induced impairments and disabilities, i.e., scenarios in which our hands are engaged in a task and hence unavailable for interacting with a computing device. For example, a surgeon performing an operation, a factory worker with greasy hands or thick gloves, and a person driving a car all represent situational impairments and disabilities. In such cases, performing point-and-click interactions, text entry, or authentication on a computer using conventional input methods like the mouse, keyboard, and touch is either inefficient or impossible. Unfortunately, individuals with physical impairments and disabilities, whether congenital or caused by injury, face these limitations every single day. These individuals often experience difficulty performing basic operations on a computer, or cannot perform them at all. Therefore, to address both situational and physical impairments and disabilities, it is crucial to develop hands-free, accessible interactions.

In this research, we address the limitations and challenges arising from situational and physical impairments and disabilities by developing a gaze-assisted, multi-modal, hands-free, accessible interaction paradigm. Specifically, we focus on three primary interactions: 1) point-and-click, 2) text entry, and 3) authentication. We present multiple ways in which gaze input can be modeled and combined with other input modalities to enable efficient and accessible interactions. In this regard, we have developed a gaze- and foot-based interaction framework to achieve accurate "point-and-click" interactions and to perform dwell-free text entry on computers. In addition, we have developed a gaze-gesture-based framework for user authentication and for interacting with a wide range of computer applications using a common repository of gaze gestures. The interaction methods and devices we developed are a) evaluated using standard HCI procedures such as Fitts' Law, text-entry metrics, authentication accuracy, and video-analysis attacks; b) compared against the speed, accuracy, and usability of other gaze-assisted interaction methods; and c) qualitatively analyzed through user interviews. From these evaluations, we found that our solutions achieve higher efficiency than existing systems and also address their usability issues.

First, the gaze- and foot-based system we developed supports point-and-click interactions while addressing the "Midas Touch" issue. The system performs at least as well as the mouse in both time and precision, while enabling hands-free interaction. We also investigated the feasibility, advantages, and challenges of gaze- and foot-based point-and-click interactions on standard (up to 24") and large (up to 84") displays through Fitts' Law evaluations. Additionally, we compared the performance of gaze input to other standard inputs such as the mouse and touch. Second, to support text entry, we developed a gaze- and foot-based dwell-free typing system and investigated foot-based activation methods such as foot presses and foot gestures. We demonstrated that our dwell-free typing methods are efficient and strongly preferred over conventional dwell-based gaze typing: with our system, users type up to 14.98 words per minute (WPM), as opposed to 11.65 WPM with dwell-based typing. Importantly, our system addresses the critical usability issues associated with gaze typing in general.

Third, we addressed the lack of an accessible, shoulder-surfing-resistant authentication method by developing a gaze gesture recognition framework and presenting two authentication strategies that use gaze gestures. Our authentication methods use static and dynamic transitions of on-screen objects, and they authenticate users with an accuracy of 99% (static) and 97.5% (dynamic). Furthermore, unlike other systems, our dynamic authentication method is not susceptible to single-video iterative attacks, and dual-video iterative attacks have a lower success rate against it. Lastly, we demonstrated how our gaze gesture recognition framework can be extended to let users design gaze gestures of their choice and associate them with commands such as minimize, maximize, and scroll. We presented a template-matching algorithm that recognizes gaze gestures with 93% accuracy and a geometric-feature-based decision tree algorithm that achieves 90.2% accuracy. In summary, our research demonstrates how situational and physical impairments and disabilities can be addressed with a gaze-assisted, multi-modal, accessible interaction paradigm.
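
The Fitts' Law evaluations mentioned in the abstract follow standard HCI methodology; the code below is not from the thesis, but it is a minimal sketch of how index of difficulty and throughput are typically computed for point-and-click trials. All function names and sample values are illustrative assumptions.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1.0)

def mean_throughput(trials):
    """Mean throughput in bits/s over (distance, width, movement_time_s) trials.

    This is the simple per-trial ID/MT average; a full ISO 9241-9 analysis
    would use effective width, so treat this purely as a sketch.
    """
    return sum(index_of_difficulty(d, w) / t for d, w, t in trials) / len(trials)

# Hypothetical trials: (target distance px, target width px, movement time s)
trials = [(512, 64, 1.20), (256, 32, 1.05), (768, 96, 1.35)]
print(f"Throughput: {mean_throughput(trials):.2f} bits/s")
```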
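Similarly, the template-matching gaze-gesture recognizer reported at 93% accuracy is described only in the dissertation itself; the sketch below shows the generic resample-normalize-compare approach such recognizers commonly use (in the spirit of the $1 recognizer). The function names, the 32-point resampling count, and the scoring scheme are assumptions for illustration, not the author's implementation.

```python
import math

def resample(points, n=32):
    """Resample a gaze path (list of (x, y)) to n roughly evenly spaced points."""
    dists = [math.dist(points[i - 1], points[i]) for i in range(1, len(points))]
    interval = sum(dists) / (n - 1)
    pts, new_pts, acc, i = list(points), [points[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= interval:
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            new_pts.append(q)
            pts.insert(i, q)   # continue measuring from the interpolated point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(new_pts) < n:    # guard against rounding shortfall
        new_pts.append(pts[-1])
    return new_pts[:n]

def normalize(points):
    """Translate to the centroid and scale to a unit bounding box."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    s = max(max(x for x, _ in pts) - min(x for x, _ in pts),
            max(y for _, y in pts) - min(y for _, y in pts)) or 1.0
    return [(x / s, y / s) for x, y in pts]

def recognize(gaze_path, templates):
    """Return the template label with the smallest mean point-to-point distance."""
    candidate = normalize(resample(gaze_path))
    best_label, best_score = None, float("inf")
    for label, template in templates.items():
        ref = normalize(resample(template))
        score = sum(math.dist(a, b) for a, b in zip(candidate, ref)) / len(ref)
        if score < best_score:
            best_label, best_score = label, score
    return best_label
```

A recognizer like this would be used by recording one template path per command ahead of time and calling recognize(path, templates) on each new gaze trajectory; the dissertation's actual feature set and matching criteria may differ.
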
dc.format.mimetype  application/pdf
dc.language.iso  en
dc.subject  Human-Computer Interaction
dc.subject  Accessibility
dc.subject  Eye tracking
dc.subject  Text entry
dc.subject  Gaze-assisted authentication
dc.subject  Gaze gestures
dc.title  Addressing Situational and Physical Impairments and Disabilities with a Gaze-Assisted, Multi-Modal, Accessible Interaction Paradigm
dc.type  Thesis
thesis.degree.department  Computer Science and Engineering
thesis.degree.discipline  Computer Science
thesis.degree.grantor  Texas A & M University
thesis.degree.name  Doctor of Philosophy
thesis.degree.level  Doctoral
dc.contributor.committeeMember  Kerne, Andruid
dc.contributor.committeeMember  Shell, Dylan
dc.contributor.committeeMember  Smith, Steven M.
dc.type.material  text
dc.date.updated  2019-01-23T20:06:45Z
local.embargo.terms  2020-12-01
local.etdauthor.orcid  0000-0001-7550-0411

