Show simple item record

dc.contributor.advisor: Shipman, Frank
dc.contributor.advisor: Gutierrez-Osuna, Ricardo
dc.creator: Duggina, Satyakiran
dc.date.accessioned: 2016-04-06T16:15:19Z
dc.date.available: 2016-04-06T16:15:19Z
dc.date.created: 2015-12
dc.date.issued: 2015-11-30
dc.date.submitted: December 2015
dc.identifier.uri: https://hdl.handle.net/1969.1/156211
dc.description.abstract: Sign language is the primary medium of communication for people who are hearing impaired. Sign language videos are hard to discover on video sharing sites because text-based search relies on metadata rather than the content of the videos. The sign language community currently shares content through ad-hoc mechanisms, as no library meets its requirements. Low-cost or even real-time classification techniques are valuable for creating a sign language digital library whose content is updated as new videos are uploaded to YouTube and other video sharing sites. Prior research detected sign language videos using face detection and background subtraction, with recall and precision suitable for creating a digital library. This approach analyzed one minute of each video being classified. Polar Motion Profiles achieved better recall on videos containing multiple signers, but at a significant computational cost, as the method included five face trackers. This thesis explores techniques to reduce the computation time involved in feature extraction without deeply impacting precision and recall. We explore three optimizations to the above techniques. First, we compared the individual performance of the five face detectors and determined the best-performing single face detector. Second, we evaluated detection performance using Polar Motion Profiles when face detection was performed on sampled frames rather than on every frame. Our results show that Polar Motion Profiles performed well even when the information between frames was sacrificed. Finally, we examined the effect of using shorter video segment lengths for feature extraction and found that the drop in precision was minor as video segments were made shorter than the initial empirical length of one minute.
Through this work, we found an empirical configuration that can classify videos with close to two orders of magnitude less computation, with precision and recall only slightly below the original voting scheme. Our model improves the detection time of sign language videos, which in turn would help enrich the digital library with fresh content quickly. Future work can focus on enabling diarization by segmenting videos into sign language and non-sign language content, using effective background subtraction techniques for shorter videos.
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.subject: Sign Language
dc.subject: Polar Motion Profiles
dc.title: Evaluation of Alternative Face Detection Techniques and Video Segment Lengths on Sign Language Detection
dc.type: Thesis
thesis.degree.department: Computer Science and Engineering
thesis.degree.discipline: Computer Science
thesis.degree.grantor: Texas A&M University
thesis.degree.name: Master of Science
thesis.degree.level: Masters
dc.contributor.committeeMember: Akleman, Ergun
dc.type.material: text
dc.date.updated: 2016-04-06T16:15:19Z
local.etdauthor.orcid: 0000-0003-1476-7264

