
dc.creator: Raman, Baranidharan
dc.date.accessioned: 2012-06-07T23:21:10Z
dc.date.available: 2012-06-07T23:21:10Z
dc.date.created: 2003
dc.date.issued: 2003
dc.identifier.uri: https://hdl.handle.net/1969.1/ETD-TAMU-2003-THESIS-R37
dc.description: Due to the character of the original source materials and the nature of batch digitization, quality control issues may be present in this document. Please report any quality issues you encounter to digital@library.tamu.edu, referencing the URI of the item. [en]
dc.description: Includes bibliographical references (leaves 59-66). [en]
dc.description: Issued also on microfiche from Lange Micrographics. [en]
dc.description.abstract: While most stable learning algorithms perform well on domains with relevant information, they degrade in the presence of irrelevant or redundant information. Selective or focused learning offers a solution to this problem. Two components of selective learning are selective attention (feature selection) and selective utilization (example selection). In this thesis, we present novel algorithms for feature selection and example selection and demonstrate the benefits of the two approaches both independently and as a combined scheme. We propose a sequential search filter approach, Subset selection using Case-based Relevance APproach (SCRAP), for identifying and eliminating irrelevant features. The SCRAP filter addresses the problem of finding a feature subset that balances defining consistent hypotheses against improving prediction accuracy. Compared with the RELIEF filter algorithm, SCRAP was found to perform better on three families of learning algorithms. We also propose the Learning Algorithm using SEarch Ring (LASER) framework to perform example selection for learning algorithms. The LASER framework has two components: an example selection scheme and a target learner; naive Bayes was used as the target learner in our experiments. LASER significantly improves the prediction accuracy of the naive Bayes learner compared to naive Bayes without example selection. Applying both feature selection and example selection to the naive Bayes learner resulted in better prediction accuracy. [en]
dc.format.medium: electronic [en]
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.publisher: Texas A&M University
dc.rights: This thesis was part of a retrospective digitization project authorized by the Texas A&M University Libraries in 2008. Copyright remains vested with the author(s). It is the user's responsibility to secure permission from the copyright holder(s) for re-use of the work beyond the provision of Fair Use. [en]
dc.subject: computer science. [en]
dc.subject: Major computer science. [en]
dc.title: Enhancing inductive learning with feature selection and example selection [en]
dc.type: Thesis [en]
thesis.degree.discipline: computer science [en]
thesis.degree.name: M.S. [en]
thesis.degree.level: Masters [en]
dc.type.genre: thesis [en]
dc.type.material: text [en]
dc.format.digitalOrigin: reformatted digital [en]
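
The abstract above pairs a feature selection filter (SCRAP) with an example selection scheme wrapped around a naive Bayes learner (LASER). This record does not specify either algorithm's internals, so the following is only a minimal Python sketch of the two generic ideas it combines, selective attention and selective utilization, using scikit-learn. The mutual-information feature scoring, the confidence-based example filter, and both thresholds are illustrative assumptions, not the SCRAP or LASER procedures.

# Illustrative sketch only: a generic relevance filter for features and a
# simple confidence filter for training examples around naive Bayes.
# These are NOT the thesis's SCRAP or LASER algorithms.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic task: 5 informative features buried among 20 irrelevant ones.
X, y = make_classification(n_samples=600, n_features=25, n_informative=5,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)

def accuracy(X_a, y_a, X_b, y_b):
    # Train naive Bayes on (X_a, y_a) and report test accuracy on (X_b, y_b).
    return GaussianNB().fit(X_a, y_a).score(X_b, y_b)

print("baseline:", accuracy(X_tr, y_tr, X_te, y_te))

# Selective attention: keep features whose mutual information with the class
# exceeds a hypothetical relevance threshold.
mi = mutual_info_classif(X_tr, y_tr, random_state=0)
keep = mi > 0.01
print("feature selection:", accuracy(X_tr[:, keep], y_tr, X_te[:, keep], y_te))

# Selective utilization: drop training examples to which a first-pass model
# assigns very low probability under their own label (likely noise).
nb = GaussianNB().fit(X_tr[:, keep], y_tr)
conf = nb.predict_proba(X_tr[:, keep])[np.arange(len(y_tr)), y_tr]
sel = conf > 0.1  # hypothetical confidence cutoff
print("feature + example selection:",
      accuracy(X_tr[sel][:, keep], y_tr[sel], X_te[:, keep], y_te))

Because the baseline model here sees 20 irrelevant features, pruning them typically raises test accuracy, which mirrors the qualitative claim in the abstract; the example filter then discards training points the first-pass model finds implausible, the same role example selection plays in the thesis.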



This item and its contents are restricted.