Show simple item record

dc.contributor.advisor	Braga-Neto, Ulisses
dc.creator	Ghane, Parisa
dc.date.accessioned	2023-05-26T18:11:32Z
dc.date.available	2023-05-26T18:11:32Z
dc.date.created	2022-08
dc.date.issued	2022-07-27
dc.date.submitted	August 2022
dc.identifier.uri	https://hdl.handle.net/1969.1/198072
dc.description.abstract	In data-poor environments, it may not be possible to set aside a test set large enough to produce accurate test-set error estimates. On the other hand, in modern classification applications where training is time- and resource-intensive, as when training deep neural networks, error estimators based on resampling, such as cross-validation and the bootstrap, are too computationally expensive, since they require training tens or hundreds of classifiers on resampled versions of the training data. The alternative in this case is to train and test on the same data, without resampling, i.e., to use resubstitution-like error estimators. Here, a family of generalized resubstitution classifier error estimators is proposed and its performance in various scenarios is investigated. This family of error estimators is based on empirical measures. The plain resubstitution error estimator corresponds to choosing the standard empirical measure that puts equal probability mass on each training point. Other choices of empirical measure lead to bolstered resubstitution, posterior-probability, and Bayesian error estimators, as well as the newly proposed bolstered posterior-probability error estimators. The empirical results of this dissertation suggest that generalized resubstitution error estimators are particularly useful at small sample sizes across various classification rules. In particular, bolstering led to remarkable improvements in error estimation in the majority of experiments, on traditional classifiers as well as modern deep neural networks. Bolstering is a type of data augmentation that systematically generates meaningful samples, primarily through data-driven bolstering parameters. For low- to moderate-dimensional data, the bolstering parameter was defined based on the Euclidean distance between samples in each class. The Euclidean distance between images, however, is neither straightforward nor semantically meaningful.
Hence, for experiments with image data, the data-augmentation parameters were selected differently. I introduce three approaches to image augmentation, among which weighted augmented data combined with the posterior probability was the most effective at predicting the generalization gap in deep learning. For the study of protein turnover, I propose hybrid compartmental models (HCM), which are useful for multi-substrate experiments. Unlike conventional compartmental models, HCM starts with a partially specified structure for the tracer model, estimates the tracer parameters from the data, and finally determines the details of the model's structure by choosing the most physiologically meaningful tracee model among the resulting alternatives. The parameters of the alternative tracee models are computed by simple mathematical operations on the tracer parameters. The proposed HCM was employed to estimate the kinetics of phenylalanine and tyrosine using tracer-to-tracee ratio (TTR) data. The results show that the HCM tracer model was able to fit the TTR-time data points, and the best tracee model was selected by comparing the alternative tracee models' parameters with those reported in the literature.
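The contrast the abstract draws between plain and bolstered resubstitution can be sketched as follows. This is a minimal illustration, not the dissertation's implementation: the isotropic Gaussian bolstering kernel with a single user-supplied `sigma`, the Monte Carlo sample count, and the toy threshold classifier are all assumptions made for the example.

```python
import numpy as np

def resubstitution_error(clf, X, y):
    """Plain resubstitution: the standard empirical measure puts mass 1/n
    on each training point, so the estimate is the training error."""
    return float(np.mean(clf(X) != y))

def bolstered_resubstitution_error(clf, X, y, sigma, n_mc=200, rng=None):
    """Bolstered resubstitution (Monte Carlo approximation): each point mass
    is replaced by a Gaussian 'bolstering' kernel of std sigma centered at
    the training point, and the error is averaged over kernel samples."""
    rng = np.random.default_rng(rng)
    per_point_errors = []
    for x_i, y_i in zip(X, y):
        samples = x_i + sigma * rng.standard_normal((n_mc, X.shape[1]))
        per_point_errors.append(np.mean(clf(samples) != y_i))
    return float(np.mean(per_point_errors))

# Toy linear classifier: predict class 1 when the first coordinate is positive.
clf = lambda X: (X[:, 0] > 0).astype(int)
X = np.array([[-1.0, 0.0], [-0.5, 1.0], [0.5, -1.0], [1.0, 0.5]])
y = np.array([0, 0, 1, 1])

print(resubstitution_error(clf, X, y))                       # 0.0 on this separable toy data
print(bolstered_resubstitution_error(clf, X, y, sigma=0.5))  # > 0: kernel mass crosses the boundary
```

On perfectly separated training data, plain resubstitution reports zero error (its well-known optimistic bias), while the bolstered estimate stays positive because probability mass near the decision boundary spills onto the wrong side. In the dissertation, `sigma` is data-driven, based on within-class distances, rather than fixed by hand as here.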
dc.format.mimetype	application/pdf
dc.language.iso	en
dc.subject	Classification
dc.subject	Error Estimation
dc.subject	Resubstitution
dc.subject	Empirical Measure
dc.subject	Generalization
dc.subject	Neural Network
dc.subject	Deep Learning
dc.subject	Compartmental Models
dc.subject	Protein Turnover
dc.title	Novel Approaches in Classification Error Estimation, Predicting Generalization in Deep Learning, and Hybrid Compartmental Models
dc.type	Thesis
thesis.degree.department	Electrical and Computer Engineering
thesis.degree.discipline	Electrical Engineering
thesis.degree.grantor	Texas A&M University
thesis.degree.name	Doctor of Philosophy
thesis.degree.level	Doctoral
dc.contributor.committeeMember	Deutz, Nicolaas
dc.contributor.committeeMember	Ivanov, Ivan
dc.contributor.committeeMember	Shen, Yang
dc.type.material	text
dc.date.updated	2023-05-26T18:11:33Z
local.etdauthor.orcid	0000-0002-5027-5411
