Undergraduate Research and Capstones
Browsing Undergraduate Research and Capstones by Type "Thesis"
Now showing 1 - 20 of 3081
Item
The 2017 AAP Periodontal Classification Guidelines: What Every Dental Office Should be Implementing (2020-04-20)
Cornuaud, Taylor; Brown, Allie; Gopffarth, Regan; Kabani, Faizan; Cotter, Jane
The Dental Practice Act outlines the parameters for diagnosis and treatment that dental providers must follow when treating patients. Failure to adhere to these guidelines may result in malpractice or negligence. The new 2017 AAP guidelines provide clinicians with specific criteria to accurately diagnose and treat periodontal disease, reducing the risk of legal action. Historically, clinicians have used probing depths, recession, and radiographs to determine the patient's periodontal diagnosis. The updated 2017 periodontal classification guidelines base a patient's periodontal stage on the severity, complexity, extent, and distribution of the measurable amount of destroyed tissue. Additions to the AAP guidelines include separate categories for gingival health and periodontal disease involving implants, as well as systemic health as a determining factor of periodontal diagnosis and prognosis. The intention of the new periodontal classification system is to assess specific factors that may contribute to the complexity of long-term case management. Adherence to the 2017 guidelines will result in improved patient outcomes and a reduced risk of litigation for clinicians. The changes and additions made to the AAP classification guidelines enable a more accurate diagnosis for every patient type by providing a more specific assessment of the overall health of the periodontium. The 2017 AAP classification guidelines now address conditions that were previously overlooked and allow for recognition of a healthy patient. The new 2017 AAP classification guidelines provide for a more accurate overall assessment, diagnosis, and treatment of periodontal disease.

Item
The 21st Century Energy Transition of Individual Countries (2015-09-28)
Stearns, Laura N; Jones, Glenn A
Fossil fuel reserves are finite, projected to peak by mid-century and decline thereafter, yet global demand for energy is set to increase more than 50% by 2030. It is therefore crucial for all nations to transition to largely renewable energy sources by mid-century and beyond. To gauge the extent of this problem, we established three scenarios of projected energy demand throughout the 21st century: one in which only a country's population changes; a second in which countries reach 100% access to electricity by 2030, as proposed by the World Bank; and a third wherein countries achieve a ranking of "high" on the Human Development Index (HDI) by 2100 (i.e., using about 110 GJ of energy per capita per annum). Underdeveloped countries will struggle to provide even basic amounts of energy to their rapidly growing populations. For example, Nigeria will have to triple the amount of energy used by 2030 to provide energy for all, and increase it 14-fold by 2100 to improve its HDI. Developing countries (e.g. Venezuela) already have 100% access to energy, but will still deplete their fossil fuel reserves before 2100, while many developed countries (e.g. the United States) use disproportionate amounts of energy. Here we find that renewable energy expansion will not only be critical for all countries to maintain their current levels of energy usage, but will be crucial for underdeveloped countries, such as those in sub-Saharan Africa, if they are to develop through the 21st century while facing skyrocketing population increases and insufficient fossil-fuel reserves.
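The per-capita scenarios above lend themselves to a simple back-of-the-envelope check. The Python sketch below illustrates the arithmetic only; the population figure is an illustrative placeholder, and the ~110 GJ per-capita value is the one number taken from the abstract.

```python
# Minimal sketch of the scenario arithmetic described above.
# Population figures below are illustrative placeholders, not data from the thesis.

HIGH_HDI_PER_CAPITA_GJ = 110  # ~110 GJ per person per year, as cited in the abstract

def national_demand_ej(population: float, per_capita_gj: float) -> float:
    """Total annual energy demand in exajoules (1 EJ = 1e9 GJ)."""
    return population * per_capita_gj / 1e9

# Hypothetical example: a country of 400 million people reaching the "high HDI" level.
demand_2100 = national_demand_ej(400e6, HIGH_HDI_PER_CAPITA_GJ)
print(f"Projected demand: {demand_2100:.1f} EJ/yr")  # -> 44.0 EJ/yr
```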
Item
A 300 Years Record of Polycyclic Aromatic Hydrocarbons in Lake Botanisk, Copenhagen: A Historical Reconstruction of Combustion Processes in a Scandinavian Urban Lake (2013-02-04)
Kopp, Kendra N. Dean; Louchouarn, Patrick
Lake Botanisk, a small isolated body of water in Copenhagen, Denmark, has remained relatively undisturbed for four centuries, making its sediments an excellent historical archive of past deposition rates of atmospheric contaminants. The concentrations and composition of polycyclic aromatic hydrocarbons (PAHs) measured in a sediment core of Lake Botanisk assisted in reconstructing the historical combustion activities for that region. Source diagnostic ratios indicate that PAHs were primarily derived from pyrogenic rather than petrogenic sources throughout the entire core. Marked increases in PAH concentrations during the pre-industrial era (<1860) trace major geopolitical events of the period (e.g. the bombing of Copenhagen by the British Navy in the late 1700s). A significant rise in combustion-derived PAHs was observed with the start of the industrial revolution, which corresponds to the start of coal imports in Denmark (1860s). The ratio of Retene/(Retene+Chrysene) demonstrates that until ~1860, Copenhagen's combustion sources were dominated by wood burning. The shift to coal consumption starting in 1860 leads to simultaneous increases in pyrogenic PAHs and isomer ratios (Benzo[b]fluoranthene/Benzo[k]fluoranthene) typical of coal usage. Variations in PAH concentrations and ratios during the 20th Century track the shifts in energy sources (coal to oil, oil to natural gas), major political events such as the oil embargo of the 1970s (oil back to coal), as well as the implementation of air quality standards and improvements in combustion technologies in recent decades (>1980s). In spite of significant decreases in PAH concentrations since the early-20th century peak, levels still remain 10-fold above preindustrial values in recent decades, suggesting an impact from the growth in urban development.
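For readers unfamiliar with source-diagnostic ratios, the quotient named above is a simple function of measured concentrations. The sketch below uses invented concentration values purely for illustration; only the ratio itself comes from the abstract.

```python
# Illustrative computation of a PAH source-diagnostic ratio.
# Concentrations (ng/g dry sediment) are invented placeholders, not thesis data.

def retene_ratio(retene: float, chrysene: float) -> float:
    """Retene / (Retene + Chrysene); higher values were read as wood-burning dominance above."""
    return retene / (retene + chrysene)

print(retene_ratio(retene=85.0, chrysene=15.0))  # 0.85 -> wood-burning dominated
print(retene_ratio(retene=10.0, chrysene=90.0))  # 0.10 -> fossil-fuel dominated
```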
Item
3D Acinar Culture for Imaging Acinar Dynamics
Leach, Kathryn A; Lele, Tanmay P
Epithelial cells line surfaces of the human body. Since they define the boundary between internal and external environments, these cells maintain a polarity, created by localizing proteins at opposite ends of each cell, in order to facilitate transport of substances between these environments. Basal proteins, which face the external environment, and apical proteins, which face the internal environment, define this polarity. When grown in vitro using a 3D culture, epithelial cells group together to form hollow, fluid-filled spheroids of cells called acini. These acini reflect in vivo polarity, with basal proteins facing the 3D matrix and the apical proteins facing the fluid-filled interior. In some cancers, the polarity of epithelial cells becomes inverted, which may play a role in cancer metastasis. Previous research projects investigated the process responsible for everting the acini and found a variety of mechanisms. This research project builds on the past discovery of using RhoA activation to initiate acinar eversion. Activating RhoA increases the activity of myosin, resulting in higher contractility in the cell. This higher contractility was found to disrupt the mechanical equilibrium of the acini, resulting in breakage, breach, and eversion of the acini's polarity as they collapse and flip inside-out. To further investigate this mechanism involving RhoA, this project documents the procedures to form and image acini using previously untested cell lines made from cancerous lung epithelial cells: 344SQ_shCTRL, 344SQ_shZEB1, and 393P_ZEB1. It was found that 344SQ_shCTRL and 344SQ_ZEB1 acini formed lumens. In making the acini, changes to the protocol included adjusting methods for avoiding imaging complications with thick Matrigel layers, shortening the time period allotted for lumen development, and increasing the concentration of RhoA activator used in experimentation. Initial experimental and fixed-staining results demonstrate the exciting opportunity for many future research projects utilizing acini, such as investigating lumen nuclei orientation, quantifying the factors impacting lumen development, and continuing investigation of acinar eversion.

Item
3D Additive Metal Extrusion Manufacturing of 316L Stainless Steel (2022-04-20)
Ghauri, Hamza; Mahfouz, Ahmed Nabil; Mansoor, Bilal
The primary objective of our research is to analyze the properties of parts produced by microwave sintering and conventional thermal sintering on 316L Stainless Steel samples produced by way of Fused Filament Fabrication (FFF) 3D printing. FFF is a novel additive manufacturing technique that promises great reductions in inaccessibility, production costs, and operational training over more established metal additive manufacturing methods such as Metal Injection Molding (MIM), Electron Beam Melting (EBM), and Selective Laser Sintering (SLS). FFF-printed parts require a complex heat treatment process to separate the metal particles from the polymeric binder in which they are embedded and to subsequently fuse together the metal particles. This work investigates some sintering options suited to FFF metal printing and the parameters thereof with respect to the quality of the final sintered piece. Some early studies have shown that, with the proper set of sintering parameters, the mechanical properties, density, and porosity levels of the sintered part can be on par with those of parts manufactured by conventional, more costly metallurgical methods. In the present study, various sintering parameters are investigated to optimize the quality of the final part by way of both conventional thermal sintering and microwave sintering. The properties of the material in response to the two sintering methods, varying sintering temperatures, and sintering environments are investigated. Results are presented on surface finish, cross-sectional appearance and microstructure, internal porosity, material hardness, compression testing, and elemental composition and distribution analysis.

Item
3D Bioprinting of Hydrogel Microparticles Through Conversion of Dual-Extrusion Bioprinters (2022-04-26)
Thomas, Jeremy Lee; Gaharwar, Akhilesh
Over the past decade, additive manufacturing has resulted in significant advances towards fabricating anatomic-size, patient-specific scaffolds for tissue models and regenerative medicine. This can be attributed to the development of advanced bioinks capable of precise deposition of cells and biomaterials.
The combination of additive manufacturing with advanced bioinks is enabling researchers to fabricate intricate tissue scaffolds that recreate the complex spatial distributions of cells and bioactive cues found in the human body. However, the expansion of this promising technique has been hampered by the high cost of commercially available bioprinters and proprietary software. In contrast, conventional 3D printing has become increasingly popular with home hobbyists and caused an explosion of both low-cost thermoplastic 3D printers and open-source software to control them. In this thesis, we bring these benefits into the field of bioprinting by converting widely available and cost-effective 3D printers into fully functional, open-source, and customizable multi-head bioprinters. We demonstrate the practicality of this approach by designing bioprinters customized with multiple extruders, automatic bed leveling, and temperature controls for approximately $400. Type-1 diabetes (T1D) is a chronic condition in which the pancreas produces little or no insulin, caused by the destruction of the insulin-producing beta cells of the pancreas. Traditional strategies for treating type-1 diabetes, involving beta-cell transplantation or delivery, have shown mixed results due to loss of cell viability and decreased efficacy. Three-dimensional (3D) bioprinting of microgel bioinks has shown potential to circumvent some of these challenges and meet the needs for tissue engineering and cell delivery. The converted bioprinters in this thesis make 3D bioprinting a scalable solution to treating chronic diseases like T1D.

Item
3D c-AFM Imaging of Conductive Filaments in HfO2 Resistive Switching Devices
Hammock, Sarah Mireille; Shamberger, Patrick
Resistive switching in metal-insulator-metal devices is promising as an alternative to flash memory. The change in resistance from insulating to conducting and back again is theoretically caused by the formation and partial destruction of conductive filaments composed of a metal or oxide structure. These filaments have a significant effect on device performance and stability. However, due to their small size (nm range) and location within the device, the filaments are difficult to study directly. Various techniques such as TEM, SEM, and c-AFM have been used to obtain 2D or 3D images of the filaments in an attempt to determine their morphology, conductivity, and composition. In this study, AFM and c-AFM were used to investigate the topography and local conductivity of the oxide layer and construct a 3D image of a conductive filament in formed p+Si|HfO2|Cu devices. Damage to the oxide layer was found to vary with both oxide crystallinity and forming voltage, conductive regions were found to be associated with the damaged areas, and the 3D data collected for the filament revealed an hourglass morphology.

Item
3D Object Segmentation with Quasi Solid-state LiDAR
Sun, Yifan; Song, Dezhen
Currently, most 3D semantic segmentation models for autonomous driving are trained on spinning Light Detection And Ranging (LiDAR) data, because spinning LiDAR sensors have been one of the most popular sensors for autonomous driving vehicles and there is an abundance of spinning LiDAR datasets available to the public. However, spinning LiDAR sensors are costly and require large amounts of energy to operate. The newly emerged quasi solid-state LiDAR sensors are more cost-efficient and require less energy to operate on autonomous driving vehicles. If we reuse the current models pretrained with spinning LiDAR data on quasi solid-state LiDAR data, their performance is below expectations. Currently there are not enough quasi solid-state LiDAR data to train 3D semantic segmentation deep learning models effectively, and the data pattern for quasi solid-state LiDAR differs substantially from that of spinning LiDAR data. This research first develops a visualization tool and evaluates the existing 3D semantic segmentation models that are pretrained with spinning LiDAR data on some small-scale quasi solid-state LiDAR data. The performance of the models falls below expectations, which calls for retraining the 3D semantic segmentation model on quasi solid-state LiDAR data. The model chosen is SPVNAS. Since there are few publicly available large-scale quasi solid-state LiDAR datasets, several approaches are taken in parallel to synthesize a suitable dataset for training. The final approach chosen was rich-point subsampling, which takes a reconstructed scene of a dense point cloud, filters out the points that are not in the vehicle's field of view, and samples the remaining points according to the data pattern of quasi solid-state LiDAR data. The processed data is fed into the SPVNAS model for training, validation, and testing. This final model for 3D semantic segmentation is the main product of the research.
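The field-of-view filtering step described above can be sketched as follows. The FOV angles, maximum range, and coordinate convention are illustrative assumptions, not the parameters used in the research.

```python
# Illustrative sketch of the field-of-view filtering step described above.
# The FOV limits and range are placeholders, not the parameters used in the research.
import numpy as np

def filter_fov(points: np.ndarray,
               h_fov_deg: float = 120.0,
               v_fov_deg: float = 25.0,
               max_range_m: float = 150.0) -> np.ndarray:
    """Keep points of an (N, 3) xyz cloud that fall inside a forward-facing FOV."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points, axis=1)
    azimuth = np.degrees(np.arctan2(y, x))                  # horizontal angle from +x (forward)
    elevation = np.degrees(np.arctan2(z, np.hypot(x, y)))   # vertical angle
    mask = (np.abs(azimuth) <= h_fov_deg / 2) & \
           (np.abs(elevation) <= v_fov_deg / 2) & \
           (rng <= max_range_m)
    return points[mask]

# Example with a random cloud; a real pipeline would then subsample the result
# to match the scan pattern of the quasi solid-state sensor.
cloud = np.random.uniform(-100, 100, size=(100000, 3))
print(filter_fov(cloud).shape)
```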
Item
3D Printing Path Reallocation for Concurrent IDEX Systems
Gautreaux, Spencer T; Shell, Dylan
3D printing is a growing field of interest, with research topics and commercial advancements in materials, processes, and systems. One of these advancements is the introduction of Independent Dual Extrusion (IDEX) Fused Deposition Modeling (FDM) printers in both the enterprise and consumer space. The unique feature of these printers is their dual extruders, which allow them to use multiple materials to create a printed part. These two extruders, in collaboration with two hotends, are responsible for the controlled deposition of material. In present systems, only one hotend can operate on the part at a time. However, as implied by the name, the hotends can be positioned independently. Therefore, the ability to utilize the two hotends concurrently could significantly reduce print time, a behavior not presently available. In this document we develop an algorithm to enable Collaborative Dual Extrusion (CODEX) printing, a model in which both hotends can be utilized simultaneously on one part. To do so we outline a two-phase greedy algorithm for transforming an input GCode file, intended for a traditional FDM printer, into one that could be utilized on IDEX printers. This algorithm exploits the sequential nature of GCode to find large runs of concurrently printable segments. These runs are then linked to produce output paths. Approximately 13,500 publicly available GCode files are utilized to test and validate the algorithm across three different conceptual models for IDEX printers. The first model provides a theoretical maximum upper bound on efficiency. The second represents a mechanically feasible model. The final model simulates those IDEX printers available today. We show an approximate 24% and 20% improvement for the first two models, and a 9% deterioration on the final model. The document concludes with a discussion of possible improvements and directions for future work.
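As a rough illustration of the run-finding idea (not the thesis's actual two-phase algorithm), the sketch below greedily groups consecutive segments whose extrusion moves stay far enough apart for two heads to work concurrently. The 1D segment representation and the minimum head separation are assumptions made for this sketch.

```python
# A minimal sketch of the run-finding idea: scan segments in order and grow a run
# while consecutive pairs could be printed by the two heads without colliding.
# The segment representation and the minimum head separation are assumptions,
# not the representation used in the thesis.
from typing import List, Tuple

Segment = Tuple[float, float]  # (x_start, x_end) of one extrusion move, simplified to 1D

def find_concurrent_runs(segments: List[Segment], min_head_gap: float) -> List[List[int]]:
    """Group consecutive segment indices into runs whose neighbours stay min_head_gap apart in X."""
    runs, current = [], [0]
    for i in range(1, len(segments)):
        prev, cur = segments[i - 1], segments[i]
        # Two heads could work on prev and cur at once if their X spans never get too close.
        gap = min(cur) - max(prev) if min(cur) >= max(prev) else min(prev) - max(cur)
        if gap >= min_head_gap:
            current.append(i)
        else:
            runs.append(current)
            current = [i]
    runs.append(current)
    return runs

segs = [(0, 10), (60, 70), (120, 130), (125, 140)]
print(find_concurrent_runs(segs, min_head_gap=40.0))  # [[0, 1, 2], [3]]
```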
Item
6+9+12: EVERYONE CAN BE TAUGHT TO DELIBERATELY FAKE A SMILE WITH THE "GENUINE ENJOYMENT DUCHENNE MARKER" (2012-07-11)
Shlomo, Benjamin; Balfour, Stephen
It is generally believed that people cannot deliberately contract the orbicularis oculi pars orbitalis ("Action Unit 6," "Duchenne Marker" or "Cheek Raiser") muscle together with the zygomaticus major ("AU12" or "smile" muscle) unless they experience genuine happiness. This experiment tested the theory that both muscles can be activated together voluntarily if one first contracts the levator labii superioris alaeque nasi ("AU9" or "nose wrinkler") muscle. Subjects in isolation followed a set of written and picture instructions to capture themselves posing different facial expressions. The images were then coded for muscular contraction intensity by two people trained in the Facial Action Coding System but unfamiliar with the goals of this study. The results question the use of observing simultaneous orbicularis oculi pars orbitalis and zygomaticus major activation as a sufficient measure of positive affect in various psychological studies of emotion.

Item
A Case Study of an Education Study Abroad Program in Italy (2019-04-26)
Crawford, Sarah; Vela, Jasmine M; Woodson, Laura M; Neshyba, Monica
As discussed in "Towards a Research Agenda for U.S. Education Abroad" (Ogden, 2015) and in other studies on transformative learning, research can be used to encourage the proliferation and continuation of study abroad programs, particularly for preservice teachers. This research study seeks to address how students define transformative experiences within the Italy Education study abroad program in the College of Education and Human Development. This study employs Mezirow's transformative learning theory, critical reflection, and high-impact educational practices. This study is centered on determining what qualifies an education study abroad program as transformative for students. By completing a case study of a study abroad program, we intend to analyze the components of the trip and see if they contribute to a transformative experience.

Item
A Common Automation Framework for Cyber-Physical Power System Defense
Cope, Ethan; Davis, Katherine
As the power grid becomes more complex and integrated with communication networks, cyber-attacks become an increasingly relevant threat. CYPRES (Cyber Physical Resilient Energy Systems) is a system for simulating cyber and physical attacks on a power grid by comprehensively modeling a cyber-physical hardware-in-the-loop power system. However, the CYPRES system is tedious to spin up, as multiple sub-systems of the framework involve interdependencies across different applications. This paper details a common automation framework for the CYPRES system that both removes this manual overhead from running the system and drastically improves CYPRES' initialization speed. With Jenkins, a continuous integration/continuous development tool, the author has established this automation infrastructure, RESAuto, for the CYPRES system and demonstrated the benefits and effectiveness of automating manually intensive sub-system operations with much simpler operations and less time consumption. This automation system makes it much easier to test more complex threat scenarios on larger, more realistic systems that would otherwise be manually intensive.
In addition to this, RESAuto also provides a real-time dashboard that shows simulation status and network traffic, allowing users to monitor RESLab's distributed computing resources at a glance.

Item
A Comparison of MuDST and PicoDST Data Analysis Methods
Hall, Seth Carlin; Mioduszewski, Saskia
For STAR-experiment-associated research at Brookhaven National Laboratory, the primary objective is the study and characterization of the Quark-Gluon Plasma, or QGP. The data is stored in subsets of the original, full "Data Summary Tape" or DST format. The PicoDST format is the new, more compact way in which all data from particle collider experiments is stored. Because this data format is relatively new, many of the existing data analysis scripts written by the TAMU group to run on root4star (the modified ROOT software package used for the STAR experiment) work only with MuDST-type files, the older format. While extensive documentation exists for both the PicoDST and MuDST formats, not every aspect is exactly analogous, and therefore methods to convert older MuDST scripts to perform the same analytical functions with PicoDST files must be created. In this paper, the differences between the PicoDST and MuDST formats will be considered, as well as the differences between algorithms written to accomplish the same task with either format. In addition, a significantly higher time-efficiency is demonstrated for a particular PicoDST algorithm when measured against its MuDST counterpart.

Item
A Computer Vision and Maps Aided Tool for Campus Navigation
Hall, Alexander James; Sueda, Shinjiro
Current study abroad trips rely on students utilizing GPS directions and digital maps for navigation. While GPS-based navigation may be more straightforward and easier for some to use than traditional paper maps, studies have shown that GPS-based navigation may be associated with disengagement from the environment, hindering the development of spatial knowledge and of a mental representation or cognitive map of the area. If one of the outcomes of a study abroad trip is not only to navigate to the location, but also to learn about important features such as urban configurations and architectural style, then there needs to be a better solution than students only following GPS directions. This research explores one such solution: a new feature within wayfinding mobile applications that emphasizes engagement with landmarks during navigation. This feature, powered by computer vision, was integrated into a newly developed wayfinding mobile application and allows one to take pictures of various Texas A&M University buildings and retrieve information about them. Following the development of the mobile application, a user study was conducted to determine the effects of the presence or absence of this building recognition feature and GPS-based navigation on spatial cognition and cognitive mapping performance. Additionally, the study explores the wayfinding accuracy performance of the building recognition feature and GPS-based navigation compared with traditional paper maps. This paper includes preliminary results, where it was found that groups without GPS-based navigation took longer routes to find destinations than those with GPS-based navigation. It was also found that cognitive mapping performance improved for all participants when identifying destination buildings.
Final data collection and analysis are planned for April 2022.

Item
A Dataset for Training and Testing Rendering Methods
Bezanson, Katherine Sherran; Kalantari, Nima K
Thorough physically-based rendering is a computationally taxing and time-consuming process. High-quality rendering requires a significant amount of memory and time to calculate the trajectories and colors of each ray that goes into a pixel, especially as scenes become more complex and more computations are required for clear renders. Each ray used to calculate the color of a pixel requires a large number of calculations to accurately represent the red-green-blue value, taking into account not only the material of the object the ray hits but also the surrounding objects and lighting. In a bid to implement this process, Monte Carlo rendering was developed as an algorithm to flexibly and realistically render an image from a three-dimensional scene. It is now a common method of rendering, but it leaves significant flaws in the image if used to truly speed up the process, due to the use of random sampling to determine which rays are created and to approximate the pixel values. These flaws appear similar to TV static and are called noise. To keep this faster use of the algorithm while continuing to produce high-quality renders, denoising algorithms were developed to take the noisy images rendered by the Monte Carlo algorithm and scrub the noise to recreate a clean version of the render. To be efficient, these denoising algorithms need to avoid using so much memory and time that they fully offset the time saved by the low-sample rendering method. This is very difficult, as the accuracy in a render originally came from careful and thorough calculations for each pixel, and so these algorithms are forced to develop an alternate method to fill in the gaps using the approximations made in the noisy image. As this occurred, denoising algorithms evolved to use increasingly elaborate neural networks to negate issues in accuracy and clarity found in previous methods. These neural networks, though essential to faster and cheaper rendering, require extensive training from currently limited datasets. This research aims to expand the pool of data available to test denoising algorithms on renders created from three-dimensional scenes, as well as to test its effectiveness in training current denoising algorithms in conjunction with the current data. The accuracy of the final tests was measured using the mean square error and the peak signal-to-noise ratio, both commonly utilized to objectively evaluate the difference between the control image and the output of the algorithm. It was found that the new data, when used in combination with the current datasets, was effective in improving the results of these algorithms. This supports the idea that larger, and more importantly more diverse, data with distinct characteristics are beneficial for creating increasingly effective denoising algorithms. This is especially true as renderers become more efficient and methods of expressing sundry real-world visual phenomena become more accurate. With those improvements, more robust denoising neural networks will be necessary to create professional-appearing renders. An extended dataset will allow for neural networks to be trained as accurately as possible to quickly and accurately create renders that can be used for high-importance products, such as frames of final versions of movies for animation studios.
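The two accuracy measures named above are standard. A minimal sketch of how they could be computed between a control render and a denoised output is shown below; the array shapes and pixel range are chosen purely for illustration.

```python
# Sketch of the two image-quality measures named above, computed between a
# reference render and a denoised output. Array shapes/ranges are assumptions.
import numpy as np

def mse(reference: np.ndarray, output: np.ndarray) -> float:
    """Mean squared error over all pixels and channels."""
    return float(np.mean((reference.astype(np.float64) - output.astype(np.float64)) ** 2))

def psnr(reference: np.ndarray, output: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = mse(reference, output)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)

ref = np.random.rand(256, 256, 3)                  # stand-in for a converged "control" render
out = ref + np.random.normal(0, 0.01, ref.shape)   # stand-in for a denoised result
print(mse(ref, out), psnr(ref, out))
```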
Item
A Flexible Tether Management Model for Heterogeneous Marsupial Robot Systems
Duffy, Carson Wayne; O'Kane, Jason
Heterogeneous marsupial robotic systems are systems composed of two or more robots that collaborate and leverage the strengths of each other to complete missions. These systems consist of one dispensing agent that provides resources to one or more passenger agents. To exchange these resources, whether in the form of power or data, there must be a physical connection between the dispensing agent and each passenger agent, known as a tether. Tethers enable passenger agents to use large power supplies housed on the dispensing agent, which would otherwise be impractical or impossible to house on the passenger agents themselves. Additionally, tethers can provide bidirectional data communication between the dispensing agent and the passenger agent such that the sensing and computation capabilities of each component can be used in one system. Marsupial robot systems consisting of unmanned surface vehicles (USVs) and unmanned aerial vehicles (UAVs) are effective systems to dispatch in marine environments. These systems use the sensing capabilities of the UAV to explore more of an environment and the power capabilities of the USV to lengthen UAV flight times. The tether connected from the USV to the UAV changes length as the UAV changes positions. Maintaining a proper tether length is crucial to system efficacy. If the tether is too long, it is prone to catching on obstacles in the environment. If it is too short, it may limit the mobility of the UAV. Researchers have explored ways to maintain proper tether length, as well as the potential benefits of prioritizing slackness or tautness in the tension of the tether and the trade-offs that arise from either design choice. This paper proposes a tether management system that prioritizes consistency, reliability, and flexibility by meticulously maintaining a proper tether length while allowing for control over tether slackness and spool reactivity. It seeks to implement aspects of both the slacked and taut models. This system was implemented within a heterogeneous marsupial robot system, with its dispensing agent represented as a ground station and its passenger agent represented by the Duckiebot DB21J UGV. The hardware decisions and modifications for the spool, ground station, and DB21J are discussed in this paper, as well as the overall ROS workspace structure designed to maximize efficiency and data transfer. After implementation, the tether management system was tested by running a series of trials with varying spool control and tether model parameter values, specifically slackness and control gain. In each of these trials, the DB21J drove in a square path in the environment, and the response of the spool was recorded. The data was used to determine the consistency of the system and its response to varying parameters. The conclusion drawn from observing the behavior of the tether management system is that it achieves consistency and flexibility in spool control and tether length. During trials, slackness and control gain were manually set. In the future, these values are meant to be modified based on environmental factors.
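A minimal sketch of the kind of length tracking described above, assuming a straight-line tether model, a configurable slack margin, and a simple proportional spool command; the function and parameter names are illustrative, not those of the actual ROS implementation.

```python
# A minimal sketch of tracking tether length with a slack margin and a
# proportional spool command. Parameter names and values are assumptions for
# illustration, not those used in the implementation described above.
import math

def desired_tether_length(dispenser_xy, passenger_xy, slackness: float) -> float:
    """Straight-line distance between agents plus a configurable slack margin (meters)."""
    dx = passenger_xy[0] - dispenser_xy[0]
    dy = passenger_xy[1] - dispenser_xy[1]
    return math.hypot(dx, dy) + slackness

def spool_command(current_payout: float, target_payout: float, control_gain: float) -> float:
    """Proportional command: positive pays out tether, negative reels it in."""
    return control_gain * (target_payout - current_payout)

target = desired_tether_length((0.0, 0.0), (3.0, 4.0), slackness=0.5)              # 5.5 m
print(spool_command(current_payout=5.0, target_payout=target, control_gain=1.2))   # 0.6
```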
Item
A Fully Implantable, Miniaturized RFID Platform for Neurosurgical Biomedical Devices (2018-04-23)
Pineda, Sergio Sebastian; Park, Sung I
Hydrocephalus occurs when excessive quantities of cerebrospinal fluid (CSF) accumulate in the ventricles. Current treatment of the condition involves implanting a ventricular shunt, composed of an inflow catheter originating at the site of the obstructed ventricle, a valve, and an outflow catheter that drains the excess fluid into the peritoneal cavity, where it can be safely reabsorbed or excreted. This method of treatment is crude and subject to many complications, including but not limited to infection, blockage, and over-draining. Therefore, the flow of CSF out of the ventricles and into the abdomen must be carefully monitored. Unfortunately, there is currently no effective and non-invasive means of doing so. Here we present the design and partial integration of a prototype for a wireless, fully implantable, miniaturized RFID-based device for monitoring and recording the functional state of a ventricular shunt. The purpose of this device is to provide a minimally invasive and robust method for digitally interrogating shunt function. Such a platform would, ideally, allow for real-time, wireless monitoring which will serve to inform the patient and their care providers of abnormalities in shunt performance and allow them to take the appropriate measures before further complications can occur.

Item
A Genetic Screen for High-Copy Suppressors of the Growth Defect of Saccharomyces cerevisiae set1 Null Mutants Under Histidine Starvation Conditions (2018-04-19)
Gable, Morgan Danielle; Bryk, Mary
Previous research indicates that Set1 is the catalytically active protein in COMPASS, a protein methyltransferase complex associated with transcription in budding yeast cells. However, the mechanistic role that Set1 and COMPASS play in the regulation of transcription remains poorly characterized. Current research in the Bryk lab indicates that mono-methylation of histone H3 on lysine 4 (K4) is required for 3-aminotriazole-induced transcription of the HIS3 gene by RNA polymerase II (Pol II). The research shows that yeast cells lacking a functional SET1 gene (containing null alleles, either set1Δ or set1-Y967A) grow poorly on medium lacking histidine and containing 3-aminotriazole (3-AT). Overexpression screens are being performed to identify genes that suppress the growth defect of set1 null mutants. Genes, when over-expressed, are expected to either bypass the need for Set1 or replace Set1 function through interaction with a non-functional Set1 complex. Studying genetic suppressors may uncover clues to the role of SET1 in the Pol II transcription mechanism, providing new information on transcription, a ubiquitous vital process in prokaryotic and eukaryotic cells.

Item
A Linguistic Analysis to Quantify Over-Explanation and Under-Explanation in Job Interviews
Myscich, Albin Kyle; Chaspari, Theodora
Gaining insight into the thoughts and feelings of a recruiter is vital to understanding effective job interviews. To ascertain categorical responses and speech patterns, audio and visual data were collected from mock job interviews between interviewees and company representatives. From the study, features extracted from the audio and visual data were compiled.
As a result, several approaches involving deep learning were leveraged to infer the probability that a snippet of text is over-explained or under-explained.

Item
A Metric for Intrinsic Motivation in Reinforcement Learning Agents (2022-04-18)
Alam, Yasin Ferdous; Choe, Yoonsuck
Classically, the reward for an agent is given by extrinsic factors which motivate the agent to improve and learn; however, an active area of research within cognitive science and AI is the effect and necessity of intrinsic motivation for an agent. This can manifest itself in many forms, from curiosity to reduction of cognitive dissonance to motivation for effectance. Despite the prevalence and perceived importance of intrinsic motivation, there is no metric to measure "how" intrinsically motivated an agent is compared to another; such comparisons are instead largely empirical. Furthermore, methods that might be stated to be intrinsically motivated can be directly linked to the environment and thus might be less intrinsically motivated than thought. Thus, this thesis presents a general metric for intrinsically motivated agents to suggest that highly intrinsically motivated agents are more robust than less intrinsically motivated agents. First, an overview and review of reinforcement learning and intrinsic motivation is presented. Following this, a general metric is proposed with empirical and mathematical justification to measure the intrinsic motivation of an agent. Lastly, several intrinsically motivated agents are tested to evaluate the metric and compare the relative performance of the agents.
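For context on the terminology (and explicitly not the metric proposed in this thesis), the sketch below shows one standard way an intrinsic reward is formed in reinforcement learning: a curiosity bonus from forward-model prediction error, added to the extrinsic reward with a weighting term. The toy linear forward model and all shapes are assumptions for illustration.

```python
# For context only: a standard curiosity-style intrinsic reward (forward-model
# prediction error), NOT the metric proposed in the thesis. The toy forward model
# and state shapes are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))  # toy linear forward model: predicts next state from state

def intrinsic_reward(state: np.ndarray, next_state: np.ndarray) -> float:
    """Curiosity bonus = squared error of the forward model's next-state prediction."""
    predicted = W @ state
    return float(np.sum((predicted - next_state) ** 2))

def total_reward(extrinsic: float, state, next_state, beta: float = 0.1) -> float:
    """Combine the environment reward with a weighted intrinsic bonus."""
    return extrinsic + beta * intrinsic_reward(state, next_state)

s, s_next = rng.normal(size=4), rng.normal(size=4)
print(total_reward(extrinsic=1.0, state=s, next_state=s_next))
```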