Show simple item record

dc.contributor.advisor: Yan, Wei
dc.creator: Alawadhi, Mohammad S A A M
dc.date.accessioned: 2023-05-26T18:03:19Z
dc.date.created: 2022-08
dc.date.issued: 2022-07-28
dc.date.submitted: August 2022
dc.identifier.uri: https://hdl.handle.net/1969.1/197969
dc.description.abstract: This research finds that virtual buildings automatically generated through parametric building information modeling (BIM) can provide high-quality training data for teaching an artificial intelligence (AI) to recognize building objects in the real world. Recent developments in AI through deep learning offer a new paradigm and new opportunities for the field of architecture, one of which is training deep artificial neural networks (ANNs) to visually understand the built environment. Teaching AI machines to detect building objects in photos and videos is the foundation for AI-assisted 3D reconstruction of existing buildings, which would be useful for applications including site surveys, construction documentation, and performance modeling for green architecture. However, acquiring enough training data for machine learning remains a challenge: such data is typically curated and annotated manually, which is time-consuming, unless a machine can generate high-quality data to train itself for a given task. In that vein, this research trained ANNs solely on realistic images of 3D building information models that were parametrically and automatically generated. Synthetic data generation methods provide virtually unlimited training data for deep learning. This research investigated a hybrid methodology, combining BIM and photorealistic rendering, for synthesizing training datasets for object recognition in photos instead of manually labeling data. The application of this methodology was first studied with photogrammetry for inferring BIM models from survey photos. Then, a parametric BIM framework was explored to automatically generate 3D BIM models for training ANNs to recognize real-world building objects. Finally, experiments used the resulting synthetic datasets to train several state-of-the-art ANNs, which were tested on real-world photos.
The outcomes showed that BIM and photorealistic rendering can generate high-quality training data, and that ANNs trained on this synthetic data can identify building objects even though no photographs were included in the training data. The testing results demonstrated good semantic segmentation evaluation scores for a parametric BIM-trained ANN, which achieved 89.64% average accuracy and 0.517 mean intersection-over-union (mIoU) on a test case. It also achieved over 80% accuracy and over 0.5 mIoU on both hand-picked and randomly sampled sets of arbitrary photos. The results demonstrated generalizability to real-world photos of buildings, which is significant for the future of training AI with generated data to solve real-world architectural problems.
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.subject: artificial intelligence (AI)
dc.subject: building information modeling (BIM)
dc.subject: computer-generated imagery (CGI)
dc.subject: deep learning
dc.subject: neural network
dc.subject: parametric modeling
dc.subject: photorealistic rendering
dc.title: Teaching an Artificial Intelligence with Generated Virtual Buildings for Real-World Recognition
dc.type: Thesis
thesis.degree.department: Architecture
thesis.degree.discipline: Architecture
thesis.degree.grantor: Texas A&M University
thesis.degree.name: Doctor of Philosophy
thesis.degree.level: Doctoral
dc.contributor.committeeMember: Clayton, Mark J.
dc.contributor.committeeMember: Caffey, Stephen
dc.contributor.committeeMember: Jiang, Anxiao
dc.type.material: text
dc.date.updated: 2023-05-26T18:03:19Z
local.embargo.terms: 2024-08-01
local.embargo.lift: 2024-08-01
local.etdauthor.orcid: 0000-0002-1582-6652
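
The abstract above reports semantic segmentation results in terms of mean intersection-over-union (mIoU) and average accuracy. For readers unfamiliar with the metric, the following is a minimal, generic sketch of how per-class IoU and mIoU are typically computed from predicted and ground-truth label maps; it is illustrative only and is not the evaluation code used in the thesis.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes.

    Generic textbook definition: for each class, IoU = |pred ∩ target| / |pred ∪ target|,
    averaged over classes present in either map. Not the author's evaluation code.
    """
    ious = []
    for c in range(num_classes):
        pred_c = (pred == c)
        target_c = (target == c)
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class absent from both maps; skip it
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

# Toy example: 2-class segmentation maps (e.g., background vs. window)
pred = np.array([[0, 0, 1],
                 [1, 1, 1]])
target = np.array([[0, 1, 1],
                   [1, 1, 0]])
# class 0: intersection 1, union 3 -> 1/3; class 1: intersection 3, union 5 -> 3/5
print(mean_iou(pred, target, 2))  # ~0.467
```

An mIoU of 0.517, as reported for the parametric BIM-trained network, means the predicted building-object regions overlap ground truth by just over half on average across classes, which is a meaningful result for a model trained without any real photographs.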

