Optimal replacement policy for a Partially Observable Markov Decision Process model
dc.contributor.advisor | Feldman, Richard M. | |
dc.creator | Kim, Chang Eun | |
dc.date.accessioned | 2020-09-02T21:04:00Z | |
dc.date.available | 2020-09-02T21:04:00Z | |
dc.date.issued | 1986 | |
dc.identifier.uri | https://hdl.handle.net/1969.1/DISSERTATIONS-19854 | |
dc.description | Typescript (photocopy). | en |
dc.description.abstract | This study examines the control of deterioration processes for which only incomplete state information is available. When the deterioration is governed by a Markov process, such problems are modeled as Partially Observable Markov Decision Processes (POMDPs), which remove the assumption that the state, or level of deterioration, of the system is known exactly. This research investigates a two-state partially observable Markov chain in which only deterioration can occur and for which the only available actions are to replace the system or to leave it alone. The goal of this research is to develop optimal replacement policies under a new approach that has the potential for solving other problems involving continuous-state-space Markov chains. Finally, computational comparisons are carried out to demonstrate the efficiency of the proposed algorithm. | en |
dc.format.extent | viii, 70 leaves | en |
dc.format.medium | electronic | en |
dc.format.mimetype | application/pdf | |
dc.language.iso | eng | |
dc.rights | This thesis was part of a retrospective digitization project authorized by the Texas A&M University Libraries. Copyright remains vested with the author(s). It is the user's responsibility to secure permission from the copyright holder(s) for re-use of the work beyond the provision of Fair Use. | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | |
dc.subject | Major industrial engineering | en |
dc.subject.classification | 1986 Dissertation K487 | |
dc.subject.lcsh | Mathematical optimization | en |
dc.subject.lcsh | Markov processes | en |
dc.title | Optimal replacement policy for a Partially Observable Markov Decision Process model | en |
dc.type | Thesis | en |
thesis.degree.grantor | Texas A&M University | en |
thesis.degree.name | Doctor of Philosophy | en |
thesis.degree.name | Ph. D. | en |
dc.contributor.committeeMember | Curry, Guy L. | |
dc.contributor.committeeMember | Deuermeyer, Bryan L. | |
dc.contributor.committeeMember | Freund, Rudolf J. | |
dc.type.genre | dissertations | en |
dc.type.material | text | en |
dc.format.digitalOrigin | reformatted digital | en |
dc.publisher.digital | Texas A&M University. Libraries | |
dc.identifier.oclc | 17829223 | |
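The abstract describes a two-state replacement model: a system that can only deteriorate, observed imperfectly, with the choice of replacing it or leaving it alone. A minimal sketch of the standard belief-update-plus-threshold approach for such a model is shown below; every transition probability, observation probability, and threshold here is an illustrative assumption, not a value taken from the dissertation.

```python
# Hypothetical sketch of a two-state POMDP replacement model of the kind
# the abstract describes: state 0 = good, state 1 = deteriorated.
# Only deterioration can occur; the actions are "replace" or "leave alone".
# All numbers below are illustrative assumptions, not from the dissertation.

P = [[0.9, 0.1],   # transition matrix: a good system may deteriorate (0 -> 1)
     [0.0, 1.0]]   # a deteriorated system stays deteriorated

Q = [[0.8, 0.2],   # observation matrix Q[s][o] = P(signal o | state s);
     [0.3, 0.7]]   # signal 0 = "looks good", signal 1 = "looks bad"

def belief_update(b, obs):
    """Bayes update of the belief (P(good), P(deteriorated)) after one
    step of deterioration followed by an observed signal."""
    # predicted belief after the Markov transition
    pred = [b[0] * P[0][0],
            b[0] * P[0][1] + b[1] * P[1][1]]
    # condition on the observed signal
    joint = [pred[s] * Q[s][obs] for s in (0, 1)]
    total = sum(joint)
    return [j / total for j in joint]

def policy(b, threshold=0.5):
    """Threshold rule: replace once belief in deterioration is high enough."""
    return "replace" if b[1] >= threshold else "leave alone"

b = [1.0, 0.0]             # start certain the system is good
for obs in (0, 1, 1):      # a hypothetical observation sequence
    b = belief_update(b, obs)
print(policy(b))           # prints "replace"
```

Because the belief about the deteriorated state can only grow between replacements in a model like this, a single-threshold (control-limit) policy is the natural candidate structure, which is what makes threshold rules a common starting point for two-state replacement problems.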
This item appears in the following Collection(s)
Digitized Theses and Dissertations (1922–2004)
Texas A&M University Theses and Dissertations (1922–2004)