Optimal control limit policy for a Partially Observable Markov Decision Process model
dc.contributor.advisor | Feldman, Richard M. | |
dc.creator | Lee, Chong Ho | |
dc.date.accessioned | 2024-02-09T20:43:49Z | |
dc.date.available | 2024-02-09T20:43:49Z | |
dc.date.issued | 1994 | |
dc.identifier.uri | https://hdl.handle.net/1969.1/DISSERTATIONS-1554802 | |
dc.description | Vita | en |
dc.description | Major subject: Industrial Engineering | en |
dc.description.abstract | In this research, we consider the problem of determining an optimal replacement policy for stochastically deteriorating systems for which only incomplete state information is available. When the deterioration is governed by a Markov process, the resulting decision model is known as a partially observable Markov decision process (POMDP), a generalization of the completely observable Markov decision process. This research investigates a three-state POMDP in which only deterioration can occur and in which the only available actions are to replace or not to replace the machine. The goal of this research is first to prove that a control-limit policy is optimal, and then to incorporate such a policy into the policy iteration algorithm given by Sondik in order to enhance its computational efficiency. Two conditions are presented which guarantee that the optimal replacement policy can be restricted to control-limit policies in the partially observable case. One condition is a slight modification of Derman's first condition, and the other is identical to Derman's second condition. A solution algorithm that adopts the basic idea of Sondik's policy iteration algorithm is proposed. Finally, computational comparisons are carried out to demonstrate the efficiency of the proposed algorithm. | en |
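The abstract describes maintaining a belief over three hidden deterioration states and replacing the machine once the belief crosses a control limit. The following is a minimal illustrative sketch of that idea, not the algorithm from the dissertation itself: the transition matrix, observation model, state costs, and threshold below are all hypothetical values chosen only to show the belief-update and control-limit mechanics.

```python
import numpy as np

# Hypothetical 3-state deteriorating system (0 = good, 1 = worn, 2 = failed).
# Upper-triangular transition matrix: only deterioration can occur.
P = np.array([[0.8, 0.15, 0.05],
              [0.0, 0.7,  0.3 ],
              [0.0, 0.0,  1.0 ]])

# Hypothetical observation model: rows are true states, columns are the two
# noisy signals (0 = "looks fine", 1 = "looks bad").
Q = np.array([[0.9, 0.1],
              [0.5, 0.5],
              [0.1, 0.9]])

def belief_update(b, obs):
    """Bayes update of the belief vector after one transition and one observation."""
    b_pred = b @ P                # predict one step of deterioration
    b_post = b_pred * Q[:, obs]   # weight by the observation likelihood
    return b_post / b_post.sum()  # renormalize to a probability vector

def control_limit_policy(b, threshold=0.5):
    """Replace once the belief-weighted deterioration level crosses the limit."""
    deterioration = b @ np.array([0.0, 0.5, 1.0])  # illustrative state weights
    return "replace" if deterioration >= threshold else "keep"

b = np.array([1.0, 0.0, 0.0])   # start in the good state with certainty
for obs in [0, 1, 1]:           # a hypothetical observation sequence
    b = belief_update(b, obs)
    print(b, control_limit_policy(b))
```

Because only deterioration can occur, the belief mass drifts monotonically toward the failed state as "looks bad" signals accumulate, and the scalar control limit partitions the belief simplex into a "keep" region and a "replace" region — the structure whose optimality the dissertation establishes under its two Derman-style conditions.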
dc.format.extent | vii, 77 leaves | en |
dc.format.medium | electronic | en |
dc.format.mimetype | application/pdf | |
dc.language.iso | eng | |
dc.rights | This thesis was part of a retrospective digitization project authorized by the Texas A&M University Libraries. Copyright remains vested with the author(s). It is the user's responsibility to secure permission from the copyright holder(s) for re-use of the work beyond the provision of Fair Use. | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | |
dc.subject | Major industrial engineering | en |
dc.title | Optimal control limit policy for a Partially Observable Markov Decision Process model | en |
dc.type | Thesis | en |
thesis.degree.discipline | Industrial Engineering | en |
thesis.degree.grantor | Texas A&M University | en |
thesis.degree.name | Doctor of Philosophy | en |
thesis.degree.name | Ph. D. | en |
thesis.degree.level | Doctoral | en |
dc.contributor.committeeMember | Garcia-Diaz, Alberto | |
dc.contributor.committeeMember | Wortman, Martin A. | |
dc.contributor.committeeMember | Morgan, Jeff | |
dc.type.genre | dissertations | en |
dc.type.material | text | en |
dc.format.digitalOrigin | reformatted digital | en |
dc.publisher.digital | Texas A&M University. Libraries | |
dc.identifier.oclc | 34872806 |
This item appears in the following Collection(s)
Digitized Theses and Dissertations (1922–2004)
Texas A&M University Theses and Dissertations (1922–2004)