dc.creator | Edwards, Matthew Ryan | |
dc.date.accessioned | 2015-06-30T14:02:43Z | |
dc.date.available | 2015-06-30T14:02:43Z | |
dc.date.created | 2015-05 | |
dc.date.issued | 2014-09-17 | |
dc.date.submitted | May 2015 | |
dc.identifier.uri | https://hdl.handle.net/1969.1/154508 | |
dc.description.abstract | Multi-robot patrol is a growing field of study focused on coordinating teams of robots to optimally patrol a perimeter or area. In this thesis, we propose a new method of generating patrolling policies, in the form of Markov chains, via the Metropolis-Hastings algorithm. The proposed method produces non-deterministic patrolling policies that minimize the probability of a successful adversarial attack on a given area. We compare our method against a wide variety of existing patrolling approaches on a large set of graphs in order to test the effectiveness of Markov chains as patrolling policies. | en |
dc.format.mimetype | application/pdf | |
dc.subject | Patrol | en |
dc.subject | Multi-Robot | en |
dc.subject | Metropolis-Hastings | en |
dc.title | Multi-robot Patrol via the Metropolis-Hastings Algorithm | en |
dc.type | Thesis | en |
thesis.degree.department | Computer Science and Engineering | en |
thesis.degree.discipline | Computer Sci. & Engr | en |
thesis.degree.grantor | Honors and Undergraduate Research | en |
dc.contributor.committeeMember | Shell, Dylan | |
dc.type.material | text | en |
dc.date.updated | 2015-06-30T14:02:43Z | |