dc.contributor.advisor: Eksin, Ceyhun
dc.creator: Aydin, Sarper
dc.date.accessioned: 2023-10-12T14:56:49Z
dc.date.created: 2023-08
dc.date.issued: 2023-08-03
dc.date.submitted: August 2023
dc.identifier.uri: https://hdl.handle.net/1969.1/200083
dc.description.abstract: This thesis addresses present-day challenges in multi-agent autonomous systems by developing a new generation of learning and optimization algorithms that operate in the absence of perfect information. Multi-agent autonomous systems are modern technological systems composed of individual decision-makers, called agents, that interact with each other and with the external environment. Game theory is a mathematical framework that models strategic interactions among multiple decision-makers with selfish goals. Traditional game theory focuses on analyzing the solutions, or final outcomes, of these interactions, generally referred to as equilibria. Our goal in this work is to design and analyze decentralized strategic learning algorithms that guarantee convergence to game solutions using local and networked information. In the first part of the thesis, we concentrate on the development and analysis of robust and efficient communication protocols for decentralized best-response-type algorithms, in which agents take actions that maximize their expected payoffs computed with respect to their individual beliefs. The proposed communication protocols retain the convergence guarantees to pure Nash equilibria in weakly acyclic games while reducing communication attempts. We verify the effectiveness of the proposed communication protocols on mobile autonomous teams solving the target assignment problem. The second part of the thesis considers the analysis of multi-agent systems in uncertain and dynamic environments. We first design a decentralized fictitious play (DFP) algorithm. In DFP, agents share information only with their current neighbors in a sequence of time-varying networks, keep estimates of other agents' empirical frequencies, and take actions to maximize their expected utility functions computed with respect to the estimated empirical frequencies. We prove convergence of DFP to an approximate Nash equilibrium in near-potential games. Next, we propose a novel networked policy learning algorithm for Markov potential games. We show the convergence of parameterized policies to a first-order stationary point in expectation. We discuss the benefits of networked policies compared to independent (reward-based) learning via numerical experiments.
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.subject: Game Theory
dc.subject: Learning
dc.subject: Optimization
dc.subject: Autonomous Systems
dc.title: Networked Game-Theoretical Learning in Autonomous Systems
dc.type: Thesis
thesis.degree.department: Industrial and Systems Engineering
thesis.degree.discipline: Industrial Engineering
thesis.degree.grantor: Texas A&M University
thesis.degree.name: Doctor of Philosophy
thesis.degree.level: Doctoral
dc.contributor.committeeMember: Garcia, Alfredo
dc.contributor.committeeMember: Kalathil, Dileep
dc.contributor.committeeMember: Shahrampour, Shahin
dc.type.material: text
dc.date.updated: 2023-10-12T14:56:53Z
local.embargo.terms: 2025-08-01
local.embargo.lift: 2025-08-01
local.etdauthor.orcid: 0000-0001-6351-3071
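
The abstract above describes the decentralized fictitious play (DFP) update in words: agents keep empirical-frequency estimates of the other agents' actions, refresh those estimates with information available from their current neighbors, and best-respond to the expected utility computed under those estimates. The Python snippet below is a minimal illustrative sketch of that idea only; the random payoff matrix, the fixed ring network standing in for the time-varying graphs, and all names are hypothetical choices made for this example, not the algorithm analyzed in the thesis.

```python
import numpy as np

# Minimal sketch of a decentralized fictitious-play-style update.
# Hypothetical setup: a fixed ring network stands in for the time-varying
# graphs described in the abstract, and the pairwise payoff matrix is random.

rng = np.random.default_rng(0)
n_agents, n_actions, horizon = 4, 3, 200

# Hypothetical symmetric pairwise payoff: agent i earns M[a_i, a_j] against
# each other agent j.
M = rng.uniform(size=(n_actions, n_actions))
M = (M + M.T) / 2

# freq[i, j] is agent i's empirical-frequency estimate of agent j's actions.
freq = np.full((n_agents, n_agents, n_actions), 1.0 / n_actions)
actions = rng.integers(n_actions, size=n_agents)

# Fixed ring network: agent i observes itself and agents i-1, i+1 (mod n).
neighbors = [{(i - 1) % n_agents, (i + 1) % n_agents, i} for i in range(n_agents)]

for t in range(1, horizon + 1):
    # Best response: each agent maximizes expected utility under its estimates.
    new_actions = actions.copy()
    for i in range(n_agents):
        expected = np.zeros(n_actions)
        for j in range(n_agents):
            if j != i:
                expected += M @ freq[i, j]  # expected payoff vs j, per own action
        new_actions[i] = int(np.argmax(expected))
    actions = new_actions

    # Empirical-frequency update: agents only refresh estimates of the agents
    # they can observe (themselves and their ring neighbors).
    step = 1.0 / t
    for i in range(n_agents):
        for j in neighbors[i]:
            observed = np.eye(n_actions)[actions[j]]
            freq[i, j] = (1 - step) * freq[i, j] + step * observed

print("joint action after", horizon, "rounds:", actions)
```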

