Show simple item record

dc.contributor.advisor	Shakkottai, Srinivas
dc.creator	Oh, Jahun
dc.date.accessioned	2023-10-12T13:54:51Z
dc.date.available	2023-10-12T13:54:51Z
dc.date.created	2023-08
dc.date.issued	2023-07-10
dc.date.submitted	August 2023
dc.identifier.uri	https://hdl.handle.net/1969.1/199830
dc.description.abstract	Resource-limited robots, such as low-power drones or four-wheeled ground vehicles, may lack the onboard computation capability or battery capacity needed to run precise navigation models. One plausible solution is cloud robotics, in which tasks are assigned to the cloud. However, communicating with the cloud over congested wireless networks can introduce latency or data loss, and when computation depends too heavily on the cloud, bottlenecks can arise that degrade navigation tasks requiring real-time communication. To tackle this challenge, this research proposes a cloud-supported navigation system for resource-limited robots that uses deep reinforcement learning to optimize strategies for offloading tasks to the cloud. The approach aims to control the quality of information available to the robot while minimizing the impact of potential communication issues. The offloading problem is formulated as a Markov Decision Process (MDP), and a reinforcement learning (RL) algorithm is applied to learn an optimized offloading policy, ensuring efficient navigation decision-making even under computation and energy constraints. The proposed system is assessed using pre-built navigation models from the ROS navigation stack, with experiments carried out in both simulation and real-world settings. The thesis concludes with an evaluation of the system's capabilities and limitations in various scenarios. The outcomes of these evaluations should indicate that the proposed system can substantially enhance navigation performance while reducing the costs associated with cloud communication, ultimately enabling more efficient and reliable robotic navigation in diverse environments.
dc.format.mimetype	application/pdf
dc.language.iso	en
dc.subject	Fog Robotics
dc.subject	Cloud Robotics
dc.subject	Reinforcement Learning
dc.subject	Resource Allocation
dc.subject	Network Offloading
dc.title	Network Offloading Policies for Cloud Robotics: Enhanced Situation Aware Robot Navigation Using Deep Reinforcement Learning
dc.type	Thesis
thesis.degree.department	Computer Science and Engineering
thesis.degree.discipline	Computer Engineering
thesis.degree.grantor	Texas A&M University
thesis.degree.name	Master of Science
thesis.degree.level	Masters
dc.contributor.committeeMember	Choe, Yoonsuck
dc.contributor.committeeMember	Seo, Jinsil
dc.type.material	text
dc.date.updated	2023-10-12T13:54:51Z
local.etdauthor.orcid	0009-0000-7017-4020
