The full text of this item is not available at this time because the student has placed this item under an embargo for a period of time. The Libraries are not authorized to provide a copy of this work during the embargo period, even for Texas A&M users with NetID.
Indoor Autonomous Mobile Surveillance Robot Based on Robot Operating System and Kinect Sensor
Abstract
Today, security is one of the most important concerns everywhere, from individuals to the military. Various security devices have been developed and used to improve the level of security, and one of the most common is the surveillance camera. However, most surveillance cameras are mounted on the ceiling, which causes blind spots and limits their ability to recognize people. This thesis proposes an indoor autonomous mobile surveillance robot that can compensate for the disadvantages of fixed cameras and improve the security level.
The surveillance robot was developed based on Robot Operating System (ROS) and a Kinect sensor. Its control system was designed with proportional-integral (PI) and proportional-integral-derivative (PID) controllers based on its dynamic model. As key functions of the robot, Simultaneous Localization and Mapping (SLAM), autonomous navigation, face recognition, and human tracking were implemented in the robot based on Gmapping, Real-Time Appearance-Based Mapping (RTAB-Map), ROS navigation, OpenNI2/NiTE2, and Dlib’s face recognition.
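The PI/PID velocity control described above can be illustrated with a minimal sketch. The gains, time step, and first-order plant model below are hypothetical placeholders, not the values or dynamic model from the thesis:

```python
# Minimal discrete PID controller sketch for velocity control.
# Gains (kp, ki, kd), time step, and plant time constant are
# illustrative assumptions, not the thesis's actual parameters.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt                      # integral term
        derivative = (error - self.prev_error) / self.dt      # derivative term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Simulate a first-order velocity plant, v' = (u - v) / tau,
# driving the linear velocity toward a 0.5 m/s setpoint.
pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=0.01)
v, tau = 0.0, 0.5
for _ in range(2000):                 # 20 s of simulated time
    u = pid.update(0.5, v)
    v += (u - v) / tau * 0.01

print(round(v, 3))
```

The integral term is what removes the steady-state error reported in the experiments; a pure proportional controller would settle short of the setpoint on such a plant.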
In this thesis, experiments on velocity control, SLAM, autonomous navigation, face recognition, and human tracking were conducted. The velocity control system controlled the linear and angular velocity with a small oscillation and no steady-state error. In SLAM, the robot successfully created indoor maps using 2D and 3D SLAM, which were accurate enough to be used for autonomous navigation. In autonomous navigation, the robot maintained the designated safety distance of 0.32 m or more from obstacles and moved toward the destination along the planned paths. The final position and orientation error averages were 0.21 m and 0.14 rad. The robot’s face recognition recognized a human face at a distance of 1.2–5.5 m. The robot’s human tracking reliably detected a target at a distance of 1.0–4.2 m and maintained the desired position without tracking loss.
Subject
Autonomous navigation
Digital control
Face recognition
Human tracking
RGB-D sensor
ROS
SLAM
Surveillance robot
Citation
Kang, Kyeongmo (2023). Indoor Autonomous Mobile Surveillance Robot Based on Robot Operating System and Kinect Sensor. Master's thesis, Texas A&M University. Available electronically from https://hdl.handle.net/1969.1/199717.