
dc.contributor.advisor: Banerjee, Amarnath
dc.creator: Sharifi, Pouya
dc.date.accessioned: 2021-04-30T22:26:19Z
dc.date.available: 2021-04-30T22:26:19Z
dc.date.created: 2020-12
dc.date.issued: 2020-11-30
dc.date.submitted: December 2020
dc.identifier.uri: https://hdl.handle.net/1969.1/192848
dc.description.abstract: High integration of intermittent renewable energy sources (RES), specifically wind power, has created complexities in power system operations due to their limited controllability and predictability. In addition, large fleets of Electric Vehicles (EVs) are expected to have a substantial impact on electricity consumption, contributing to the volatility. In this dissertation, a well-coordinated smart charging approach is developed that utilizes the flexibility of EV owners so that EVs serve as distributed energy storage units and flexible loads, absorbing fluctuations in wind power output in a vehicle-to-grid (V2G) setup. Challenges to owner participation in V2G, such as battery degradation and uncertainty about unexpected trips, are also addressed through an interactive mechanism in the smart grid. First, a static deterministic model is formulated using multi-objective mixed-integer quadratic programming (MIQP), assuming all parameters are known a day ahead. Subsequently, a real-time dynamic scheduling formulation is provided using a rolling horizon with expected-value approximation. Simulation experiments demonstrate a significant increase in wind utilization and a reduction in charging cost and battery degradation compared to an uncontrolled charging scenario. Formulating the scheduling problem of the EV-wind integrated power system using conventional stochastic programming (SP) approaches is challenging due to the presence of many uncertain parameters with unknown underlying distributions, such as wind power, electricity price, and the commuting patterns of EV owners. To alleviate the problem, a model-free Reinforcement Learning (RL) algorithm integrated with deterministic optimization is proposed that can be applied to many multi-stage stochastic problems while mitigating some of the challenges of conventional SP methods (e.g., large scenario trees, computational complexity) as well as the challenges of model-free RL (e.g., slow convergence, unstable learning in dynamic environments). The simulation results of applying the combined approach to the EV scheduling problem demonstrate the effectiveness of the RL-Optimization method in solving the multi-stage EV charge/discharge scheduling problem. The proposed methods perform better than standard RL approaches (e.g., DDQN) in terms of convergence speed and finding the global optimum. Moreover, to address the curse of dimensionality in RL with a large state-action space, a heuristic EV fleet charging/discharging scheme is combined with the RL-optimization approach to solve the EV scheduling problem for a large number of EVs. [en]
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.subject: Optimization [en]
dc.subject: Smart EV Charge/Discharge Scheduling [en]
dc.subject: Reinforcement Learning [en]
dc.subject: Wind Energy [en]
dc.title: A Novel Reinforcement Learning-Optimization Approach for Integrating Wind Energy to Power System with Vehicle-to-Grid Technology [en]
dc.type: Thesis [en]
thesis.degree.department: Industrial and Systems Engineering [en]
thesis.degree.discipline: Industrial Engineering [en]
thesis.degree.grantor: Texas A&M University [en]
thesis.degree.name: Doctor of Philosophy [en]
thesis.degree.level: Doctoral [en]
dc.contributor.committeeMember: Ntaimo, Lewis
dc.contributor.committeeMember: Gautam, Natarajan
dc.contributor.committeeMember: Shetty, Bala
dc.type.material: text [en]
dc.date.updated: 2021-04-30T22:26:20Z
local.etdauthor.orcid: 0000-0001-7644-0545
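Note: the abstract above describes re-solving a deterministic subproblem over a rolling horizon with expected-value (point) forecasts, committing only the first-step decision before rolling forward. The short Python sketch below illustrates that general idea only; it is not the dissertation's MIQP or RL-optimization formulation. All parameters (battery capacity, charge rate, the wind and price series) are hypothetical, and a simple greedy rule stands in for the deterministic optimization solved at each step.

# Illustrative rolling-horizon sketch (not the dissertation's model): at each
# hour, re-plan over a short look-ahead window using point forecasts, commit
# only the first decision, then roll the window forward.
import numpy as np

rng = np.random.default_rng(0)

T = 24                 # scheduling horizon (hours), hypothetical
H = 4                  # look-ahead window length (hours), hypothetical
capacity = 60.0        # battery capacity (kWh), hypothetical
max_rate = 10.0        # max charge/discharge power (kW), hypothetical
soc = 30.0             # initial state of charge (kWh)
soc_target = 50.0      # desired state of charge at departure (kWh)

wind = rng.uniform(0.0, 12.0, T)    # surplus wind available to the EV (kW)
price = rng.uniform(0.05, 0.30, T)  # grid electricity price ($/kWh)

schedule = []
for t in range(T):
    # Point forecasts over the look-ahead window (expected-value approximation).
    w_hat = wind[t:t + H]
    p_hat = price[t:t + H]

    # Greedy stand-in for the deterministic subproblem: charge from surplus
    # wind by default, top up from the grid when the current hour is the
    # cheapest in the window and the target is not yet met, and discharge
    # (V2G) when the current hour is the most expensive and headroom allows.
    need = soc_target - soc
    hours_left = T - t
    action = min(w_hat[0], max_rate, capacity - soc)          # charge from wind
    if need > 0 and p_hat[0] <= p_hat.min() + 1e-9:
        action = min(max_rate, capacity - soc)                 # cheap-hour top-up
    elif need <= 0 and p_hat[0] >= p_hat.max() - 1e-9 and hours_left > H:
        action = -min(max_rate, soc - soc_target)              # sell back to grid

    # Commit only the first-step decision, then roll forward.
    soc = float(np.clip(soc + action, 0.0, capacity))
    schedule.append(action)

print("final SoC: %.1f kWh" % soc)
print("schedule (kW):", np.round(schedule, 1))

In the dissertation's setting, the greedy rule above would be replaced by the day-ahead MIQP (or the RL-optimization policy) re-solved over each window; the rolling structure, commit-first-step logic, and expected-value forecasts are the part the sketch is meant to convey.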

