Title: Game Theory and Machine Learning Based Energy Trading
Committee:
Dr. Mavris, Advisor
Dr. Romberg, Co-Advisor
Dr. Vachtsevanos, Chair
Dr. Vamvoudakis
Abstract:
The objective of the proposed research is to provide an energy trading mechanism for distributed load, enabled by a recent modification in the market operators' policy. With the presence of an intermediate entity, the aggregator, distributed load can now sell energy and related services to the market. Acting as a broker, the aggregator creates a new market, which we design as a Stackelberg competition. By providing price-based incentives, the aggregator profits from selling the energy at a higher price (price arbitrage). The introduced competition enhances the overall system's operation by reshaping the demand profile, providing ancillary services to the grid, reducing the risk of network congestion (which is correlated with marginal price increases), and contributing to the reduction of carbon dioxide emissions. We propose a non-zero-sum Stackelberg game and provide guarantees for the existence of the game's equilibria. The structure of the game allows us to extract the players' strategies in closed form. Our contribution is a game-theoretic framework that allows for both purchasing and selling of energy to the market using price-based demand response. We guarantee non-negative payoffs and provide the option for customers to opt out. Our solution has theoretical guarantees on feasibility and optimality. However, uncertainty present in the market, such as changes in players' strategies, the introduction of new players, demand privacy, and the market clearing mechanism, limits our offline approach. We direct our future research toward incorporating uncertainty in these games by using online adaptive methods that converge approximately to the offline solution. Because of the nature of the uncertainties, we are driven toward a model-free reinforcement learning approach that can balance exploitation of past knowledge and exploration of new strategies.
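To illustrate the Stackelberg structure described above, the following is a purely hypothetical toy sketch, not the proposal's actual model: a leader (aggregator) sets a price anticipating a follower (customer) whose demand responds linearly to price, and the leader's optimal price follows in closed form by backward induction. All function names and parameter values here are illustrative assumptions.

```python
# Toy Stackelberg pricing game, solved by backward induction.
# All parameters (a, b, c) are hypothetical, chosen only for illustration.

def follower_demand(p, a=10.0, b=2.0):
    """Customer's best-response demand to price p: d(p) = a - b*p,
    clipped at zero so demand never goes negative."""
    return max(a - b * p, 0.0)

def leader_price(a=10.0, b=2.0, c=1.0):
    """Aggregator anticipates d(p) and maximizes profit (p - c)*(a - b*p),
    where c is its wholesale purchase cost. Setting the derivative to zero
    gives the closed-form Stackelberg price p* = (a + b*c) / (2*b)."""
    return (a + b * c) / (2.0 * b)

p_star = leader_price()            # 3.0
d_star = follower_demand(p_star)   # 4.0
profit = (p_star - 1.0) * d_star   # 8.0
```

Backward induction is what makes the closed-form solution possible: the leader substitutes the follower's best response into its own objective before optimizing, mirroring how the abstract's game structure yields players' strategies in closed form.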