Session Opening-remarks-2

Opening Remarks

Conference: 2:00 PM — 2:10 PM UTC
Local: Nov 12 Thu, 9:00 AM — 9:10 AM EST

Session Chair

Klara Nahrstedt (UIUC) & Oliver Kosut (ASU)
Zoom Room Host(s): Lamia Varawala, Ezzeldin Shereen (KTH)

Session Keynote-2

Keynote 2

Conference: 2:10 PM — 3:00 PM UTC
Local: Nov 12 Thu, 9:10 AM — 10:00 AM EST

Analytics-Driven Cyber-Physical Security for a Converged Smart Grid

Deepa Kundur (U. Toronto)

The field of cyber-physical security has evolved greatly over the last decade, especially in the context of critical infrastructures such as the smart grid. Current challenges centre on the increased sophistication of cyberattacks against a more automated grid. Emerging polymorphic and stealthy attacks necessitate more coordinated and intelligent approaches to mitigation. In addition to the typical defence-in-depth paradigm, more harmonized protection and resilience strategies are essential. Developing next-generation tools for cyber-physical security requires models that are compatible with salient trends in smart grid infrastructure, including Information Technology/Operational Technology (IT/OT) convergence. The data-rich cyber-physical environment that results from IT/OT convergence points to a strong need for data-driven modelling paradigms and analytics. In this talk, we provide examples of deep learning for anomaly detection in the cyber-physical protection of transmission protection systems. We then present a brave new world of opportunities for smart grid cyber-physical security using a data analytics-driven approach.

Session Chair

Klara Nahrstedt (UIUC)
Zoom Room Host(s): Lamia Varawala, Ezzeldin Shereen (KTH)

Session A2

Analytics 2: Machine Learning for Grid Operations

Conference: 3:10 PM — 4:00 PM UTC
Local: Nov 12 Thu, 10:10 AM — 11:00 AM EST

Learning Optimal Power Flow: Worst-Case Guarantees for Neural Networks (Best Student Paper Award Nominee)

Andreas Venzke (Technical University of Denmark (DTU), Denmark); Guannan Qu and Steven Low (California Institute of Technology, USA); Spyros Chatzivasileiadis (Technical University of Denmark, Denmark)

This paper introduces for the first time a framework to obtain provable worst-case guarantees for neural network performance, using learning for optimal power flow (OPF) problems as a guiding example. Neural networks have the potential to substantially reduce the computing time of OPF solutions. However, the lack of guarantees for their worst-case performance remains a major barrier for their adoption in practice. This work aims to remove this barrier. We formulate mixed-integer linear programs to obtain worst-case guarantees for neural network predictions related to (i) maximum constraint violations, (ii) maximum distances between predicted and optimal decision variables, and (iii) maximum sub-optimality. We demonstrate our methods on a range of PGLib-OPF networks up to 300 buses. We show that the worst-case guarantees can be up to one order of magnitude larger than the empirical lower bounds calculated with conventional methods. More importantly, we show that the worst-case predictions appear at the boundaries of the training input domain, and we demonstrate how we can systematically reduce the worst-case guarantees by training on a larger input domain than the domain they are evaluated on.
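As a minimal illustration of the idea, consider a one-hidden-layer ReLU network with a single input: the network is then piecewise linear, so its exact worst-case constraint violation over an input interval occurs at a ReLU breakpoint or a domain endpoint. All weights and the generation limit below are fabricated; the paper itself formulates a mixed-integer linear program to handle general networks.

```python
def relu(z):
    return max(z, 0.0)

def net(x, W1, b1, W2, b2):
    """One-hidden-layer ReLU net: load x -> predicted generation setpoint."""
    hidden = [relu(w * x + b) for w, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

def worst_case_violation(W1, b1, W2, b2, x_lo, x_hi, g_max):
    """Exact max violation of the limit g_max over the input domain.

    For a 1-D input the net is piecewise linear, so it suffices to check
    the domain endpoints and every ReLU breakpoint inside the domain.
    """
    candidates = [x_lo, x_hi]
    for w, b in zip(W1, b1):
        if w != 0.0:
            x_break = -b / w
            if x_lo <= x_break <= x_hi:
                candidates.append(x_break)
    return max(net(x, W1, b1, W2, b2) - g_max for x in candidates)

# Hypothetical 2-neuron network and a generation limit of 1.0.
W1, b1, W2, b2 = [1.0, -2.0], [0.0, 1.0], [0.8, 0.5], 0.1
print(worst_case_violation(W1, b1, W2, b2, 0.0, 1.0, 1.0))
```

A negative result means the limit is provably never violated anywhere in the domain, which is exactly the kind of certificate the paper's MILP provides for realistic networks.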

Learning Optimal Solutions for Extremely Fast AC Optimal Power Flow

Ahmed S. Zamzam (National Renewable Energy Laboratory, USA); Kyri Baker (University of Colorado, Boulder, USA)

We develop, in this paper, a machine learning approach to optimize the real-time operation of electric power grids. In particular, we learn feasible solutions to the AC optimal power flow (OPF) problem with negligible optimality gaps. The AC OPF problem aims at identifying optimal operational conditions of the power grids that minimize power losses and/or generation costs. Due to the computational challenges with solving this nonconvex problem, many efforts have focused on linearizing or approximating the problem in order to solve the AC OPF on faster timescales. However, many of these approximations can be fairly poor representations of the actual system state and still require solving an optimization problem, which can be time consuming for large networks. In this work, we learn a mapping between the system loading and optimal generation values, enabling us to find near-optimal and feasible AC OPF solutions. This allows us to bypass solving the traditionally nonconvex AC OPF problem, resulting in a significant decrease in computational burden for grid operators.
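The core idea, learning a direct mapping from system loading to optimal generation so the nonconvex AC OPF need not be solved online, can be sketched with a toy one-dimensional regression. The data pairs below are fabricated stand-ins for offline AC OPF solutions; the paper trains a neural network rather than a linear model.

```python
def fit_linear(loads, gens):
    """Ordinary least squares for gen ~ a * load + b."""
    n = len(loads)
    mx = sum(loads) / n
    my = sum(gens) / n
    a = sum((x - mx) * (y - my) for x, y in zip(loads, gens)) / \
        sum((x - mx) ** 2 for x in loads)
    return a, my - a * mx

# (load, optimal generation) pairs produced offline by a full OPF solver
# (fabricated here for illustration).
loads = [0.8, 0.9, 1.0, 1.1, 1.2]
gens  = [0.85, 0.96, 1.07, 1.18, 1.29]

a, b = fit_linear(loads, gens)
prediction = a * 1.05 + b   # instant setpoint for an unseen loading level
print(round(prediction, 3))
```

Once the mapping is fitted offline, evaluating it online is a constant-time operation, which is the source of the speed-up the paper reports.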

LENARD: Lightweight ENsemble LeARner for MeDium-term Electricity Consumption Prediction

Onat Gungor (University of California San Diego & San Diego State University, USA); Jake Garnier (University of California San Diego, USA); Tajana Rosing (University of California, San Diego, USA); Baris Aksanli (San Diego State University, USA)

In this work, we propose a lightweight ensemble learner for individual house-level electricity consumption prediction. We first implement five different prediction algorithms: ARIMA, Holt-Winters, TESLA, LSTM, and Persistence. Among the single algorithms, LSTM performs best, with an average MSE of 0.0195. We then combine their predictions using a neural-network-based ensemble learner, which improves on the best single algorithm (LSTM) by 72.84% on average and by up to 99.13%. Finally, we prune the weights of the ensemble network to decrease the computational cost of our model; pruning yields 10.9% lower error with 27% fewer parameters. We show that our pruned ensemble learner outperforms state-of-the-art ensemble methods.
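A minimal sketch of the combine-then-prune idea follows; all weights and forecasts are fabricated, and the paper's neural-network combiner is reduced here to a single linear layer for clarity.

```python
def ensemble(weights, predictions):
    """Weighted combination of the individual model forecasts."""
    return sum(w * p for w, p in zip(weights, predictions))

def prune(weights, threshold):
    """Zero out weights below the threshold, then renormalise the rest."""
    kept = [w if abs(w) >= threshold else 0.0 for w in weights]
    total = sum(kept)
    return [w / total for w in kept]

# Per-model forecasts (kWh) for one hour, e.g. ARIMA, Holt-Winters,
# TESLA, LSTM, Persistence (numbers fabricated).
preds   = [1.9, 2.1, 2.0, 2.4, 1.5]
weights = [0.05, 0.10, 0.70, 0.10, 0.05]   # learned combiner weights
pruned  = prune(weights, 0.10)             # drops the two 0.05 weights
print(ensemble(pruned, preds))
```

Pruning removes the parameters that contribute least to the combined forecast, trading a tiny change in the output for a smaller, cheaper model.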

Generative Adversarial Networks and Transfer Learning for Non-Intrusive Load Monitoring in Smart Grids (Best Paper Award Nominee)

Awadelrahman Mohamedelsadig A. Ahmed, Yan Zhang and Frank Eliassen (University of Oslo, Norway)

The objective of non-intrusive load monitoring (NILM) is to disaggregate the total power consumption of a building into individual appliance-level profiles. This gives consumers insight into how to use energy efficiently and supports smart grid efficiency outcomes. While many studies focus on achieving accurate models, few address model generalizability. This paper proposes two approaches based on generative adversarial networks to achieve high-accuracy load disaggregation. Concurrently, it addresses model generalizability in two ways: first, through transfer learning via parameter sharing, and second, by learning compact common representations between source and target domains. The paper also quantitatively evaluates the worth of these transfer learning approaches based on the similarity between the source and target domains. The models are evaluated on three open-access datasets and outperform recent machine-learning methods.

Session Chair

Jinsub Kim (Oregon State University)
Zoom Room Host(s): Manish Singh (Virginia Tech)

Session C2

Control 2: Advanced Controls

Conference: 3:10 PM — 4:00 PM UTC
Local: Nov 12 Thu, 10:10 AM — 11:00 AM EST

Enabling Online, Dynamic Remedial Action Schemes by Reducing the Corrective Control Search Space

Shamina Hossain-McKenzie and Eric Vugrin (Sandia National Laboratories, USA); Katherine Davis (Texas A&M University, USA)

To combat dynamic, cyber-physical disturbances in the electric grid, online and adaptive remedial action schemes (RASs) are needed to achieve fast and effective response. However, a major challenge lies in reducing the computational burden of the analyses needed to inform the selection of appropriate controls. This paper proposes the use of a role and interaction discovery (RID) algorithm that leverages control sensitivities to gain insight into controller roles and support groups. Using these results, a procedure is developed that shrinks the control search space, reducing computation time while preserving effective control response. A case study considers corrective line switching to mitigate geomagnetically-induced-current (GIC)-saturated reactive power losses in a 20-bus test system. Results demonstrate significant reductions in both the control search space and reactive power losses using the RID approach.

Data-driven Pricing and Control for Low Carbon V2G Charging Station with Balancing Services

Monica Hernandez Cedillo (University of Durham, United Kingdom (Great Britain)); Hongjian Sun (Durham University, United Kingdom (Great Britain))

The transition to a low-carbon transportation system has brought many challenges for researchers. One major challenge is ensuring power system reliability under the high load demand of Electric Vehicles (EVs) while coping with increasingly distributed and renewable energy sources. Consequently, energy management strategies have become central to future smart grid design. An aggregator can play a critical role in integrating management strategies between EVs and the grid, drawing on emerging market opportunities and on variables from the stakeholders involved, such as EV requirements, balancing services, and the profitability of the Charging Station (CS). This paper proposes a data-driven optimisation algorithm with pricing and control modules that communicate with each other to achieve successful grid integration by charging at the right price and at the right time. The results show customers can be positively engaged with pricing signals while providing support to the power system. This work can serve as a foundation for a commercial CS that enables effective integration of EVs with the grid.

Mitigating Cascading Failures via Local Responses

Chen Liang and Fengyu Zhou (California Institute of Technology, USA); Alessandro Zocca (Vrije Universiteit Amsterdam, The Netherlands); Steven Low and Adam C Wierman (California Institute of Technology, USA)

This work proposes an approach for failure mitigation in power systems via corrective control named Optimal Injection Adjustment (OIA). In contrast to classical approaches, which focus on minimizing load loss, OIA aims to minimize the post-contingency flow deviations by adjusting node power injections in response to failures. We prove that the optimal control actions obtained from OIA are localized around the original failure and use numerical simulations to highlight that OIA achieves near-optimal control costs despite using localized control actions.

Real-Time Distributed Control of Smart Inverters for Network-level Optimization

Rabayet Sadnan and Anamika Dubey (Washington State University, USA)

The limitations of centralized optimization methods in managing electric power distribution system operations have led to a distributed paradigm of computing and decision-making. Unfortunately, existing distributed optimization algorithms are limited in their applicability to fast-varying phenomena, such as those resulting from highly variable Distributed Energy Resource (DER) generation patterns: they require a large number of communication rounds (on the order of 10^2 to 10^3) among the computing agents to solve one instance of the optimization problem. Related real-time distributed control methods are equally limited; they too require hundreds of rounds of communication and thus are slow in tracking network-level optimal solutions. In this paper, we propose a novel distributed voltage controller that tracks rapidly varying DER generation profiles while converging to network-level optimal solutions within a few communication rounds. The proposed algorithm leverages the radial topology of the system, which reduces the required communication rounds by an order of magnitude. The novelty lies in carefully reducing the electrical network model from the perspective of each distributed controller and enabling appropriate data sharing among upstream and downstream nodes to achieve fast convergence. Simulation results demonstrate the effectiveness of the proposed approach in minimizing feeder losses while maintaining node voltages within pre-specified limits.

Distributed Inter-Area Oscillation Damping Control via Dynamic Average Consensus Algorithm

Pablo U Macedo (University of Tennessee at Chattanooga & ConnectSmart Research Laboratory, USA); Shailesh Wasti and Vahid Disfani (University of Tennessee at Chattanooga, USA)

Massive deployment of distributed energy resources (DERs) through zero-inertia power electronic converters has made the power grid vulnerable to frequency instabilities, in particular inter-area oscillations. Low-frequency oscillations are of major concern because they can limit maximum power transfer and even cause blackouts. This paper presents a novel distributed control algorithm, distributed frequency deviation control (DFDC), based on the local frequency deviation from an estimate of the network's average frequency. The efficacy of the control unit is demonstrated via modal analyses and time-domain simulations. The results show that all inter-area oscillation modes are damped in all test cases without affecting the other dynamic modes of the system.

Session Chair

Hongjian Sun (Durham Univ.)
Zoom Room Host(s): Jude Battista (UIUC), Nathaniel Tucker (UC Riverside)

Session N2

Networking 2: Communications for Distributed System Management

Conference: 3:10 PM — 4:00 PM UTC
Local: Nov 12 Thu, 10:10 AM — 11:00 AM EST

Decentralized Microgrid Energy Management: A Multi-Agent Correlated Q-Learning Approach

Hao Zhou and Melike Erol-Kantarci (University of Ottawa, Canada)

Microgrids (MGs) are anticipated to be important players in the future smart grid, and an Energy Management System (EMS) is essential for their proper operation. The EMS of an MG can be rather complicated when renewable energy resources (RERs), an energy storage system (ESS), and demand-side management (DSM) must be orchestrated. Furthermore, these systems may belong to different entities, and competition may exist between them. The Nash equilibrium is most commonly used to coordinate such entities; however, its existence and the convergence to it cannot always be guaranteed. We therefore use the correlated equilibrium, for which convergence can be guaranteed, to coordinate agents. In this paper, we build an energy trading model based on the mid-market rate and propose a correlated Q-learning (CEQ) algorithm to maximize the revenue of each agent. Our results show that CEQ balances the revenue of agents without harming the total benefit. In addition, compared with Q-learning without correlation, CEQ saves 19.3% in cost for the DSM agent and yields 44.2% more benefit for the ESS agent.
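The tabular Q-learning update underlying such an approach can be sketched as follows. The states, actions, and rewards are illustrative only, and the paper's CEQ additionally solves for a correlated equilibrium over the agents' joint actions, a step omitted here.

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard Q-learning update: move Q(s, a) toward the TD target."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Toy ESS agent: states = battery level, actions = charge/discharge/idle.
actions = ["charge", "discharge", "idle"]
Q = {s: {a: 0.0 for a in actions} for s in ("low", "mid", "high")}

# Two illustrative transitions: pay to charge, then profit by discharging.
q_update(Q, "low", "charge", reward=-1.0, next_state="mid")
q_update(Q, "mid", "discharge", reward=3.0, next_state="low")
print(Q["mid"]["discharge"])
```

In the multi-agent setting of the paper, each agent maintains such a table, and the correlated-equilibrium step coordinates which joint action the agents actually play.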

Achieving Sensor Identification and Data Flow Integrity in Critical Cyber-Physical Infrastructures

Abel O Gomez Rivera (University of Texas at El Paso, USA); Deepak K Tosh (University of Texas, El Paso, USA); Jaime C Acosta (US Army Research Laboratory, USA); Laurent Njilla (Air Force Research Laboratory, USA)

Supervisory Control and Data Acquisition (SCADA) systems are commonly found at national critical infrastructures that provide cyber-enabled services (e.g., energy) essential to society. In state-of-the-art SCADA systems, the physical process is monitored by field sensors, which transmit data to a SCADA master. SCADA communication protocols generally lack proper security mechanisms to protect the integrity and identity of field sensors, leaving the sensors vulnerable to standard cyberattacks such as data and identity spoofing. Field sensors are low-end devices for which state-of-the-art cryptographic solutions are unsuitable. In this paper, we discuss a novel lightweight hardware-based security mechanism, Physical Unclonable Functions (PUFs). We introduce an SRAM-based PUF mechanism, which we then use to design an SRAM PUF Authentication and Integrity (SPAI) protocol. SPAI aims to ensure the integrity of the data flow and protect the identity of field sensors. A prototype of the protocol has been implemented using a Raspberry Pi 3 Model B, an SRAM, and a temperature sensor. We describe how an emulated SCADA system is vulnerable to a man-in-the-middle attack mounted with standard eavesdropping techniques, and then show how the proposed SPAI protocol prevents the attack through the embedded PUF mechanism.
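A conceptual sketch of PUF-style challenge-response authentication follows. All names and the protocol framing are hypothetical simplifications of SPAI: a real SRAM PUF derives its secret from power-up SRAM cell states, which is simulated here with fixed bytes.

```python
import hashlib
import hmac
import os

# Stand-in for the device-unique SRAM power-up pattern (fabricated bytes).
DEVICE_FINGERPRINT = b"\x9a\x1f\x44\x07\xce\x21\x6b\xd3"

def puf_response(challenge: bytes) -> bytes:
    """Device side: derive a response from the (simulated) PUF secret."""
    return hmac.new(DEVICE_FINGERPRINT, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, enrolled: bytes) -> bool:
    """Master side: check against the fingerprint enrolled at provisioning."""
    expected = hmac.new(enrolled, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)               # fresh challenge per authentication
print(verify(challenge, puf_response(challenge), DEVICE_FINGERPRINT))  # True
```

A spoofed device without the physical SRAM cannot reproduce the fingerprint, so its responses fail verification, which is the property the protocol relies on to block identity spoofing.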

Toward a Service-Oriented Broker Architecture for the Distribution Grid

Mohamadou B Bah, Rabab Haider, Venkatesh Venkataramanan and Anuradha Annaswamy (Massachusetts Institute of Technology, USA)

Distributed energy resources (DERs) are attractive because of their flexibility and demand response capabilities; however, their integration into the electric grid poses numerous challenges, including a lack of visibility and control, as well as misalignment between the interests of privately-owned DERs and the collective interests of the grid. To coordinate the behavior of these DERs, we treat the grid as a multiagent system and propose a service-oriented broker architecture (SOBA). SOBA enables the behavior of privately owned DERs to be influenced by system operators through service requests and autonomous peer discovery. We illustrate SOBA's features and motivate service requests through a scenario with a network of small-scale solar photovoltaics, inverters, and batteries.

Development of HELICS-Based High-Performance Cyber-Physical Co-Simulation Framework for Distributed Energy Resources Applications

Jianhua Zhang (Clarkson University, USA)

The rapid growth of distributed energy resources (DERs) has prompted increasing interest in monitoring and controlling DERs through hybrid smart grid communications, resulting in a typical smart grid cyber-physical system. To fully understand the interdependency between the power and communication layers, we propose to integrate Network Simulator 3 (NS3) into the Hierarchical Engine for Large-scale Infrastructure Co-Simulation (HELICS), an open-source cyber-physical-energy co-simulation platform. This paper presents the development and a case study of a HELICS-based high-performance distribution-communication co-simulation framework for DER coordination. A novel co-simulation framework integrating NS3 into HELICS is developed, and a DER monitoring application featuring a hybrid smart grid communication network design is simulated and validated on the proposed platform.

Communication and Computation Resource Allocation and Offloading for Edge Intelligence Enabled Fault Detection System in Smart Grid

Qiyue Li (Hefei University of Technology, Hefei, China); Yuxing Deng, Wei Sun and Weitao Li (Hefei University of Technology, China)

Smart grids must meet the electricity demand of modern production and life, and real-time monitoring is critical to enhancing the reliability and operational efficiency of power utilities. With the development of artificial intelligence and cloud computing, several studies have proposed exploiting the computing power of the cloud to build deep-learning-based fault detection systems. However, the transmission delay of the Internet backbone and the huge volume of data the system must upload lead to heavy bandwidth load and poor real-time responsiveness of the cloud platform. Realizing a decentralized system requires embedding intelligence at the edge of the network. In this paper, we propose an edge-computing-assisted smart grid fault detection system in which lightweight neural networks run on embedded devices placed near the monitored equipment to implement real-time monitoring. In addition, considering the limited communication resources, relatively low computation capabilities, and differing monitoring accuracies of edge devices, we design an optimal allocation method for communication and computation resources that maximizes system throughput and improves resource utilization while meeting data transmission and processing delay requirements. Finally, simulation experiments show that, compared with other smart grid fault detection architectures, the proposed system transmits more data within the delay bound, reduces transmission time, and enhances the real-time performance of smart grid fault detection.

Session Chair

Sarada Prasad Gochhayat (Univ. of Padua)
Zoom Room Host(s): Xinyi Wang (Cornell)

Session A3

Analytics 3: Data Analytics for Grid I

Conference: 4:10 PM — 5:00 PM UTC
Local: Nov 12 Thu, 11:10 AM — 12:00 PM EST

Seasonal Self-Evolving Neural Networks Based Short-term Wind Farm Generation Forecast

Yunchuan Liu (University of Nevada Reno, USA); Amir Ghasemkhani (California State University San Bernardino, USA); Lei Yang (University of Nevada, Reno, USA); Jun Zhao (Nanyang Technological University, Singapore); Junshan Zhang (Arizona State University, USA); Vijay Vittal (Ira A. Fulton Chair, USA)

This paper studies short-term wind farm generation forecasting. From real wind generation data, we observe that wind farm generation exhibits non-stationarity and seasonality, and that the dynamics of non-ramp, ramp-up, and ramp-down events differ across classes of wind turbines. To deal with such heterogeneous dynamics, we propose a seasonal self-evolving neural network approach to short-term wind farm generation forecasting. The approach first classifies the historical data into ramp-up, ramp-down, and non-ramp datasets for each season, and then trains a separate neural network on each dataset to capture the different power dynamics. To account for the non-stationarity and to reduce the burden of hyperparameter tuning, we train the networks with NeuroEvolution of Augmenting Topologies (NEAT), which evolves the neural networks using a genetic algorithm to find the best weights and network topology. Based on the seasonal self-evolving networks, we develop algorithms for both point forecasts and distributional forecasts. Experimental results using real wind generation data demonstrate significantly improved accuracy compared with other forecast approaches.
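The evolutionary-training ingredient can be sketched in miniature. NEAT also evolves the network topology; here only a single weight of a fixed linear model evolves, and the training pairs are fabricated.

```python
import random

def evolve_weights(data, generations=200, pop_size=20, seed=0):
    """Evolve a weight w minimising the squared error of y ~ w * x."""
    rng = random.Random(seed)

    def fitness(w):
        return -sum((w * x - y) ** 2 for x, y in data)

    # Random initial population of candidate weights.
    pop = [rng.uniform(-1.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the fitter half
        mutants = [w + rng.gauss(0.0, 0.1) for w in survivors]
        pop = survivors + mutants                 # refill the population
    return max(pop, key=fitness)

data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]       # roughly y = 2x
w = evolve_weights(data)
print(round(w, 2))
```

The selection-and-mutation loop replaces gradient descent, which is why such methods also cope with non-differentiable choices like network topology.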

Online Event Detection in Synchrophasor Data with Graph Signal Processing

Jie Shi, Brandon Foggo, Xianghao Kong, Yuanbin Cheng and Nanpeng Yu (University of California, Riverside, USA); Koji Yamashita (Michigan Technological University, USA)

Online detection of anomalies is crucial to enhancing the reliability and resiliency of power systems. We propose a novel data-driven online event detection algorithm for synchrophasor data based on graph signal processing. In addition to being extremely scalable, the proposed algorithm accurately captures and leverages the spatio-temporal correlations of streaming PMU data. The paper also develops a general technique to decouple spatial and temporal correlations in multiple time series, and a framework to construct a weighted adjacency matrix and graph Laplacian for the product graph. Case studies with real-world, large-scale synchrophasor data demonstrate the scalability and accuracy of the proposed algorithm: compared to the state-of-the-art benchmark, it achieves both higher detection accuracy and higher computational efficiency.
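One building block, forming a graph Laplacian L = D - A from a weighted adjacency matrix, can be sketched as follows. The edge weights are fabricated; the paper constructs the adjacency for a spatio-temporal product graph over the PMU network.

```python
def laplacian(adj):
    """Graph Laplacian L = D - A of a symmetric weighted adjacency matrix."""
    n = len(adj)
    deg = [sum(row) for row in adj]               # weighted node degrees
    return [[(deg[i] if i == j else 0.0) - adj[i][j] for j in range(n)]
            for i in range(n)]

# Three PMUs; edge weights encode pairwise correlation between buses.
A = [[0.0, 0.8, 0.2],
     [0.8, 0.0, 0.5],
     [0.2, 0.5, 0.0]]
L = laplacian(A)
print(L[0])   # [1.0, -0.8, -0.2]
```

Graph-signal-processing detectors look for measurement vectors that are "rough" with respect to L (large x^T L x), since a healthy grid produces signals that vary smoothly over strongly connected buses.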

DeVLearn: A Deep Visual Learning Framework for Determining the Location of Temporary Faults in Power Systems

Shuchismita Biswas, Rounak Meyur and Virgilio Centeno (Virginia Tech, USA)

Frequently recurring transient faults in a transmission network may be indicative of impending permanent failures, so determining their location is a critical task. Large-scale deployment of Phasor Measurement Units (PMUs) in modern power grids has given utilities access to precise measurements at high temporal resolution that can be used to estimate fault location. This paper proposes a novel image-embedding-aided deep learning framework, DeVLearn, for locating faulted lines using PMU measurements at generator buses. Inspired by breakthroughs in computer vision, DeVLearn represents measurements (one-dimensional time series) as two-dimensional unthresholded Recurrence Plot (RP) images. These RP images preserve the temporal relationships present in the original time series and are used to train a deep Variational Auto-Encoder (VAE), which learns the distribution of latent features in the images. Our results show that for faults on two distinct lines in the IEEE 68-bus network, DeVLearn projects PMU measurements into a two-dimensional space in which data for faults at different locations separate into well-defined clusters. This compressed representation can then be used with off-the-shelf classifiers to determine fault location. The efficacy of the framework is demonstrated using local voltage magnitude measurements at two generator buses.
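The unthresholded recurrence-plot embedding at the heart of DeVLearn is simple to sketch: each pixel is the distance between two samples of the time series, turning temporal structure into a 2-D "image" a convolutional model or VAE can consume. The signal below is fabricated.

```python
def recurrence_plot(series):
    """Unthresholded recurrence plot: R[i][j] = |x_i - x_j|."""
    return [[abs(xi - xj) for xj in series] for xi in series]

# Toy voltage-magnitude trace; the dip mimics a transient fault.
voltage = [1.00, 0.98, 0.60, 0.95, 1.00]
rp = recurrence_plot(voltage)
print(rp[0])   # distances of every sample from the first one
```

The plot is symmetric with a zero diagonal, and a transient event shows up as a bright cross of large distances, which is the visual pattern the downstream model learns to embed.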

Frequency-Based Multi-Task Learning with Attention Mechanism for Fault Detection in Power Systems

Peyman Tehrani (University of California Irvine, USA); Marco Levorato (University of California, Irvine, USA)

The prompt and accurate detection of faults and abnormalities in electric transmission lines is a critical challenge in smart grid systems. Existing methods mostly rely on model-based approaches, which may not capture all aspects of these complex time series. Recently, the availability of data sets collected by advanced metering devices such as Micro-Phasor Measurement Units (µPMUs), which provide measurements at microsecond timescales, has boosted the development of data-driven methodologies. In this paper, we introduce a novel deep-learning-based approach for fault detection and test it on a real data set, namely the Kaggle partial discharge detection task. Our solution adopts a Long Short-Term Memory architecture with an attention mechanism to extract time-series features, and uses a 1D Convolutional Neural Network to exploit the frequency content of the signal for prediction. Additionally, we propose an unsupervised method to cluster signals based on their frequency components and apply multi-task learning across the clusters. The proposed method outperforms the winning solutions of the Kaggle competition and other state-of-the-art methods on many performance metrics, and improves the interpretability of the analysis.

Session Chair

Jinsub Kim (Oregon State University)
Zoom Room Host(s): Travis Hagan, Shashini Desilva (Oregon State)

Session IS-3

Invited 3: Electricity Markets

Conference: 4:10 PM — 5:00 PM UTC
Local: Nov 12 Thu, 11:10 AM — 12:00 PM EST

Market Design for Prosumers on the Distribution Grid

Subhonmesh Bose (UIUC)

Integration of distributed energy resources in low and medium voltage distribution grids is on the rise. In this talk, I will present the design and analysis of a market mechanism that is suitable to harness these resources at scale. By market mechanism, I mean a bid/offer format, a market clearing mechanism, and a settlement scheme that accounts for the unbalanced multiphase nature of the distribution grid and the two-sided nature of the market (where participants can act as both demanders and suppliers). The mechanism combines a specially designed scalar-parameterized offer/bid format with a convex relaxation-based pricing scheme. Properties such as the ability to support competitive equilibrium, revenue adequacy, efficiency loss with strategic participants, etc. are analyzed.

A Flexibility Market-based Approach to Increase Penetration of Renewables in the Power Grid

Vijay Gupta (Notre Dame)

The traditional framework of the power grid is built around tailoring production from reliable generators to meet inflexible demand. However, renewable sources are intermittent and not well-forecasted at the time scales of interest to the existing energy markets. Thus, new market mechanisms are required to ensure reliable power supply in the presence of large scale renewables. We propose a framework for trading real options through which flexible sources are incentivized to mitigate the effect of renewable intermittence. We show that such options can increase renewable penetration while ensuring the delivery of reliable power and guaranteeing that no market participants are worse-off.

Session Chair

Junjie Qin (U. Berkeley)
Zoom Room Host(s): Austin Lasseter, Galpottage Senaratne (Oregon State)

Session S3

Security 3: Attack and Defence

Conference: 4:10 PM — 5:00 PM UTC
Local: Nov 12 Thu, 11:10 AM — 12:00 PM EST

ED4GAP: Efficient Detection for GOOSE-Based Poisoning Attacks on IEC 61850 Substations (Best Student Paper Award Nominee)

Atul Bohara (University of Illinois at Urbana-Champaign, USA); Jordi Ros-Giralt (Reservoir Labs, Inc., USA); Ghada Elbez (Karlsruhe Institute of Technology (KIT), Germany); Alfonso Valdes (University of Illinois Urbana-Champaign, USA); Klara Nahrstedt (University of Illinois, USA); Bill Sanders (University of Illinois at Urbana-Champaign, USA)

Devices in IEC 61850 substations use the Generic Object-Oriented Substation Events (GOOSE) protocol to exchange protection-related events. Because it lacks authentication and encryption, GOOSE is vulnerable to man-in-the-middle attacks: an adversary with access to the substation network can inject carefully crafted messages to impact the grid's availability. One of the most common such attacks, GOOSE-based poisoning, modifies the StNum and SqNum fields in the protocol data unit to take over GOOSE publications. In this paper, we present ED4GAP, a network-level system for efficient detection of these poisoning attacks. We define a finite-state-machine model of the network communication concerned; guided by the model, ED4GAP analyzes network traffic out-of-band and detects attacks in real time. We implement a prototype of the system, evaluate its detection accuracy, and provide a systematic approach to assessing bottlenecks and improving performance, demonstrating that ED4GAP has low overhead and meets GOOSE's timing constraints.
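The StNum/SqNum plausibility check at the core of such a detector can be sketched as a small state test. This is a simplification of ED4GAP's finite state machine: in GOOSE, StNum increments once per new event and SqNum counts retransmissions of the same event, restarting at each StNum change (counter-restart conventions vary across implementations).

```python
def check(prev, msg):
    """Return True if the (stnum, sqnum) pair in msg is plausible after prev."""
    (p_st, p_sq), (st, sq) = prev, msg
    if st == p_st:                 # retransmission of the same event
        return sq == p_sq + 1
    if st == p_st + 1:             # new event: sequence counter restarts
        return sq == 0
    return False                   # jump ahead: classic poisoning pattern

assert check((5, 3), (5, 4))       # normal retransmission
assert check((5, 4), (6, 0))       # legitimate new event
assert not check((5, 4), (9, 0))   # spoofed high StNum to take over
print("ok")
```

A poisoner typically publishes a message with an inflated StNum so subscribers ignore the legitimate publisher; the jump-ahead branch above is the signature such a detector flags.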

Power Grid State Estimation under General Cyber-Physical Attacks (Best Paper Award Nominee)

Yudi Huang (Pennsylvania State University, USA); Ting He (Penn State University, USA); Nilanjan Chaudhuri (Pennsylvania State University, USA); Thomas La Porta (The Pennsylvania State University, USA)

Effective defense against cyber-physical attacks on the power grid requires accurate damage assessment within the attacked area. While solutions have been proposed to recover the phase angles and the breaker status of lines within the attacked area, they make the limiting assumption that the grid stays connected after the attack. To fill this gap, we study the problem of recovering the phase angles and the breaker status under a general cyber-physical attack that may partition the grid into islands. To this end, we (i) show that the existing solutions and recovery conditions still hold if the post-attack power injections in the attacked area are known, and (ii) propose a linear-programming-based algorithm that can perfectly recover the breaker status under certain conditions even when the post-attack power injections are unknown. Numerical evaluations on the Polish power grid demonstrate that the proposed algorithm is highly accurate in localizing failed lines even when the conditions for perfect recovery are hard to satisfy.

Network-aware Mitigation of Undetectable PMU Time Synchronization Attacks

Ezzeldin Shereen and György Dán (KTH Royal Institute of Technology, Sweden)

Time synchronization attacks are an emerging threat to many future smart grid applications; their mitigation is thus of utmost importance. In this paper we consider the problem of mitigating, in precision time protocol networks, attacks that are undetectable by state-of-the-art power system state estimation. We formulate our problem as an integer linear program and show that it is NP-hard. We then provide a polynomial-time approximation algorithm through a reduction from the group Steiner tree problem. We evaluate the performance of the proposed algorithm through extensive simulations, comparing it to a greedy heuristic. Our results show that the approximation algorithm performs within a factor of 1.8 of the optimal solution for synthetic topologies, while the greedy algorithm performs even better. On IEEE benchmark power systems the approximation algorithm performs within a factor of 1.1 of the optimal solution, matching the greedy heuristic.

Automatically Locating Mitigation Information for Security Vulnerabilities

Kylie L McClanahan (University of Arkansas at Fayetteville, USA); Qinghua Li (University of Arkansas, USA)

Software vulnerabilities pose significant security risks to systems. Patching can usually fix vulnerabilities, but patches are not always available, and in many cases patching is not preferred due to high overhead and potential service interruptions, which is especially true for the electric industry. Other strategies are then needed to mitigate security vulnerabilities.
Information about mitigation strategies can be difficult to find and is typically only reported on vendor or third-party websites. In current practice, such information is located manually by security operators, which induces high delays and operational cost.
We consider this problem within the electric industry, which has particular importance and challenges because of its regulatory requirements.
We propose that providing electric utilities with automatically located mitigation information will help them overcome this time burden and mitigate vulnerabilities in a more timely manner. In particular, we develop three methods for automatically retrieving mitigation information from vendor or third-party websites. Experimental results show high performance for all three methods.

Session Chair

Inaki Esnaola (University of Sheffield, UK) <br> Zoom Room Host(s): Martiya Jahromi (Univ. of Toronto), Shammya Saha (ASU)

Session IS-4

Invited 4: Energy Storage and DER

Conference
5:10 PM — 6:00 PM UTC
Local
Nov 12 Thu, 12:10 PM — 1:00 PM EST

Mobile Storage for Demand Charge Reduction

Kameshwar Poolla, Junjie Qin (UC Berkeley)

EV batteries, an increasingly prominent type of energy resource, are largely underutilized. This paper proposes a new business model that monetizes underutilized EV batteries to significantly reduce the demand charge portion of many commercial and industrial users' electricity bills. This business requires minimal hardware to enable discharging of electric vehicle batteries, plus a sharing platform that matches EVs to commercial electricity users in real time. Using real meter data, we establish the financial viability of the business by studying the temporal distribution of user requests. We then discuss user-side and platform-side challenges for implementing this business.

Managing Distributed Energy Resources with Privacy and Fairness

Ram Rajagopal (Stanford)

The grid edge is transforming rapidly, with the adoption of rooftop PV, electric vehicles, storage and smart loads. This is enabling consumers to potentially provide services to the grid ecosystem by relying on aggregation services that coordinate a significant amount of resources to qualify for participation in demand-side management. Aggregators collect information about consumer resources in real time, determine optimal strategies, and send control signals to various resources. Typically, these decisions are made to maximize overall profits and reliability. This requires consumer data that can reveal private and sensitive preferences. Moreover, the computed control signals might demand significantly more from distributed energy resources that have higher availability or capacity, which could be significantly correlated with protected consumer attributes. This naturally leads to the question: can we design DER management strategies that satisfy privacy and fairness constraints? In this talk, we review existing approaches, propose some novel ideas based on recent developments in privacy and fairness, and identify opportunities for further research.

Session Chair

Oliver Kosut (ASU) <br> Zoom Room Host(s): Galpottage Senaratne, Austin Lasseter (Oregon State)

Session Tutorial-1

Stochastic Models and Optimization Techniques for Efficient Integration of Electric Vehicles in Smart Grids

Conference
5:10 PM — 5:40 PM UTC
Local
Nov 12 Thu, 12:10 PM — 12:40 PM EST

Stochastic Models and Optimization Techniques for Efficient Integration of Electric Vehicles in Smart Grids

Muhammad Ismail (TNTech), I Safak Bayram (U. Strathclyde)

The past decade has witnessed a growing interest in electric vehicles (EVs) from both academia and industry. Such an interest is driven by the environmental and economic advantages brought by EVs. A recent study has revealed that the annual operation cost of an EV in the U.S. is $485 on average, while it is $1,117 for a gasoline-fueled vehicle, which represents a 57% reduction in annual expenses. Furthermore, recent studies have demonstrated that EVs can significantly reduce carbon dioxide emissions as they reduce the dependence on fossil fuel. Due to the aforementioned advantages, a recent report has shown that the number of EVs on U.S. roads has increased over the past decade from a couple of thousand in 2011 to 1.2 million vehicles in 2019. A similar trend has also been observed worldwide. To accommodate the charging demands of such EVs, charging facilities have been deployed across the parking lots at residential and commercial units and at workplaces. Furthermore, fast charging stations have been allocated to serve EVs traveling on the roads. To cope with the exponential increase in the number of EVs, additional measures have been adopted, including temporal and spatial coordination of EV charging and discharging requests. In order to carry out the aforementioned planning and operational goals, advanced stochastic models and optimization techniques must be employed in order to: (a) model the stochastic nature of arrival and departure of EV charging requests, (b) model regular loads and generation units in the power grid to efficiently balance the total supply and demand, (c) allocate EV charging stations in the most economic manner while accounting for spatial and temporal increase of EV charging demands, and (d) coordinate the charging requests of parked and mobile EVs in the most satisfactory manner.
This tutorial will equip researchers with the theoretical background of stochastic models and optimization techniques needed for efficient integration of EVs in smart grids. These tools include: Markov processes, queueing models, stochastic geometry, mixed-integer programming, heuristic optimization, and game theory. The application of these tools to the design of planning and operation algorithms for EV integration in smart grids will be covered. This includes optimal static and dynamic allocation of charging stations, optimal design of the number of chargers and waiting space in a charging station, and temporal and spatial coordination of charging requests from parked and mobile EVs in grid-to-vehicle (G2V), vehicle-to-grid (V2G), and vehicle-to-vehicle (V2V) scenarios. Furthermore, the tutorial will present datasets and simulators available to researchers and discuss their application scenarios.

Session Chair

Zoom Room Host(s): Shashini Desilva, Travis Hagan (Oregon State)

Session WAEG2

Workshop on Autonomous Energy Grid: A Distributed Optimization and Control Perspective 2

Conference
5:10 PM — 6:00 PM UTC
Local
Nov 12 Thu, 12:10 PM — 1:00 PM EST

Power System State Estimation Using Gauss-Newton Unrolled Neural Networks with Trainable Priors

Qiuling Yang (Beijing Institute of Technology, China); Alireza Sadeghi (University of Minnesota, USA); Gang Wang (Beijing Institute of Technology, China); Georgios B. Giannakis (University of Minnesota, USA); Jian Sun (Beijing Institute of Technology, China)

Power system state estimation (PSSE) aims at finding the voltage magnitudes and angles at all generation and load buses, using meter readings and other available information. PSSE is often formulated as a nonconvex nonlinear least-squares (NLS) problem, which is traditionally solved by the Gauss-Newton method. However, Gauss-Newton iterations for minimizing nonconvex problems are sensitive to the initialization, and they can diverge. In this context, we advocate a deep neural network (DNN) based "trainable regularizer" to incorporate prior information for accurate and reliable state estimation. The resulting regularized NLS problem does not admit a neat closed-form solution. To handle this, a novel end-to-end DNN is constructed by unrolling a Gauss-Newton-type solver which alternates between the least-squares loss and the regularization term. Our DNN architecture can further offer a suite of advantages, e.g., accommodating network topology via a graph neural network based prior. Numerical tests using real load data on the IEEE 118-bus benchmark system showcase the improved estimation performance of the proposed scheme compared with state-of-the-art alternatives. Interestingly, our results suggest that a simple feed-forward network based prior implicitly exploits the topology information hidden in the data.
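
The classical Gauss-Newton iteration that the unrolled network is built around can be sketched as follows (an illustrative toy measurement model, not a power system model, and not the paper's regularized variant):

```python
# Minimal Gauss-Newton sketch for a nonlinear least-squares fit:
# minimize ||z - h(x)||^2 by repeatedly linearizing h around the current
# iterate and solving the resulting normal equations.
import numpy as np

def gauss_newton(h, jac, z, x0, iters=20):
    """Minimize ||z - h(x)||^2 via Gauss-Newton updates."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = z - h(x)                         # residual
        J = jac(x)                           # Jacobian of h at x
        # Normal-equations step: (J^T J) dx = J^T r
        dx = np.linalg.solve(J.T @ J, J.T @ r)
        x = x + dx
    return x

# Toy example: recover x from quadratic "measurements"
# h(x) = [x0^2, x0*x1, x1^2], loosely mimicking the quadratic dependence
# of power measurements on voltage states.
h = lambda x: np.array([x[0] ** 2, x[0] * x[1], x[1] ** 2])
jac = lambda x: np.array([[2 * x[0], 0.0],
                          [x[1], x[0]],
                          [0.0, 2 * x[1]]])
z = h(np.array([2.0, 3.0]))                  # noiseless measurements
x_hat = gauss_newton(h, jac, z, x0=[1.0, 1.0])
```

The sensitivity to initialization mentioned in the abstract shows up here too: a poor `x0` can send the iterates to the mirrored solution or cause divergence, which is exactly what the learned prior is meant to counteract.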

Personalized Demand Response via Shape-Constrained Online Learning

Ana Ospina (University of Colorado Boulder, USA); Andrea Simonetto (IBM Research Ireland, Ireland); Emiliano Dall'Anese (University of Colorado Boulder, USA)

This paper formalizes a demand response task as an optimization problem featuring a known time-varying engineering cost and an unknown (dis)comfort function. Based on this model, this paper develops a feedback-based projected gradient method to solve the demand response problem in an online fashion, where: i) feedback from the user is leveraged to learn the (dis)comfort function concurrently with the execution of the algorithm; and, ii) measurements of electrical quantities are used to estimate the gradient of the known engineering cost. To learn the unknown function, a shape-constrained Gaussian Process is leveraged; this approach allows one to obtain an estimated function that is strongly convex and smooth. The performance of the online algorithm is analyzed using metrics such as the tracking error and the dynamic regret. A numerical example corroborates the technical findings.
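
The online projected gradient idea can be sketched in a few lines (a hypothetical scalar example with a box constraint; in the paper's setting part of the gradient comes from measurements and a learned (dis)comfort model rather than a closed-form expression):

```python
# Illustrative online projected gradient descent on a time-varying cost:
# at each step, take a gradient step on the current cost and project the
# result onto the feasible set.

def online_projected_gradient(grad_fn, x0, lo, hi, steps, alpha=0.1):
    """Track the minimizer of a drifting cost; grad_fn(t, x) is the
    gradient of the time-t cost at x."""
    x = float(x0)
    trajectory = []
    for t in range(steps):
        x = x - alpha * grad_fn(t, x)   # gradient step on the current cost
        x = min(max(x, lo), hi)         # projection onto the box [lo, hi]
        trajectory.append(x)
    return trajectory

# Toy time-varying cost (x - target(t))^2 with a slowly drifting target,
# standing in for the engineering-plus-discomfort objective.
target = lambda t: 1.0 + 0.01 * t
grad = lambda t, x: 2.0 * (x - target(t))
traj = online_projected_gradient(grad, x0=0.0, lo=0.0, hi=5.0, steps=200)
```

Because the cost drifts each step, the iterate lags slightly behind the moving optimum; that steady-state lag is what tracking-error and dynamic-regret metrics quantify.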

An Extensible Software and Communication Platform for Distributed Energy Resource Management

Gabe Fierro (University of California, Berkeley, USA); Keith Moffat (UC Berkeley, USA); Jasper Pakshong (University of California, Berkeley, USA); Alexandra von Meier (University of California, Berkeley & CIEE, USA)

This paper introduces a novel Distributed Extensible Grid Control (DEGC) software and communication platform to facilitate the implementation and deployment of distributed control strategies for distributed energy resources on electric grids. The communication platform provides security and extensibility by building on top of the WAVEMQ publish-subscribe message bus, which administers decentralized authorization and authentication. The software platform provides extensibility by providing appropriate abstractions and clearly defined APIs. We discuss general requirements for the platform, and investigate Volt-VAR voltage magnitude control and Phasor-Based Control as sample applications. We demonstrate the DEGC platform in hardware with the demanding Phasor-Based Control test case, and provide performance metrics.

Decentralized Frequency Control using Packet-based Energy Coordination

Hani Mavalizadeh, Luis Duffaut Espinosa and Mads Almassalkhi (University of Vermont, USA)

This paper presents a novel frequency-responsive control scheme for demand-side resources, such as electric water heaters. A frequency-dependent control law is designed to provide damping from distributed energy resources (DERs) in a fully decentralized fashion. This local control policy represents a frequency-dependent threshold for each DER that ensures that the aggregate response provides damping during frequency deviations. The proposed decentralized policy is based on an adaptation of a packet-based DER coordination scheme where each device sends requests for energy access (also called "energy packets") to an aggregator. The number of previously accepted active packets can then be used a priori to form an online estimate of the aggregate damping capability of the DER fleet in a dynamic power system. A simple two-area power system is used to illustrate and validate the performance of the decentralized control policy and the accuracy of the online damping estimate for a fleet of 400,000 DERs.
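
The threshold mechanism behind such a policy can be sketched as follows (an illustrative toy, not the paper's design; the threshold range and fleet size are hypothetical):

```python
# Hedged sketch of a frequency-dependent threshold policy for a DER fleet:
# each device acts only while the locally measured frequency deviation
# exceeds its own threshold, so larger deviations recruit more devices and
# the aggregate response grows roughly in proportion, providing damping.
import random

def fleet_response(freq_deviation_hz, thresholds):
    """Fraction of DERs that respond at this frequency deviation."""
    active = sum(1 for th in thresholds if abs(freq_deviation_hz) >= th)
    return active / len(thresholds)

random.seed(0)
# Spread thresholds uniformly over [0, 0.5] Hz across the fleet, so the
# aggregate response is approximately linear in the deviation.
thresholds = [random.uniform(0.0, 0.5) for _ in range(10_000)]
r_small = fleet_response(0.05, thresholds)   # mild deviation, few respond
r_large = fleet_response(0.25, thresholds)   # large deviation, ~half respond
```

No device needs to communicate to decide whether to act, which is what makes the policy fully decentralized; the aggregator only uses the accepted-packet count to estimate the fleet's damping capability.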

Multi-Level Optimal Power Flow Solver in Large Distribution Networks

Xinyang Zhou and Yue Chen (National Renewable Energy Laboratory, USA); Zhiyuan Liu (University of Colorado, Boulder, USA); Changhong Zhao (The Chinese University of Hong Kong, Hong Kong); Lijun Chen (University of Colorado at Boulder, USA)

Solving the optimal power flow (OPF) problem for large distribution networks incurs high computational complexity. We consider a large multi-phase distribution network of tree topology with deep penetration of active devices. We divide the network into collaborating areas featuring subtree topology and subareas featuring sub-subtree topology. We design a multi-level implementation of the primal-dual gradient algorithm for solving the voltage regulation OPF problem while preserving nodal voltage information and topological information within areas and subareas. Numerical results on a 4,521-node system verify that the proposed algorithm can significantly improve computational speed without compromising optimality.

Session Chair

Guido Cavraro (NREL), Ahmed Zamzam (NREL) <br> Zoom Room Host(s): Nima Taghizhpourbazargani (ASU), Nathaniel Tucker (UC Riverside)

Session Tutorial-2

Power System Machine Learning Applications: From physics-informed learning for decision support to inference at the edge for control

Conference
5:40 PM — 6:10 PM UTC
Local
Nov 12 Thu, 12:40 PM — 1:10 PM EST

Power System Machine Learning Applications: From physics-informed learning for decision support to inference at the edge for control

Luigi Vanfretti (RPI), Tetiana Bogodorova (RPI)

Electrical power networks are facing unique challenges in their operation and control. With the increasing penetration of variable and intermittent renewable energy sources and limited transmission capabilities, grid operation and control is becoming ever more complex. However, the transition into the "digital utility" is bringing unprecedented opportunities to leverage measurement data together with conventional analysis methods, which, combined, can help achieve the goals of a "green" energy transition. In this context, Artificial Intelligence and Machine Learning are emerging as a cohort of theory, methods and technologies that, if applied properly to power system problems, may make invaluable contributions to solving existing and future grid challenges. This tutorial provides insights from a team of power system specialists on the development of Machine Learning-based applications for power systems using both measurements and physics-informed simulation. The focus of the presentation is on how to frame power system problems and apply existing ML methods and technologies, not on ML itself. The tutorial is divided into three parts. First, an overview of today's hierarchical power system operation and control is given, identifying a few of the potential areas where ML can be of substantial benefit, from decision making at the control center to inference at edge devices for control and protection. Second, on-going research in the development of a "recommender system" for operator decision support will be presented. This type of predictive application cannot rely on measurement data alone; it has to be complemented with physics-informed models and simulation. In other words, this is a case where both measurement-based and simulation-based ML analytics need to be combined.
Hence, this part of the presentation places a strong emphasis on the careful design of simulation models, algorithms for automated simulation scenario design, and software pipelines for automated generation of simulation data. Many challenges were faced when building a toolchain that makes this possible. We illustrate the challenges of adopting not only ML methods, but also the computing software environments and hardware required in ML workflows, so as to scale to realistic use cases. We then present the first results obtained using our proposed approach for classification of power system stability, using both traditional data science methods and Deep Learning. Finally, on-going research in the development of "edge" applications in power systems will be presented. The use case is the detection of undesirable sub-synchronous control interactions between the power grid and wind turbines for potential use in control and protection at the "edge", i.e., at the wind-farm location, which requires an ML-based apparatus capable of providing reliable predictions in real time. We illustrate the challenge of having a reduced measurement data set to train such a detection algorithm, and how simulation helps to solve this problem. Furthermore, we illustrate the performance of the developed ML-based solution on three different hardware platforms.

Session Chair

Zoom Room Host(s): Arka Sanka (UT Austin), Manish Singh (Virginia Tech)
