April 2, 2025

Solving DCOPF Optimization Problem on AWS EC2 Instances

This series of articles explores solutions to the DCOPF (Direct Current Optimal Power Flow) optimization problem on various hardware configurations. We will focus on solving the linear approximation of the problem using two solvers: the open-source GLPK and the commercial FICO solver.
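The linear (DC) approximation reduces OPF to a linear program: minimize generation cost subject to nodal power balance and line-flow limits. As a minimal sketch of the kind of problem the solvers are given, here is a hypothetical 3-bus instance solved with SciPy's linprog (the bus data, costs, and limits are illustrative assumptions, not the benchmark system used in our experiments):

```python
from scipy.optimize import linprog

# Hypothetical 3-bus DCOPF (per-unit data, illustrative only).
# Variables: x = [Pg1, Pg2, th2, th3]; bus 1 is the slack (th1 = 0).
# Generators at buses 1 and 2; a 1.0 p.u. load at bus 3.
# All three lines have reactance 1.0, so flow_ij = th_i - th_j.

c = [10.0, 20.0, 0.0, 0.0]      # linear generation costs; gen 1 is cheaper

# Nodal balance, B*theta = Pg - Pd, written as A_eq @ x = b_eq:
A_eq = [
    [1.0, 0.0,  1.0,  1.0],     # bus 1: Pg1 + th2 + th3 = 0
    [0.0, 1.0, -2.0,  1.0],     # bus 2: Pg2 - 2*th2 + th3 = 0
    [0.0, 0.0, -1.0,  2.0],     # bus 3: 2*th3 - th2 = -Pd = -1.0
]
b_eq = [0.0, 0.0, -1.0]

# Line limits |flow| <= F, one pair of rows per line.
# Line 1-3 is capped at 0.6 p.u. to force a congested dispatch.
A_ub = [
    [0.0, 0.0,  0.0, -1.0],     #  flow_13 = -th3      <= 0.6
    [0.0, 0.0,  0.0,  1.0],     # -flow_13             <= 0.6
    [0.0, 0.0, -1.0,  0.0],     #  flow_12 = -th2      <= 1.0
    [0.0, 0.0,  1.0,  0.0],     # -flow_12             <= 1.0
    [0.0, 0.0,  1.0, -1.0],     #  flow_23 = th2 - th3 <= 1.0
    [0.0, 0.0, -1.0,  1.0],     # -flow_23             <= 1.0
]
b_ub = [0.6, 0.6, 1.0, 1.0, 1.0, 1.0]

bounds = [(0.0, 1.0), (0.0, 1.0), (None, None), (None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:2], res.fun)       # congestion pushes 0.2 p.u. onto the expensive unit
```

Without the 0.6 p.u. cap on line 1-3, the cheap generator would serve the entire load; the binding flow limit is what forces the dispatch [0.8, 0.2] at cost 12. Real benchmark instances differ only in scale, which is why solve time is the quantity of interest below.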

The first article will focus on CPU configurations. We will utilize AWS EC2 instances with different configurations, primarily varying the number of cores and the amount of RAM allocated to the instances.

Amazon Web Services (AWS) offers a variety of Elastic Compute Cloud (EC2) instances to suit different workload requirements. Some of the common instance types include:

  • General Purpose Instances: These instances offer a balance of compute, memory, and networking resources, making them suitable for a wide range of applications, such as web servers and code repositories. Examples include T2 and M5 instances.
  • Compute Optimized Instances: These instances are designed for compute-intensive tasks that require high performance processors. They are well-suited for applications such as batch processing and media transcoding. Examples include C4 and C5 instances.
  • Memory Optimized Instances: These instances are built for workloads that require large amounts of memory, such as in-memory databases and real-time big data analytics. Examples include R4 and R5 instances.

Experiment Setup and Results

Our research began with T2 instances, on which we solved the problem under the configurations below with both GLPK and FICO. We observed that while doubling the number of cores and the amount of RAM initially decreased solve time, the improvement ceased beyond 4 cores and 16 GB of RAM. We therefore consider 4 cores and 16 GB RAM the optimal configuration for this problem.

Configuration                        FICO    GLPK
t2.large   (2 vCPUs, 8 GB RAM)       7363    12261
t2.xlarge  (4 vCPUs, 16 GB RAM)      5945    9719
t2.2xlarge (8 vCPUs, 32 GB RAM)      5817    10503
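Each cell in these tables is a wall-clock measurement of a single end-to-end solve. The harness itself is simple; as a sketch of the pattern, here we time SciPy's linprog on a toy LP as a stand-in for the actual GLPK/FICO invocations (which are not shown):

```python
import time
from scipy.optimize import linprog

def timed_solve(solve, *args, **kwargs):
    """Run one solve and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = solve(*args, **kwargs)
    return result, time.perf_counter() - start

# Stand-in problem: minimize x + y subject to x + 2y >= 1, x, y >= 0.
res, elapsed = timed_solve(
    linprog,
    [1.0, 1.0],
    A_ub=[[-1.0, -2.0]],    # x + 2y >= 1 rewritten as -x - 2y <= -1
    b_ub=[-1.0],
    bounds=[(0.0, None), (0.0, None)],
)
print(f"objective={res.fun:.3f} time={elapsed:.4f}s")
```

In practice one would repeat each measurement several times per instance type and report a representative value, since cloud instances show run-to-run variance.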

To determine whether the problem is compute intensive or memory intensive, we tested one EC2 instance from the C family (compute optimized) and one from the R family (memory optimized) and compared the results.

Configuration                        FICO    GLPK
c5.xlarge  (4 vCPUs, 16 GB RAM)      5161    9013
r5.xlarge  (4 vCPUs, 16 GB RAM)      6146    9514

The results indicate that the C instance solved the problem faster than the T and R instances with the same vCPU and RAM configuration, which suggests that the problem is compute intensive.

To verify this further, we ran the process on a T2 instance with 4 cores but limited the RAM to 8 GB, to confirm that memory was not a factor. The results are below.

Configuration                        FICO    GLPK
t2.xlarge  (4 vCPUs, 8 GB RAM)       5894    10094
t2.xlarge  (4 vCPUs, 16 GB RAM)      5945    9719

The data above show that the problem does not solve faster simply because more RAM is available, which confirms that it is not memory intensive.

Conclusion

The problem is compute intensive, not memory intensive, and it scales with additional computing resources only up to a limit (4 cores and 16 GB RAM in our tests).

Note for Second Post

In the second post of this series, we will explore whether solving the DCOPF optimization problem runs faster on GPU-enabled EC2 instances. We will compare the performance of GPU instances with CPU-only instances and discuss the factors that affect the performance gains, if any.
