April 2, 2025

Solving DCOPF Optimization Problem on AWS EC2 Instances

This series of articles explores solutions to the DCOPF (Direct Current Optimal Power Flow) optimization problem on various hardware configurations. We will focus on solving the approximate linear version of the problem using two solvers: the open-source GLPK and the commercial FICO solver.
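To make the problem concrete, below is a minimal sketch of the linear (DC) OPF formulation: minimize generation cost subject to nodal power balance, linearized line flows, and flow and generation limits. The model is built with PuLP and handed to GLPK purely for illustration; the post does not state which modeling interface was used, and the 3-bus network, costs, and limits are placeholders, not the system benchmarked in this series.

    import pulp

    # Illustrative 3-bus system (per-unit data); not the system benchmarked in this post.
    buses = [1, 2, 3]
    lines = {(1, 2): {"b": 10.0, "fmax": 1.5},   # susceptance, flow limit
             (1, 3): {"b": 10.0, "fmax": 1.5},
             (2, 3): {"b": 10.0, "fmax": 1.5}}
    gens = {1: {"cost": 10.0, "pmax": 2.0},      # cost per unit generated, capacity
            2: {"cost": 30.0, "pmax": 2.0}}
    load = {3: 1.5}                              # demand at bus 3

    prob = pulp.LpProblem("dcopf", pulp.LpMinimize)

    p = {g: pulp.LpVariable(f"p_{g}", 0, gens[g]["pmax"]) for g in gens}      # generation
    theta = {b: pulp.LpVariable(f"theta_{b}") for b in buses}                 # bus voltage angles
    f = {(i, j): pulp.LpVariable(f"f_{i}_{j}", -d["fmax"], d["fmax"])
         for (i, j), d in lines.items()}                                      # line flows

    # Objective: total generation cost.
    prob += pulp.lpSum(gens[g]["cost"] * p[g] for g in gens)

    # DC flow approximation: f_ij = B_ij * (theta_i - theta_j).
    for (i, j), d in lines.items():
        prob += f[(i, j)] == d["b"] * (theta[i] - theta[j])

    # Nodal power balance: generation + inflow - outflow = demand.
    for b in buses:
        inflow = pulp.lpSum(f[(i, j)] for (i, j) in lines if j == b)
        outflow = pulp.lpSum(f[(i, j)] for (i, j) in lines if i == b)
        prob += p.get(b, 0) + inflow - outflow == load.get(b, 0.0)

    prob += theta[1] == 0                        # reference (slack) bus angle

    prob.solve(pulp.GLPK_CMD(msg=False))
    print(pulp.LpStatus[prob.status], {g: p[g].value() for g in gens})

The real problem instances solved in this series are far larger, but the structure above (a pure LP with flow-definition and balance constraints) is what makes the DC approximation attractive for benchmarking solvers.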

This first article focuses on CPU configurations. We will use AWS EC2 instances, primarily varying the number of cores and the amount of RAM allocated to each instance.

Amazon Web Services (AWS) offers a variety of Elastic Compute Cloud (EC2) instance types to suit different workload requirements. Some of the common instance types include the following (a short provisioning sketch follows the list):

  • General Purpose Instances: These instances offer a balance of compute, memory, and networking resources, making them suitable for a wide range of applications, such as web servers and code repositories. Examples include T2 and M5 instances.
  • Compute Optimized Instances: These instances are designed for compute-intensive tasks that require high-performance processors. They are well-suited for applications such as batch processing and media transcoding. Examples include C4 and C5 instances.
  • Memory Optimized Instances: These instances are built for workloads that require large amounts of memory, such as in-memory databases and real-time big data analytics. Examples include R4 and R5 instances.
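
As referenced above, here is a minimal sketch of launching one of these instance types programmatically with boto3. This is an assumption about tooling, as the post does not describe the exact provisioning steps, and the region, AMI ID, and key-pair name are placeholders.

    import boto3

    # Minimal provisioning sketch (hypothetical values throughout).
    ec2 = boto3.client("ec2", region_name="us-east-1")   # region is a placeholder

    response = ec2.run_instances(
        ImageId="ami-xxxxxxxxxxxxxxxxx",   # placeholder AMI with the solvers preinstalled
        InstanceType="t2.xlarge",          # 4 vCPUs, 16 GB RAM - one of the configurations tested
        MinCount=1,
        MaxCount=1,
        KeyName="benchmark-key",           # placeholder key pair
    )
    print(response["Instances"][0]["InstanceId"])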

Experiment Setup and Results

Our experiments began with T2 instances; we solved the problem on each of the configurations below with both FICO and GLPK. We observed that while doubling the number of cores or the amount of RAM initially decreased solve time, the improvement ceased beyond 4 cores and 16 GB of RAM. We therefore found 4 cores and 16 GB of RAM to be the optimal T2 configuration for this problem.

Configuration                        FICO     GLPK
t2.large    (2 vCPUs, 8 GB RAM)      7363    12261
t2.xlarge   (4 vCPUs, 16 GB RAM)     5945     9719
t2.2xlarge  (8 vCPUs, 32 GB RAM)     5817    10503
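
For context, the following is a rough sketch of one way such wall-clock timings could be collected; the post's actual benchmarking harness is not shown, glpsol is GLPK's command-line solver, and dcopf.lp stands in for the exported model file.

    import subprocess
    import time

    MODEL_FILE = "dcopf.lp"   # hypothetical LP-format export of the DCOPF model

    start = time.perf_counter()
    subprocess.run(["glpsol", "--lp", MODEL_FILE, "-o", "dcopf_solution.txt"], check=True)
    elapsed = time.perf_counter() - start
    print(f"GLPK wall-clock solve time: {elapsed:.1f} s")

The commercial solver can be timed the same way through its own command-line interface.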

To determine whether the problem is compute-intensive or memory-intensive, we tested two further EC2 instances, one from the compute-optimized C family and one from the memory-optimized R family, and noted the observations.

Configuration                        FICO     GLPK
c5.xlarge   (4 vCPUs, 16 GB RAM)     5161     9013
r5.xlarge   (4 vCPUs, 16 GB RAM)     6146     9514

The results indicate that the C instance solved the problem faster than T and R instances with the same vCPU and RAM configuration, which suggests that the problem is compute-intensive.

To further verify this, we ran the solver on a T2 instance with 4 cores but with the available RAM limited to 8 GB, to confirm that memory was not the bottleneck. The results are below.

Configuration                        FICO     GLPK
t2.xlarge   (4 vCPUs, 8 GB RAM)      5894    10094
t2.xlarge   (4 vCPUs, 16 GB RAM)     5945     9719

The data above indicate that the problem does not solve more quickly simply by increasing RAM, which suggests that the problem is not memory-intensive.

Conclusion

The DCOPF problem is compute-intensive, not memory-intensive. We also found that solve times improve with additional compute resources only up to a limit: in our tests, the gains stopped beyond 4 cores and 16 GB of RAM.

Note for Second Post

In the second post of this series, we will explore whether the DCOPF optimization problem solves faster on GPU-enabled EC2 instances. We will compare the performance of GPU instances with CPU-only instances and discuss the factors that affect the performance gains, if any.
