Understanding Microsoft Fabric Capacity: Maximise Compute Power, Minimise Cost

When it comes to Microsoft Fabric, understanding Fabric Capacity is essential to unlocking performance while keeping costs under control. With capacities ranging from F2 to F2048, and workloads behaving differently across Lakehouse and Data Warehouse environments, choosing the right capacity isn’t always straightforward.

In this blog, we’ll explore how Microsoft Fabric Capacity works, how to determine the right size for your environment, and how features like bursting and smoothing can help you get better value from your investment. Microsoft Fabric also comes with a wide range of features that help your organisation manage how data is stored and accessed, including capabilities like Fabric databases.

What is Microsoft Fabric Capacity?

Fabric Capacity is determined by the Stock Keeping Unit (SKU) you purchase. Each SKU grants you a fixed number of Capacity Units (CUs), which act as a dedicated pool of compute resources. These units power everything from queries to data transformations, and they are shared dynamically across workloads.

Instead of assigning fixed resources to specific tasks, Microsoft Fabric allows different workloads to draw from the same pool of compute power. This flexibility in resource allocation ensures your jobs get the performance they need, when they need it.
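To make the SKU-to-compute relationship concrete: the number in an F SKU corresponds to the number of Capacity Units it provides (F2 gives 2 CUs, F64 gives 64 CUs, and so on). The Python sketch below is purely illustrative, not an official calculator, and assumes the hypothetical helper name capacity_units:

```python
# Illustrative sketch: the number in an F SKU corresponds to its Capacity Units (CUs).
# Not an official Microsoft calculator; confirm behaviour and pricing in the Fabric docs.

FABRIC_SKUS = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048]

def capacity_units(sku: str) -> int:
    """Return the CU count for an F SKU, e.g. 'F64' -> 64."""
    if not sku.upper().startswith("F"):
        raise ValueError(f"Expected an F SKU such as 'F64', got {sku!r}")
    cus = int(sku[1:])
    if cus not in FABRIC_SKUS:
        raise ValueError(f"{sku} is not a recognised Fabric F SKU")
    return cus

print(capacity_units("F64"))  # 64 CUs, shared dynamically across all workloads
```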

Does Capacity Mean Fixed Compute Power?

Buying a specific capacity doesn’t mean you’re capped at that level of performance. Traditionally, large batch jobs could consume all the available compute and negatively affect performance for user queries. Microsoft Fabric addresses this issue through a feature known as bursting.

What is Bursting in Microsoft Fabric?

Bursting allows workloads to temporarily access additional compute power beyond your reserved capacity. For example, if you have an F64 capacity, your workload may normally complete in 60 seconds using the standard 64 CUs. With bursting, that same workload might use 256 CUs temporarily and complete in just 15 seconds. This is especially valuable for data-heavy jobs that could otherwise take hours to run.
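A useful way to think about why bursting isn’t simply “more compute, more cost” is in CU-seconds, the product of CUs used and how long they were used for. Using the illustrative F64 figures above (the exact numbers will vary in practice, but the principle holds):

```python
# Illustrative: total compute is CUs multiplied by duration (CU-seconds).
# A burst job consumes roughly the same total, just over a shorter window.

normal_cu_seconds = 64 * 60    # 64 CUs for 60 seconds  -> 3,840 CU-seconds
burst_cu_seconds = 256 * 15    # 256 CUs for 15 seconds -> 3,840 CU-seconds

print(normal_cu_seconds, burst_cu_seconds)  # 3840 3840: same compute, delivered four times faster
```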

Utilising compute resources beyond your reserved capacity would, on its own, risk increasing costs, and that’s where smoothing comes in.

Smoothing in Microsoft Fabric

Smoothing spreads your compute usage over a 24-hour period, helping you manage capacity more effectively.

Most workloads in Fabric are scheduled or background tasks, which makes them ideal for smoothing. This process allows you to size your capacity based on average usage rather than peak usage. As a result, you’re only charged extra if your average usage exceeds your reserved capacity, not for occasional spikes.
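Here is a minimal sketch of that idea, assuming you have hourly CU usage figures to hand (for example exported from the Fabric Capacity Metrics app; the sample numbers below are invented for illustration):

```python
# Illustrative: smoothing means your average usage over the window is what matters,
# not momentary peaks. Sample figures are invented.

hourly_cu_usage = [40, 35, 30, 28, 25, 30, 45, 70, 90, 85, 80, 75,
                   70, 72, 68, 66, 60, 55, 50, 48, 45, 42, 40, 38]  # 24 hours, in CUs

reserved_cus = 64  # F64

peak = max(hourly_cu_usage)
average = sum(hourly_cu_usage) / len(hourly_cu_usage)

print(f"Peak usage:    {peak} CUs (briefly above the reserved {reserved_cus})")
print(f"Average usage: {average:.1f} CUs (what smoothing effectively measures you against)")
print("Overage risk:", average > reserved_cus)  # False here: spikes alone don't trigger extra charges
```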

The Technical Bits: Burst Factor by Workload

Bursting isn’t unlimited, and the limits vary depending on whether you’re using a Data Warehouse or Lakehouse environment.

For Fabric Data Warehouses:

  • F2 can burst up to 32x
  • F4 can burst up to 16x
  • F8 and higher can burst up to 12x

This means that while smaller capacities can burst to higher multiples, they still have practical ceilings. For example, an F2 cannot efficiently handle massive ingestion jobs in a short period of time, even with bursting.
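To make those multiples concrete, here is a small sketch using the burst factors listed above (it assumes the 12x figure applies to F8 and every SKU above it, and the function name is hypothetical):

```python
# Illustrative: effective CU ceiling for Fabric Data Warehouse bursting.
def warehouse_burst_ceiling(sku_cus: int) -> int:
    """Return the maximum CUs a Data Warehouse workload can burst to."""
    if sku_cus == 2:
        factor = 32
    elif sku_cus == 4:
        factor = 16
    else:              # F8 and higher
        factor = 12
    return sku_cus * factor

for cus in (2, 4, 8, 64):
    print(f"F{cus}: up to {warehouse_burst_ceiling(cus)} CUs")
# F2: up to 64 CUs, F4: up to 64 CUs, F8: up to 96 CUs, F64: up to 768 CUs
```

Even though an F2 bursts at 32x, its ceiling is still far below what an F64 can reach, which is why bursting is no substitute for right-sizing.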

For Fabric Lakehouses:

  • Every CU is equal to 2 Spark vCores
  • The burst factor is capped at 3x the base vCores
  • An F64 capacity provides 128 vCores, which can burst up to 384 vCores

Lakehouse bursting is designed more for managing concurrency than single-job performance. It enables multiple workloads to run simultaneously without bottlenecks.
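The same arithmetic expressed as a sketch for the Lakehouse (Spark) side, using the 2 vCores per CU and 3x burst figures above (the function name is our own):

```python
# Illustrative: Spark vCores available to a Fabric Lakehouse for a given SKU.
def lakehouse_vcores(sku_cus: int, burst_factor: int = 3) -> tuple[int, int]:
    """Return (base vCores, maximum burst vCores) for a capacity."""
    base = sku_cus * 2                 # each CU maps to 2 Spark vCores
    return base, base * burst_factor

base, burst = lakehouse_vcores(64)     # F64
print(base, burst)                     # 128 base vCores, up to 384 with bursting
```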

Cost Risks Involved

Bursting and smoothing both help optimise performance and cost, but they also introduce potential risks. If your average compute usage exceeds your reserved capacity, your organisation will begin paying for excess compute on a pay-as-you-go basis. These rates are higher than your reserved capacity pricing, which can lead to unexpected expenses if left unmonitored.
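As a very rough sketch of that exposure, the snippet below estimates the extra spend when average usage sits above reserved capacity. The rates are placeholders, not real Microsoft prices, and actual reserved and pay-as-you-go pricing varies by region and currency:

```python
# Illustrative overage estimate. Rates below are hypothetical placeholders, not real prices.
RESERVED_RATE_PER_CU_HOUR = 0.12   # placeholder
PAYG_RATE_PER_CU_HOUR = 0.18       # placeholder, typically higher than the reserved rate

def monthly_overage_cost(avg_cus_used: float, reserved_cus: int, hours: int = 730) -> float:
    """Estimate extra pay-as-you-go spend when average usage exceeds reserved capacity."""
    excess = max(0.0, avg_cus_used - reserved_cus)
    return excess * PAYG_RATE_PER_CU_HOUR * hours

# Running an average of 72 CUs on an F64 reservation:
print(f"Estimated monthly overage: {monthly_overage_cost(72, 64):.2f} (in whatever currency the rates use)")
```

Monitoring average usage against reserved capacity, rather than reacting to peaks, is what keeps this figure at zero.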

Final Thoughts

Choosing the right capacity is about more than just meeting today’s needs. Done well, it allows an organisation to:

  • Improve performance across workloads
  • Control and reduce infrastructure costs
  • Scale efficiently with changing data demands
  • Deliver faster insights and reporting

Additionally, there are use cases where reserving a larger capacity brings added benefits. For example, if your organisation wants users to view Power BI content without individual Power BI Pro licences, that content must be hosted on at least an F64 capacity.

Fabric Capacity gives you the tools to right-size your compute environment, scale dynamically, and reduce costs over time. But the technical details around capacity units, bursting rules, and usage smoothing require a thoughtful approach. We also covered everything new with Microsoft Fabric in more depth in our FabCon blog series, including real-world insights and new features.

How Simpson Associates Can Help You

Simpson Associates are a Fabric Featured partner with certified expertise in Microsoft Fabric. Working with a partner who understands the nuances of Microsoft Fabric can help you avoid overprovisioning, underperformance, and unexpected costs, while making the most of what this powerful platform has to offer. Find out more about our Fabric Accelerator.

Discover how our skills and specialisations can help you on your data journey. Contact us today for a consultation or reach out to us via our live chat with any questions.

Blog Authors:

Jonathan Oswald, Senior Consultant at Simpson Associates.

Charlotte Hughes, Consultant at Simpson Associates.