Understanding SSAS memory settings with real examples

If you have worked with SSAS and been responsible for it from the server side, you are most likely familiar with its memory settings. In an ideal world, your server has plenty of physical memory, so you don’t have to worry about these settings. In the real world, though, you may face one or more of the following at the same time: not enough memory on your server; bad database (model) designs that do not follow best practices and therefore take more memory than they should (split those datetime columns!); heavy queries (I’ll have one example to show you later); and a large concurrent user volume that you have no control over. In these cases, you can encounter memory pressure on the server, and it is beneficial to understand how to interpret these memory settings.

In this post, I am going to run several experiments: change the default SSAS memory settings, run queries and process models, use Perfmon to collect statistics from the server, and then graph the results in Excel. For each experiment, I will show you one or two snapshots from Perfmon and the recorded stats in Excel. You can find the corresponding Excel file at the end of each experiment.

SSAS memory settings

You can see the SSAS memory settings by using SQL Server Management Studio and opening the server properties. The following are the default settings.

For all the limits, if the number is under 100, it is a percentage of the total physical memory (RAM) on the server; values of 100 or more are interpreted as an absolute number of bytes. When HardMemoryLimit is set to 0, its value is halfway between TotalMemoryLimit and the total physical memory on the server. You can find a full description of these settings here.
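To make that arithmetic concrete, here is a minimal Python sketch of the interpretation, assuming a 16GB server (the size of my test server, as you will see later) and the default values of 65 (Low), 80 (Total), and 0 (Hard):

```python
def effective_limit_kb(setting, total_ram_kb):
    # Settings under 100 are a percentage of physical RAM; larger values
    # are absolute (bytes in the server config; simplified to KB here).
    if setting < 100:
        return total_ram_kb * setting / 100
    return setting

total_ram_kb = 16 * 1024 * 1024  # 16 GB

low   = effective_limit_kb(65, total_ram_kb)
total = effective_limit_kb(80, total_ram_kb)
# HardMemoryLimit = 0 means halfway between the Total limit and total RAM.
hard  = (total + total_ram_kb) / 2

print(f"Low:   {low:>13,.0f} KB")    # 10,905,190 KB
print(f"Total: {total:>13,.0f} KB")  # 13,421,773 KB
print(f"Hard:  {hard:>13,.0f} KB")   # 15,099,494 KB
```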

The easy to understand memory settings

The Low, Total, and Hard limits have always been easy to grasp, especially if you are working with SSAS Multidimensional (yes, those still exist, even if only in spirit for some 🙂).

If you were to graph these limits along with memory consumption, you would see something like this. We will see a real example of this later.

If memory consumption is below the Low limit, everything is fine and memory is free to stay. Once consumption passes the Low limit, a cleaner thread wakes up and tries to clean up memory. At this point the price of memory is no longer zero: it starts at 2 at the Low limit and climbs as high as 1000 when memory consumption reaches the Total limit. The higher the memory pressure, the more aggressive the cleaner gets. Once memory consumption reaches the Hard limit, all connections/sessions are closed and queries are cancelled with an out-of-memory error.
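The exact shape of the cleaner’s pricing curve is not documented in detail, so the sketch below simply assumes a linear ramp between the two endpoints described above (a price of 2 at the Low limit rising to 1000 at the Total limit); treat it as an illustration, not the real algorithm:

```python
def cleaner_price(usage_kb, low_kb, total_kb):
    # Assumed linear ramp between the Low and Total limits, matching the
    # endpoints described above; the real cleaner's curve may differ.
    if usage_kb <= low_kb:
        return 0      # below the Low limit, memory is free
    if usage_kb >= total_kb:
        return 1000   # maximum pressure
    fraction = (usage_kb - low_kb) / (total_kb - low_kb)
    return 2 + fraction * (1000 - 2)

# With the 16GB defaults computed earlier:
print(cleaner_price(12_000_000, 10_905_190, 13_421_773))  # ~436
```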

The harder to understand memory settings

SSAS Tabular introduced two more settings.

VertiPaqPagingPolicy

When it is set to 0, VertiPaq data structures must fit in physical memory and cannot be paged. In this mode, the VertiPaqMemoryLimit setting does not play any role.

When it is set to 1 (the default value), VertiPaq data structures can grow past physical memory, as far as the Windows page file allows them. In this mode, VertiPaqMemoryLimit comes into play. All the other limits still apply as well.

VertiPaqMemoryLimit

When paging is allowed, this is the limit of VertiPaq memory that the cleaner thread considers when calculating the current price of memory. Any VertiPaq memory beyond this limit is not visible to the cleaner.
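In other words, the cleaner prices memory against a capped view of the VertiPaq data. A small illustration of that behavior (the 500,000 KB of non-VertiPaq memory is a made-up number; the VertiPaq figures echo Experiment 2 below):

```python
def memory_visible_to_cleaner(non_vertipaq_kb, vertipaq_kb, vertipaq_limit_kb):
    # The cleaner sees all non-VertiPaq memory but at most
    # VertiPaqMemoryLimit worth of VertiPaq data.
    return non_vertipaq_kb + min(vertipaq_kb, vertipaq_limit_kb)

# With the limit forced down to its 8,192 KB minimum (Experiment 2 below),
# ~720 MB of VertiPaq data contributes almost nothing to the price:
print(memory_visible_to_cleaner(500_000, 739_676, 8_192))  # 508,192 KB
```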

Perfmon

You can use Windows Performance Monitor (Perfmon) to get a lot of information about SSAS. Specifically, the limits can be seen with Perfmon. 

This is what they look like on my server with default settings. This server has 16GB of memory.

In addition to the limits, you have a few more useful counters:

Memory Usage KB

It shows the amount of memory visible to the cleaner thread for calculating the price of memory. Note that while this can closely follow the total amount of memory SSAS is using, it is not always the same or even close to it (I will show you an example of this later). This is just what’s visible to the cleaner. Also, this is not how much the cleaner can clean if it decides to do so; it is what the cleaner uses to set the memory price.

Process(msmdsrv) – Private Bytes

It shows the amount of virtual memory committed to SSAS by the OS. It is up to the OS to decide how much of this lives in physical memory (RAM) vs. the paging file (disk). This counter matches what you see in Task Manager/Resource Monitor under the Commit (KB) column.

When you have plenty of memory on a server, this counter and Memory Usage KB follow each other so closely that if you graph them they appear to fall on the same line. Most of the following examples will show this; there are two examples where it is not the case.

Note: In the following graphs I refer to Process(msmdsrv)\Private KBytes, which is a column I add to the Excel file that Perfmon writes to. I do this so that I can graph it alongside the other counters, which are all in KB.
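If you prefer scripting that step, here is a rough equivalent in Python with pandas; the file and column names are placeholders for whatever Perfmon wrote on your machine (a real Perfmon CSV header also carries the machine name, e.g. \\MYSERVER\Process(msmdsrv)\Private Bytes):

```python
import pandas as pd

df = pd.read_csv("ssas_counters.csv")  # placeholder file name
bytes_col = next(c for c in df.columns
                 if c.endswith(r"Process(msmdsrv)\Private Bytes"))

# Derive the KB column so it can be graphed alongside the other counters.
df[r"Process(msmdsrv)\Private KBytes"] = (
    pd.to_numeric(df[bytes_col], errors="coerce") / 1024
)
df.to_excel("ssas_counters.xlsx", index=False)  # needs openpyxl installed
```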

Test model

I wanted to work with something larger than AdventureWorks, so I downloaded public City of Chicago service request data sets from Kaggle. I imported the spreadsheets directly into a Tabular model on my SSAS server and intentionally did no cleaning/modeling, to make them take as much memory as possible. The resulting model has 5 tables that are not related to each other. Each row in each table represents a service request and has a unique service request number, which is exactly what I was looking for: high cardinality -> large model.

Here are the tables and their row counts:

Table             Row Count
Graffiti          1,027,859
Pot Holes           551,603
Street Lights       470,288
Garbage Carts       392,932
Rodent Baiting      308,632

The model takes about 280 MB of memory. 

Test Query 

Since I don’t have a live query volume, I created an intentionally nonsensical, resource-intensive query that requires the formula engine to spool some data in memory. I use this query to push memory consumption up past all the limits and see how the different counters behave.

Here is the link to the gist: code.

DO NOT RUN A QUERY LIKE THIS QUERY IN PRODUCTION!

This query tries to cross join the Service Request Number columns from all of the above tables (1 million by 500 thousand by 400 thousand by 300 thousand by 300 thousand), which amounts to 3.233571077757377e+28 rows, then find the distinct combinations of the result (which we know is the row count anyway), and finally do a row count on them (redundant but intentional). I got the idea for this query from chapter 14 of the “Tabular Modeling in Microsoft SQL Server Analysis Services” book.
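As a quick sanity check on that number, multiplying the row counts from the table above reproduces it:

```python
import math

row_counts = {
    "Graffiti":       1_027_859,
    "Pot Holes":        551_603,
    "Street Lights":    470_288,
    "Garbage Carts":    392_932,
    "Rodent Baiting":   308_632,
}

total_rows = math.prod(row_counts.values())
print(f"{total_rows:.3e} rows in the full cross join")  # 3.234e+28
```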

Experiments

Experiment 0: Test query with default limits

Memory settings are still set to their default values, as seen previously. The query ran for 14 minutes and 40 seconds before it ate up all the memory SSAS could provide and was killed with the following message:

There’s not enough memory to complete this operation. Please try again later when there may be more memory available.

The following picture shows the details as captured by Perfmon. The primary Y axis shows memory limits/consumption in KB. The secondary Y axis shows the memory price. The lines for Memory Usage KB and Process(msmdsrv)\Private KBytes fall on top of each other and look like one line. You can see how memory consumption goes up (along with the memory price) until the query is killed by SSAS.

Of course, this is an extreme example that is convenient for me to use in place of 1000 users running small queries at the same time. You can see behavior like this on a server with a large user count and not enough memory.

The following snapshot is from Perfmon a few seconds before the query got the out-of-memory error. Notice the price of memory is at 1000. Memory Usage KB is past the High limit but has not exceeded the Hard limit yet.

Also notice that the VertiPaq memory (VertiPaq Nonpaged KB + VertiPaq Paged KB) did not increase, since I only ran a query (vs. processing the model).

You can see the full details here:

Experiment 1: Test query with lower limits

Similar to experiment 0, but this time the query is killed sooner: in 22 seconds, as expected.

Memory hits the limits sooner than in the previous experiment.

Similar to the previous experiment, the VertiPaq data does not rise, and Memory Usage KB and Process(msmdsrv)\Private Bytes fall on the same line.

You can see the full details here:

Experiment 2: Processing a model with VertiPaqMemoryLimit = 0

At this point you are probably thinking that you can safely monitor Cleaner Current Price to identify memory pressure. I am going to show you that this does not always hold. This is where Memory Usage KB and Process(msmdsrv)\Private Bytes are going to deviate from each other.

The test database is only 280MB. I have two copies of it plus AdventureWorks on the server. The machine has 16GB of memory. With these settings, a process operation is not going to increase VertiPaq memory usage high enough to see anything in the graphs. In order to push VertiPaq memory usage past the limits, I drop the memory limits! This is just for simulation; you should NOT try this on a production server.

In most cases the default settings are sufficient. In some cases you should consider changing them, for example if you have other memory-intensive processes running on the same machine (think SQL Server).

Here are the new reduced settings:

which amount to the following values as seen in Perfmon:

Take a look at the VertiPaq memory usage, which is the sum VertiPaq Nonpaged KB + VertiPaq Paged KB = 289,272 KB.

I have set VertiPaqMemoryLimit to zero, which SSAS raises to its minimum of 8,192 KB (as seen in the Memory Limit VertiPaq KB counter); that is much smaller than the current VertiPaq memory usage. Obviously this is not a limit on how much the VertiPaq data can grow. By setting the limit to 8,192 KB, I have essentially told the cleaner thread to include only 8,192 KB of VertiPaq data when calculating the price of memory and ignore the rest.

Now if I issue a process full command on the test model, the amount of VertiPaq data usage will temporarily increase. This is a screenshot of Perfmon during the processing:

Note that Memory Usage KB increased by 207,164 KB, while VertiPaq data usage increased to 739,676 KB, which is 450,404 KB more than before. In other words, not all of the VertiPaq data usage is being reported in Memory Usage KB.

Memory Usage KB did not go over Memory Limit Low KB, and the price of memory stayed at zero the entire time. Of course, as the following picture shows, Process(msmdsrv)\Private Bytes (SSAS memory) went up and beyond the High limit. If this High limit reflected the actual total physical memory on your server, you would see heavy paging, performance would suffer, and the service/server might crash.


Processing finished in 59 seconds.

In this graph, you can see a separate line for each of Memory Usage KB (blue line) and Process(msmdsrv)\Private KBytes (purple line). The purple line goes well beyond the Total limit, but the cleaner is doing exactly as it was told, which is to ignore the VertiPaq data.

You can see the full details here:

Experiment 3: Processing a model with VertiPaqMemoryLimit = 2

This time I am going to increase the VertiPaqMemoryLimit by a small amount. Here are the limits:

which amount to the following numbers:

VertiPaq memory usage is 289,272 KB, the same as before.

During processing, the VertiPaq memory usage increases. The following screenshot from Perfmon was taken halfway through the process.

At this point, VertiPaq memory usage is up to 603,656 KB. This time, the price of memory has increased because Memory Usage KB includes up to 332,643 KB of VertiPaq data, and that is enough to push it past the Low limit and soon the High limit.

Processing finished in 85 seconds.

In this graph, you can see that the price of memory increases during the processing operation. You still see two separate lines for Memory Usage KB and Process(msmdsrv)\Private Bytes. If the Total limit were actually based on 90% of the total physical memory on the server, you would see heavy paging and poor performance.

 You can see the full details here:

Experiment 4: Processing a model with VertiPaqMemoryLimit = 8

This time I increased VertiPaqMemoryLimit to 8.


which amounts to the following values in Perfmon:

VertiPaqMemoryLimit is about 1GB more than in the previous experiment. The following screenshot was taken during processing of the model.


Processing finished in 61 seconds.

You can see full detail here:

Experiment 5: Processing a model with default settings

This time I am going to set the memory settings back to their default values.

which amount to the following values in Perfmon:

Notice how VertiPaq memory is split between VertiPaq Paged KB and VertiPaq Nonpaged KB. The memory counted under VertiPaq Paged KB is not necessarily paged out to disk; it is memory that can be paged to disk if needed. The memory under VertiPaq Nonpaged KB is locked in physical memory.

Processing the model with these settings does not put any pressure on the server as expected. The following is a screenshot from Perfmon during processing of the test model.

Notice how both VertiPaq Paged KB and VertiPaq Nonpaged KB values increase during the process. This is in contrast with the next experiment, where all of the increase will show up in VertiPaq Nonpaged KB.

You can see full details here:

Experiment 6: Processing a model with VertiPaqPagingPolicy = 0

Finally, I want to test what happens with VertiPaqPagingPolicy set to zero. In particular, I am interested to see whether VertiPaq memory can grow past VertiPaqMemoryLimit during processing. Based on the definitions of these settings, it should be able to; however, based on the name, I had initially thought that when VertiPaqPagingPolicy is set to 0, VertiPaqMemoryLimit becomes the limit on how much VertiPaq memory can grow.

With VertiPaqPagingPolicy set to 0 and all other settings at their default values, this is how the counters look in Perfmon.


It is interesting that all of the VertiPaq memory shows up under VertiPaq Nonpaged KB this time.

Here are the results when I process the model:

This time, Memory Usage KB (and Process(msmdsrv)\Private KBytes) increased as expected during the processing operation, and total VertiPaq memory grew past VertiPaqMemoryLimit. This confirms that VertiPaqMemoryLimit is not the limit on how much VertiPaq memory can grow (at least during processing)!

Note: Even though VertiPaqPagingPolicy is not exposed through Perfmon, you can infer its value from the VertiPaq Paged KB and VertiPaq Nonpaged KB values. A VertiPaq Paged KB of zero means all VertiPaq data is locked in memory and cannot be paged, which means VertiPaqPagingPolicy is set to zero.
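Based on that note, here is a tiny heuristic sketch; the counter values in the usage lines are illustrative, not measurements from my server:

```python
def likely_vertipaq_paging_policy(vertipaq_paged_kb, vertipaq_nonpaged_kb):
    # With VertiPaqPagingPolicy = 0, all VertiPaq data is locked in memory,
    # so VertiPaq Paged KB reads zero while Nonpaged carries everything.
    if vertipaq_nonpaged_kb > 0 and vertipaq_paged_kb == 0:
        return 0
    return 1

print(likely_vertipaq_paging_policy(0, 289_272))       # 0: paging disabled
print(likely_vertipaq_paging_policy(230_000, 59_272))  # 1: paging allowed
```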

You can see full details here:

Conclusion

Since Cleaner Current Price has a value between 0 and 1000 regardless of the other memory settings, it is my favorite memory counter to look at on any server. In most cases, if the value stays under 10, memory usage is in a good state. Any value that stays over 10 for a sustained period deserves more investigation using all the other counters I mentioned in this post. A value well above 10 and close to 1000 definitely shows memory pressure.

Having said this, keep in mind that, as I showed in this post, Cleaner Current Price is based on what the cleaner thread sees, which is not always the total amount of memory SSAS is using. The cleaner does not see any VertiPaq memory in excess of VertiPaqMemoryLimit. If Cleaner Current Price stays low and you are still having performance issues, you should get a full picture of memory usage by monitoring the following counters at a minimum:

Category                Counter Name
MSAS14 *                Cleaner Current Price
MSAS14                  Memory Limit Hard KB
MSAS14                  Memory Limit High KB
MSAS14                  Memory Limit Low KB
MSAS14                  Memory Limit VertiPaq KB
MSAS14                  Memory Usage KB
MSAS14                  VertiPaq Nonpaged KB
MSAS14                  VertiPaq Paged KB
Process (msmdsrv) **    Private Bytes ***
Memory                  Available KBytes
Memory                  Page Faults/sec

* The counters for the default instance of SSAS show up under MSASxx categories, where xx is a number corresponding to the version of SSAS you are using. I used SSAS 2017, so the counters were under MSAS14. For SSAS 2019, the counters are under MSAS15; for SSAS 2016, they are under MSAS13.

** The Process category can be filtered to a particular process. To filter the Process counters to SSAS, choose msmdsrv as the instance.

*** In the Excel files I shared, I added a column that converts Process(msmdsrv)\Private Bytes to kilobytes so that I can graph it alongside the other counters.
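If you would rather collect these counters from the command line than through the Perfmon UI, something like the following works with the built-in typeperf tool. The MSAS14 counter paths are my assumption for a default SSAS 2017 instance; confirm the exact names in Perfmon on your server first:

```python
import subprocess

# Counter paths assumed for a default SSAS 2017 instance (MSAS14);
# verify the exact names in Perfmon before relying on them.
counters = [
    r"\MSAS14:Memory\Cleaner Current Price",
    r"\MSAS14:Memory\Memory Limit Hard KB",
    r"\MSAS14:Memory\Memory Limit High KB",
    r"\MSAS14:Memory\Memory Limit Low KB",
    r"\MSAS14:Memory\Memory Limit VertiPaq KB",
    r"\MSAS14:Memory\Memory Usage KB",
    r"\MSAS14:Memory\VertiPaq Nonpaged KB",
    r"\MSAS14:Memory\VertiPaq Paged KB",
    r"\Process(msmdsrv)\Private Bytes",
    r"\Memory\Available KBytes",
    r"\Memory\Page Faults/sec",
]

# Sample every 5 seconds, 120 times (10 minutes), into a CSV for Excel.
subprocess.run(
    ["typeperf", *counters, "-si", "5", "-sc", "120",
     "-o", "ssas_counters.csv", "-f", "CSV", "-y"],
    check=True,
)
```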

Finally

If you want to learn how to use Perfmon to add the SSAS counters I used in this post, watch this recording of my session. Halfway through the video, I show where you can find the counters and how to easily save them to Excel/SQL Server.

