Recently we needed to analyse the queries made by users on Azure Analysis Services and to cross-reference that data with Azure AS metrics: for example, to see exactly which queries are causing high QPU or memory usage, and who made them from which application.
Currently Azure AS allows you to configure an Extended Events session to collect events from your Analysis Services database:
But there’s no easy way to export or save that data to do some further analysis. You can only watch live data and it’s not very user friendly:
We tried to use the good old ASTrace, but it's not compatible with Azure Analysis Services, and it's not good practice anyway because it basically creates a Profiler session, a feature that will be deprecated soon.
Because we desperately needed to analyse the user queries to identify bottlenecks, my amazing BI team at DevScope built a great tool called "Azure-AS-Tracer" that allows you to point at an Analysis Services instance, instantly start collecting the events you want, and store them in the file system in JSONL format.
You can download it or contribute to it on GitHub: https://github.com/DevScope/Azure-AS-Tracer
It's very simple to use: just download the binaries and change the following parameters in the config file 'AzureASTrace.exe.config':
- The connection string to the Analysis Services instance you want to monitor
- The path to the XEvents trace template used to create the monitoring session on the Analysis Services instance
- The path to the output folder that will hold the JSONL files
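As a rough sketch, the relevant section of 'AzureASTrace.exe.config' might look something like this (the key names and values below are illustrative, not necessarily the ones the tool ships with — check the actual config file):

```xml
<configuration>
  <appSettings>
    <!-- Illustrative key names; confirm against the shipped config file -->
    <add key="ConnectionString"
         value="Provider=MSOLAP;Data Source=asazure://westeurope.asazure.windows.net/yourserver" />
    <add key="TraceTemplateFilePath" value=".\Templates\DefaultTrace.xml" />
    <add key="OutputFolderPath" value=".\Output" />
  </appSettings>
</configuration>
```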
After that you have two options:
- Run AzureASTracer as a console application, by simply executing AzureASTrace.exe
- Run AzureASTracer as a Windows service, by running 'setup.install.bat' and starting the service
Either way, while running, the events will be saved to the output folder. AzureASTrace will create a file for every event type subscribed and group the files by day:
Now you can analyze those events in Power BI (coming soon) very easily…
One of the big advantages of Azure Analysis Services is the ability to pause/resume and scale up/down as needed. This allows you to pay only for what you use and greatly reduce costs.
The Azure Analysis Services team released a PowerShell module, "AzureRM.AnalysisServices", with cmdlets to manage your Azure Analysis Services resources, and they couldn't be easier to use:
- Get-AzureRmAnalysisServicesServer – to get your server metadata and current status
- Suspend-AzureRmAnalysisServicesServer – to suspend a server
- Resume-AzureRmAnalysisServicesServer – to resume a server
- Set-AzureRmAnalysisServicesServer – to update a server, e.g. change the SKU
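For example, a minimal pause/resume/scale round trip with these cmdlets might look like this (the resource group, server name, and SKU below are placeholders):

```powershell
# Sign in and check the server's current state
Login-AzureRmAccount
$server = Get-AzureRmAnalysisServicesServer -ResourceGroupName "MyResourceGroup" -Name "myasserver"
$server.State

# Pause it outside business hours...
Suspend-AzureRmAnalysisServicesServer -ResourceGroupName "MyResourceGroup" -Name "myasserver"

# ...then resume it and scale to a bigger SKU for peak hours
Resume-AzureRmAnalysisServicesServer -ResourceGroupName "MyResourceGroup" -Name "myasserver"
Set-AzureRmAnalysisServicesServer -ResourceGroupName "MyResourceGroup" -Name "myasserver" -Sku "S1"
```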
More details here.
But this is only effective if we somehow automate the operation; it's not feasible for someone on the team or at the customer to be actively pausing/resuming or scaling the instance up and down by hand.
With that in mind, we built a very simple PowerShell script where you configure at which times and on which days the Azure AS server should be on, and with which SKU.
Download the full script here.
The script is configured by JSON metadata:
The above metadata will configure Azure AS for:
- 8 AM to 6 PM (peak hours)
- 6 PM to 12 AM (off-peak)
- 8 AM to 12 AM
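The exact schema depends on the script, but the metadata could plausibly look something like this (the field names and SKUs below are illustrative, not the script's actual schema — check the downloaded script for the real format):

```json
{
  "Weekdays": [
    { "From": "08:00", "To": "18:00", "Sku": "S1" },
    { "From": "18:00", "To": "00:00", "Sku": "S0" }
  ],
  "Weekends": [
    { "From": "08:00", "To": "00:00", "Sku": "S0" }
  ]
}
```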
The PowerShell script has the following parameters:
- The name of the Azure resource group where your Azure AS server is deployed
- The name of the Azure AS server
- The JSON metadata config string
- The path to an Azure profile stored locally using the "Save-AzureRmContext" cmdlet (useful for testing the script locally)
- The name of the Azure connection, if you want to run this script in an Azure Automation runbook
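Assuming the script file is named 'AzureASAutoScale.ps1' (an illustrative name — use whatever the downloaded file is actually called, and check its real parameter names), a local test run could look like:

```powershell
# Save a local Azure context once, so the script can authenticate without prompting
Save-AzureRmContext -Path "C:\temp\azureprofile.json"

# Run the script locally against that profile (parameter names are illustrative)
.\AzureASAutoScale.ps1 `
    -resourceGroupName "MyResourceGroup" `
    -serverName "myasserver" `
    -configStr (Get-Content ".\config.json" -Raw) `
    -azureProfilePath "C:\temp\azureprofile.json"
```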
Most of you will probably want to run this script in a PowerShell runbook in Azure Automation; learn how to set up Azure Automation here.
Next, you will need to register the module "AzureRM.AnalysisServices" as an asset of your Automation account:
After that just create a new PowerShell runbook:
Paste the script and remember to set the parameter -azureRunAsConnectionName:
Publish the script and create a schedule:
That's it! You now have your Azure AS automatically pausing/resuming and scaling up/down using the configuration you defined.
Now just lay back and measure the savings at the end of the month!