By Nadeem Akhter


Recently I had a chance to work on a system that made literally millions of calls to the database. I wanted to know which queries were being called most frequently and which ones were taking most of the CPU cycles. I tried using SQL Profiler, but it made the situation worse due to its own overhead. I needed to figure out how to do this without further taxing the system.

Another issue I uncovered was that every time I wanted to profile something, I had to go through the same manual process over and over again: pick certain events, choose a few columns, and apply a couple of filters. That is a painful way of doing a monotonous task.


To solve this problem, I created a lightweight utility that simply subscribes to SQL Profiler events and groups them by object. It then shows statistics such as the name of the object, how many times it was called, and the total duration of all the calls. Once I have this information, I can quickly sort it in different ways and know which statements (if I am using dynamic queries) or procedures are taking most of the CPU cycles. These are the ones that need my attention!
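The grouping idea can be sketched in a few lines. This is only an illustration of the aggregation step, not the utility itself (which is not shown in this post); the event shape, object names, and function names below are assumptions:

```python
from collections import defaultdict

class ObjectStats:
    """Aggregated statistics for one database object (procedure or statement)."""
    def __init__(self):
        self.call_count = 0
        self.total_duration_ms = 0

def aggregate(events):
    """Group profiler events by object name, totaling calls and duration.

    Each event here is a (object_name, duration_ms) pair; in a real tracer
    these would come from completed-call events delivered by the profiler.
    """
    stats = defaultdict(ObjectStats)
    for name, duration_ms in events:
        s = stats[name]
        s.call_count += 1
        s.total_duration_ms += duration_ms
    return stats

def heaviest(stats, n=5):
    """Sort objects by total duration, most expensive first."""
    return sorted(stats.items(),
                  key=lambda kv: kv[1].total_duration_ms,
                  reverse=True)[:n]
```

Sorting the aggregated rows by total duration (or by call count) is what surfaces the handful of objects worth looking at first.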

Here is a screenshot:

I mentioned that there could be millions of calls, and you don’t want to capture each and every one of them. Otherwise there will be too much data, and it will be difficult to analyze as well. What I have done is capture only the top twenty most time-consuming queries for each object. This way you can easily analyze them, find and fix issues one at a time, and repeat the process.
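Keeping only the twenty most expensive calls per object can be done with a small bounded min-heap, so memory stays flat no matter how many events arrive. This is a sketch of that capping technique under my own assumptions (the post doesn’t show how the utility implements it); the cap of 20 mirrors the default described above:

```python
import heapq

class TopCalls:
    """Keeps only the N most time-consuming calls seen so far."""
    def __init__(self, limit=20):
        self.limit = limit
        self._heap = []      # min-heap of (duration_ms, seq, query_text)
        self._seq = 0        # tie-breaker so query texts are never compared

    def add(self, duration_ms, query_text):
        item = (duration_ms, self._seq, query_text)
        self._seq += 1
        if len(self._heap) < self.limit:
            heapq.heappush(self._heap, item)
        elif duration_ms > self._heap[0][0]:
            # New call is slower than the cheapest kept call: replace it.
            heapq.heapreplace(self._heap, item)

    def slowest_first(self):
        """Return the kept (duration_ms, query_text) pairs, slowest first."""
        return [(d, text) for d, _, text in sorted(self._heap, reverse=True)]
```

Because the heap never grows past the limit, each incoming call costs at most O(log 20), which is what makes it safe to leave running against a system doing millions of calls.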

Here is the updated screenshot of the utility. You can export/import the data and increase/decrease the number of rows you want to show.
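The export/import feature is essentially serializing the aggregated rows to a file and reading them back. A plain CSV round-trip is one simple way to do it; the column names here are my assumptions, since the post doesn’t show the utility’s actual file format:

```python
import csv

COLUMNS = ["object_name", "call_count", "total_duration_ms"]

def export_stats(rows, fp):
    """Write aggregated rows (dicts keyed by COLUMNS) to a CSV file object."""
    writer = csv.DictWriter(fp, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)

def import_stats(fp):
    """Read rows back from CSV, restoring the numeric columns."""
    rows = []
    for row in csv.DictReader(fp):
        row["call_count"] = int(row["call_count"])
        row["total_duration_ms"] = int(row["total_duration_ms"])
        rows.append(row)
    return rows
```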

What’s Next?

I hope you like the blog. If you need assistance with a similar issue, comment below and I’ll be happy to help. Next time we will look at some common mistakes and some easy ways to fix those big problems, so stay tuned!

Read more about SQL here!