Favorite DMVs: sys.dm_exec_sessions

Here’s a link to the Microsoft documentation on this DMV.

This is part 1 in a series I’m writing about my favorite Dynamic Management Views in SQL Server. I’m going to talk about how they’re useful, and then build on each post in the series by showing how they’re connected.

Why it’s useful

Well, as the name implies, this DMV shows all the current sessions connected to your system. When I think of current sessions, I often think of queries that users are running on the server, but there’s also system activity. When I’m troubleshooting, sometimes I want to see both user and system activity, and this DMV gives me visibility into both.

Useful columns to query

login_name: Find the user who’s executing the session on your server

host_name: Find the computer that’s hosting the session

reads, writes, and logical_reads: Useful for judging the total activity of one session

is_user_process: 0 for system processes, 1 for user processes

status: There are a few different statuses, but the most important ones are sleeping and running

program_name: The name of the program used to create this session. See the example below

Example query using this DMV

SELECT s.host_name, s.login_name, s.writes, s.reads, s.logical_reads, s.program_name
FROM sys.dm_exec_sessions AS s
WHERE s.is_user_process = 1
  AND s.status = 'running'
ORDER BY s.writes DESC;
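
Here’s one more variation I like (a quick sketch of mine, not part of the original query): since is_user_process cleanly splits system and user activity, you can count how many sessions fall into each bucket.

SELECT is_user_process, COUNT(*) AS session_count
FROM sys.dm_exec_sessions
GROUP BY is_user_process;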


Is SQL Server using all of your server’s memory?

I’ve worked with a lot of great system administrators and developers. There’s one unusual thing about working with databases that’s often confusing for non-DBAs. This post is to help in those situations.

When an application is using 80-95% of a system’s memory, that’s concerning

Yes, even if it’s Chrome. Close some Chrome tabs please!

However, in the database world, your DBAs will tell you that it’s fine for SQL Server to consume lots of memory. They say that because databases want lots of memory by design. Databases use memory to store and retrieve data fast!

Your database administrators have control of how much memory is used by SQL Server

It’s right here, in SQL Server Management Studio. Right-click on the server name, go to Properties, and look at the Memory page.
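
If you’d rather script it than click through the dialog, the same setting can be read and changed with sp_configure. A minimal sketch (the 10,000 MB value below just matches this demo, it’s not a recommendation):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Check the current max server memory setting
EXEC sp_configure 'max server memory (MB)';

-- Change it (the value is in megabytes)
EXEC sp_configure 'max server memory (MB)', 10000;
RECONFIGURE;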


Is 10,000 MB the default setting?

No, I set that up for a very small server specifically for this demo. Don’t pick 10,000 just because I did.

However even in the latest version of SQL Server, the default setting is 2,147,483,647 megabytes. That does mean that SQL Server could eventually use up your entire system’s memory. Bummer.

What to do when SQL Server is using a lot of server memory

If you’re still worried about how much memory SQL Server is using, check that setting. If your company has database administrators, talk to them. And check the link for the default setting; there’s some general advice in there.
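
If you want to see how much memory the SQL Server process is actually holding before you worry, one option (my addition, not from the original post) is the sys.dm_os_process_memory DMV:

SELECT physical_memory_in_use_kb / 1024 AS physical_memory_in_use_mb,
       memory_utilization_percentage
FROM sys.dm_os_process_memory;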

End of the post

The main moral of this story is: please don’t reboot your servers just because SQL Server is using a lot of memory. It will start consuming memory again as soon as it’s back up, and the process will repeat.

TempDB space isn’t free real estate

Using temp tables in SQL Server often seems like a great way to improve performance. But when everyone wants the same piece of land, you end up with contention (or, in the real world, very expensive houses).

TempDB contention example

To simulate a lot of sessions running at once, trying to use TempDB, I’m going to use the free SQLQueryStress tool written by Adam Machanic (Thanks Adam!).

Now I need a query that will consume a lot of TempDB space.

SELECT *
INTO #messages
FROM sys.messages;

SELECT severity
FROM #messages
WHERE text = 'String or binary data would be truncated.';

I’ll run this query with 200 threads because I want a lot of contention.


Once I kick off that process, here’s a snippet of sp_WhoIsActive.


Repeat 200 times

What’s the issue?

Well, the issue is PAGELATCH_UP waits on the TempDB data files; in that screenshot, it’s files 5 and 9. There’s a lot of writing on the internet about the best way to solve TempDB contention. If you only have one or two files in TempDB but you have lots of CPU cores, adding more TempDB files might help.
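
To see how many data files your TempDB has right now, here’s a quick sketch (my addition to the post):

SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files;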

What’s the solution?

In this example, we could filter when we load into the temp table (or not even use a temp table at all). That’s one of my proposed solutions.

Once you have decided on the right number of TempDB files, you might still see this issue. That’s why I picked an example that searches extremely inefficiently. I want to emphasize that tuning your queries will often provide much bigger benefits than trying to use administration to fix bad T-SQL.

Let’s do two things to the query from earlier: skip the temp table entirely, and select just the severity column instead of SELECT *. Both of these improvements will reduce the total amount of resources used.

SELECT severity
FROM sys.messages
WHERE text = 'String or binary data would be truncated.'

Time to take another look at sp_WhoIsActive.

What’s happening now? They’re still running?

Yes, but instead of waiting for resources, they’re all executing simultaneously.

Thanks for reading, stay tuned!


Don’t ignore the warning signs (in execution plans)

Some things in life we ignore. For example, the “check engine” light. That’s just there as a suggestion, right?

But when you’re performance tuning, you can’t afford to ignore the warning signs. I can’t count the number of times that I’ve found the issue with a query by looking at the warnings.

Enough talk, let’s look at an example

Let’s create a table of Students, with just one column.

CREATE TABLE [dbo].[Students]([StudentName] [varchar](100) NULL)
GO
CREATE CLUSTERED INDEX ix_Students_Name 
ON Students(StudentName)
GO

I need some dummy data to really increase the numbers here, so I’ll load in the names from StackOverflow2010.

INSERT INTO Students(StudentName)
SELECT DisplayName 
FROM StackOverflow2010.dbo.Users

Great! About 300,000 rows were loaded into my new table. Now I need to write a query. How about this query?

SELECT StudentName 
FROM Students
WHERE StudentName = N'Arthur'

35 rows returned! Disclaimer: none of those users are actually me. This post is about the execution plan, so let’s retrieve the execution plan along with the I/O statistics by using SET STATISTICS IO ON.

Table 'Students'. Scan count 1, logical reads 910, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.


Ah, there’s the warning sign on the left there. Here’s the warning it’s giving us.

Type conversion in expression? What does that mean?

Well, because I used the N prefix on my literal ‘Arthur’, I sent the value as nvarchar, while the table’s column is varchar. Since nvarchar has higher data type precedence, when the server received my request it had to convert the entire StudentName column to nvarchar before it could search for the name I sent.

That’s called an Implicit Conversion, and this issue only gets worse as your data grows in size. Often, this issue doesn’t appear overnight. It can exist for weeks or months, just making each query run a little slower. Then in a few years, the query is slow and users start to complain. Since nothing has changed, it’s hard to track down.

Is it worth fixing?

I say absolutely yes. Let’s take a look at this example to see how much it will improve, and then I’ll talk more about the fix in the real world.

SELECT StudentName
FROM Students
WHERE StudentName = 'Arthur'

Table 'Students'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Ah, that’s better. Not only has the warning disappeared, but the index scan in the execution plan changed to an index seek.

What happens in the real world?

The real world is never as easy or simple as a demo environment like this. If your company is international, I’d argue that the correct choice would be to change the data type of the column to nvarchar. That’s a lot harder than the easy fix in my demo.

At this point, when you have a slow query and there’s an implicit conversion that you believe is influencing the execution plan, it’s time to start asking questions. Start by finding your favorite application developer and ask them how they use the column.
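
Before that conversation, it helps to confirm exactly what type the column is today. A quick sketch against this demo’s Students table (my addition; swap in your own table name):

SELECT c.name AS column_name,
       t.name AS type_name,
       c.max_length
FROM sys.columns AS c
JOIN sys.types AS t ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID('dbo.Students');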

More warning sign posts to come

I want to leave an honorable mention for the other warning signs that show up in execution plans, specifically the excessive memory grant warning. I’ll talk about that in the future, stay tuned!





Why NOLOCK could bite you

It’s no secret that a lot of DBAs don’t like the NOLOCK hint. Often, people will discuss the myriad issues it can cause.

Introduction to NOLOCK

If you aren’t familiar with NOLOCK, it’s a query hint that can be applied to each table in a query. The hint tells SQL Server to read that table at the Read Uncommitted isolation level instead of the default Read Committed. In plainer words, the query can now read data that’s still in the process of being written, before it’s committed.
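
For context, here’s a quick sketch (mine, not from the original post) showing that the table hint and the session-level setting are two routes to the same isolation level:

-- Table-level hint
SELECT COUNT(*) FROM Comments WITH (NOLOCK);

-- Session-level equivalent
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(*) FROM Comments;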

The main reason this is bad is that your query can return incorrect data, known as dirty reads. That’s pretty bad.

But let me introduce you to another issue that can occur, which is blocking on the schema, AKA the wait type LCK_M_SCH_S.

What does LCK_M_SCH_S mean?

It means that the current session is waiting to acquire a schema stability (Sch-S) lock, which queries take to guarantee the schema won’t change underneath them. The queue forms when another session needs a schema modification (Sch-M) lock to change an object. You might think that your environment doesn’t change the schema too often, but there are a few scenarios where that kind of lock is required.

Let’s say we have a production incident, and we need to drop an index. Let’s try to do that while there’s a query running on this table (using StackOverflow2010).

SELECT *
FROM Comments WITH (NOLOCK)

On my machine, this query takes almost a minute to run. While this query runs, let’s create an index on this table.

CREATE NONCLUSTERED INDEX 
ix_test ON Comments(Score)

Taking a look at activity using sp_WhoIsActive, we can see that creating the index and running the query are compatible.

But here’s the tricky part: even though we could create the index, we can’t get rid of it now! Let’s try to drop it, and I’ll re-run the same Comments query first.
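
For reference, the drop is just one statement:

DROP INDEX ix_test ON Comments;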

I also start a new “SELECT *” query, but now that a DROP INDEX is waiting, the new query has to wait too. Neither has done any work yet; both are waiting for session 59 to finish.

What’s the solution?

Well, honestly there’s no easy fix once your environment has queries with NOLOCK applied in Production. You can try to avoid this issue by never dropping an index in Production, but what’s to stop another DBA or someone with sysadmin rights from creating an index or two?
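
One small mitigation I’d suggest (my own idea, not something from the demo above): set a lock timeout before running DDL against a busy table, so the statement gives up instead of sitting in the lock queue and blocking every query behind it.

-- Fail after 5 seconds instead of waiting in the lock queue indefinitely
SET LOCK_TIMEOUT 5000;
DROP INDEX ix_test ON Comments;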



One reason for using the Estimated Execution Plan

SQL Server provides performance tuners with two main ways of reviewing query performance: the estimated and actual execution plans. The estimated plan can be retrieved quickly, since the query doesn’t have to be executed. The actual plan requires you to execute the query entirely, but it’s often worth the wait: the estimated plan is only as good as the estimates used to generate it, and as performance tuners, we want to review what actually happened.

So here’s the catch

In the real world, when you’re troubleshooting an issue, waiting for the entire query to execute is painful. If the query takes over 30 seconds, I know I’ll start to get impatient and I’ll get curious about what’s taking so long. This is where the estimated plan comes into play. While we’re waiting for our actual query to execute, let’s talk about one big reason we’d want to retrieve the estimated plan: parameters.

Parameters?

Yes, the parameters that were used to compile the plan. When SQL Server generates an execution plan, it records the parameters that were used. Here’s another tricky part: those might not be the same parameters that you’re using to test the query. Let’s take a look at an example using the Stack Overflow database.

10 second introduction to StackOverflow2010

(Download StackOverflow2010 here)

In the world of Stack Overflow, the Posts table has all the questions and answers from the site. There are a lot of those questions and answers, but there are just a few wiki-type posts.

Here’s my stored procedure to retrieve all the posts of a certain type:

CREATE PROCEDURE [dbo].[QueryPostType] (@PostType INT)
AS
BEGIN

SELECT PostTypeId,
       Tags,
       Body
FROM Posts
WHERE PostTypeId = @PostType

END

Now to actually test this, we need to run the stored procedure once.

EXEC QueryPostType @PostType = 1;

Alright. Executing the query with the parameter for the questions, post type of 1, should put that plan into the plan cache. Let’s take a look at the estimated plan for a different parameter, 3, which is the wiki post type.

EXEC QueryPostType @PostType = 3;

Just request the estimated plan (Display Estimated Execution Plan in Management Studio), right-click on that “SELECT” operator, and look at the Parameter List in its properties. Here’s the useful part:

That’s all I have for this reason and this post.

I just wanted to provide the parameters in defense of estimated plans. This trick becomes really useful if you’re trying to determine if your query is a victim of parameter sniffing, since you can see what parameters were sniffed and placed into cache.
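
And if you’d rather pull those compiled values straight from the plan cache, here’s a sketch using the execution-related DMVs (my addition; the ParameterList element inside the plan XML is where the compiled values live):

SELECT st.text, qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE st.text LIKE '%QueryPostType%';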
