Tuesday 14 August 2012

SQL Server Database Performance Analysis: Key Points


Essential for Performance Tuning

Performance tuning of a SQL Server database:
  1. It maximizes the use of system resources so that work is performed as efficiently and as rapidly as possible.
  2. It reduces the time taken to retrieve data from the database.
  3. It improves application performance.
  4. It reduces deadlock occurrences.

Activity Monitor – SQL 2008

  1. Activity Monitor provides a graphical view of recently executed processes, resource waits, data file I/O, and recent expensive queries.
  2. Point-in-time reports can be fetched from Activity Monitor (pause/resume Activity Monitor with a simple right-click).
  3. It is used to get recent expensive queries (right-click a line item to display the full query text or the graphical execution plan).
  4. Processes view (run a Profiler trace on, or kill, a selected process).
  5. Profiler events traced: RPC:Completed, SQL:BatchStarting, and SQL:BatchCompleted, plus Audit Login and Audit Logout.
  6. Underlying DMVs: sys.dm_os_wait_stats, sys.dm_os_waiting_tasks, and sys.dm_tran_locks; see the sketch below.
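
The wait information shown in Activity Monitor can also be queried directly. A minimal sketch against sys.dm_os_wait_stats:

SELECT TOP 10
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms   -- time spent waiting for CPU after the resource became available
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;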

SQL Audit Through Extended Events – SQL 2008

  1. Audit results can be sent to a file or to the event log.
  2. Server-level audit action groups, for example:
A.    FAILED_LOGIN_GROUP, which tracks failed logins.
B.    BACKUP_RESTORE_GROUP, which shows when a database was backed up or restored.
C.    DATABASE_CHANGE_GROUP, which audits when a database is created, altered, or dropped.
  3. Database-level audit action groups, for example:
A.    DATABASE_OBJECT_ACCESS_GROUP, which is raised whenever a CREATE, ALTER, or DROP statement is executed on database objects.
B.    DATABASE_OBJECT_PERMISSION_CHANGE_GROUP, which is raised when GRANT, REVOKE, or DENY is issued for database objects.
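
A minimal sketch of setting up a server audit for failed logins (the audit name and file path are hypothetical):

USE master;
GO
CREATE SERVER AUDIT [FailedLoginAudit]
    TO FILE (FILEPATH = 'C:\Audits\');   -- hypothetical path; the folder must already exist
GO
CREATE SERVER AUDIT SPECIFICATION [FailedLoginAuditSpec]
    FOR SERVER AUDIT [FailedLoginAudit]
    ADD (FAILED_LOGIN_GROUP)
    WITH (STATE = ON);
GO
ALTER SERVER AUDIT [FailedLoginAudit] WITH (STATE = ON);  -- audits are created in a disabled state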

Backup Compression – SQL 2008

  1. Less storage, less disk I/O, and less time.
  2. It can reduce backup file size by between 70% and 85%.
  3. Higher processor utilization may result from the additional compression processing; see the sketch below.
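
A minimal sketch of a compressed backup (the database name and path are hypothetical):

BACKUP DATABASE [MyDB]                 -- hypothetical database name
TO DISK = 'C:\Backups\MyDB.bak'        -- hypothetical path
WITH COMPRESSION, STATS = 10;          -- STATS prints progress every 10%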
  
Transaction Log Growth

Steps for a quick way to shrink transaction log growth (see the sketch after the steps):
1.   Change the database recovery model from Full to Simple.

2.   Shrink the log file (under the Simple model, a checkpoint truncates the inactive portion of the log first).

3.   Switch back to the Full recovery model and take a full backup of the database to restart the log backup chain.
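
A minimal sketch of the sequence above (database and logical log file names are hypothetical):

ALTER DATABASE [MyDB] SET RECOVERY SIMPLE;        -- hypothetical database name
CHECKPOINT;                                       -- truncates the inactive portion of the log
DBCC SHRINKFILE ('MyDB_log', 1);                  -- hypothetical logical log name; target size 1 MB
ALTER DATABASE [MyDB] SET RECOVERY FULL;
BACKUP DATABASE [MyDB] TO DISK = 'C:\Backups\MyDB.bak';   -- restarts the log backup chain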

Transaction log process in the Simple Recovery Model:

Checkpoint  >>  Truncate the inactive portion of the T-Log  >>  Only the active portion remains


TRUNCATE instead of DELETE

  1. TRUNCATE can be used instead of a DELETE statement when all rows of a very large table must be removed.
  2. TRUNCATE is a minimally logged operation; only the page deallocation details are logged.
  3. Statistics need to be updated manually afterwards.
  4. Foreign key references have to be removed before issuing the TRUNCATE statement (see the sketch below).
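
A minimal sketch (the table name is hypothetical):

TRUNCATE TABLE dbo.StagingOrders;       -- fails if the table is referenced by a foreign key
UPDATE STATISTICS dbo.StagingOrders;    -- statistics are not refreshed automatically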

UMS (User Mode Scheduler)

  1. SQL Server’s own internal thread scheduler.
  2. It is used to identify CPU bottlenecks.
  3. It plays a role similar to the System Object: Processor Queue Length counter in Performance Monitor.
  4. It focuses only on SQL Server threads, not on all threads running on the server.
  5. DBCC SQLPERF (UMSSTATS) reports per-scheduler statistics.
  6. For every CPU, a scheduler is assigned, identified by a number starting from zero (0).
  7. Key attributes are given below; a DMV-based sketch follows this list.
    1. num users    -- number of SQL threads for a specific scheduler.
    2. num runnable -- scheduler queue length, i.e. the number of threads waiting to run (a value consistently greater than 0 indicates a CPU bottleneck).
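
On SQL Server 2005 and later, the same per-scheduler information is exposed through the sys.dm_os_schedulers DMV. A minimal sketch:

SELECT scheduler_id,
       current_tasks_count,    -- roughly corresponds to "num users"
       runnable_tasks_count    -- roughly corresponds to "num runnable"; consistently > 0 suggests CPU pressure
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255;      -- exclude hidden/internal schedulers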

I/O Activity – DB File

1.   fn_virtualfilestats is a table-valued function that provides statistical information on the I/O activity of a specific database file.
>> SELECT * FROM ::fn_virtualfilestats(dbid, fileid)
1.  NumberReads: The number of physical reads made against this file since the last time SQL Server was restarted.
2.  NumberWrites: The number of physical writes made against this file since the last time SQL Server was restarted.
3.  IoStallMS: The total amount of time that users have waited for I/Os to complete for this file (in milliseconds).
>>  If (IoStallMS / (NumberReads + NumberWrites)) > 20 ms, it indicates that I/O to that file (for example, the transaction log) is becoming a bottleneck; see the sketch below.
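
A minimal sketch computing the average stall per I/O for every file of the current database (passing NULL for the fileid parameter returns all files):

SELECT DB_NAME(DbId) AS database_name,
       FileId,
       1.0 * IoStallMS / NULLIF(NumberReads + NumberWrites, 0) AS avg_stall_ms  -- NULLIF avoids divide-by-zero
FROM ::fn_virtualfilestats(DB_ID(), NULL);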


Memory (RAM) availability

A quick and dirty way to check is Task Manager (Performance tab):
  1. If the “Total” under “Commit Charge (K)” is greater than the “Total” under “Physical Memory (K)”, then the server does not have enough RAM to run efficiently, and this may cause excessive paging.
>>   “Available Physical Memory (K)” should be greater than 4 MB.


Disk I/O

1.   It can be found by checking the total “I/O Read Bytes” and “I/O Write Bytes” for the sqlservr.exe process in Task Manager.
2.   RAID 5 is more efficient for reads.
3.   RAID 10 is more efficient for writes.
4.   Case: a SQL Server has 415,006,801,908 I/O bytes read and 204,669,746,458 bytes written, i.e. about one write for every two reads. In this case, RAID 5 is probably a good compromise for performance.

Before Experimenting with Performance

  1. It is recommended to run the statements below before experimenting with SQL Server performance, so that each test starts from a cold cache (see the sketch after this list).
a)   CHECKPOINT – writes dirty pages to disk so that all buffers in the data cache are clean.
b)   DBCC DROPCLEANBUFFERS – removes the clean buffers from the data cache.
c)   DBCC FREEPROCCACHE – clears the entire plan cache for the whole server/instance.
d)   DBCC FLUSHPROCINDB – clears the plan cache for a single database (undocumented; takes a database ID).
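
A minimal sketch of the full sequence (run only on a test server; the database ID 5 is hypothetical):

CHECKPOINT;               -- write dirty pages to disk so all buffers are clean
DBCC DROPCLEANBUFFERS;    -- empty the data cache
DBCC FREEPROCCACHE;       -- clear the plan cache for the whole instance
DBCC FLUSHPROCINDB(5);    -- or clear the plan cache for one database (5 = hypothetical dbid)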

Orphan SQL Server Session

  1. Orphaned sessions negatively affect SQL Server’s performance.
  2. They occur when a client improperly disconnects from SQL Server, such as when the client loses power.
  3. They affect SQL Server performance in two ways.
    1. First, orphaned connections use up SQL Server connections, which consumes server resources.
    2. Second, orphaned connections may be holding locks that block other users, or they may hold open temp tables or cursors that also consume server resources unnecessarily.
  4. The operating system periodically (every 1-2 hours) checks for inactive SQL Server sessions and notifies SQL Server.
  5. They can also be identified by executing sp_who2 (a blk value of -2 identifies blocking by an orphaned, inactive session); see the sketch below.
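
A minimal sketch for spotting such blockers:

EXEC sp_who2;   -- inspect the BlkBy column; a value of -2 indicates an orphaned blocker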

Avoid using “COUNT(*)”

  1. SELECT COUNT(*) FROM <table_name>
     >>  took 1:09 minutes to count 10,725,948 rows in the table.
  2. SELECT rows
     FROM sysindexes
     WHERE id = OBJECT_ID('<table_name>') AND indid < 2
     >>  took less than a second to count the same 10,725,948 rows.
Note: statistics need to be up to date for the sysindexes row count to be accurate. A DMV-based sketch follows.
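
On SQL Server 2005 and later, sys.dm_db_partition_stats offers the same fast metadata-based count; a minimal sketch (the table name is hypothetical):

SELECT SUM(row_count) AS approx_rows
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID('dbo.MyTable')   -- hypothetical table name
  AND index_id < 2;                          -- heap (0) or clustered index (1) only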

ALTER schema for large tables

1.   Altering a schema through SSMS can take much longer than performing the same change with the T-SQL ALTER command.
2.   SSMS may not use ALTER; instead it may recreate an entire new table based on the new schema and then move the data from the old table to the new table, which can take a long time for large tables. A sketch of the direct ALTER follows.
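
A minimal sketch of the direct T-SQL approach (table and column names are hypothetical); adding a NULLable column like this is a quick metadata-only change in most cases:

ALTER TABLE dbo.BigTable
    ADD Notes varchar(255) NULL;   -- hypothetical column; no table rebuild required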

Data Partitioning

  1. Both table and index partitioning can be performed.
  2. It is used to efficiently manage large tables and indexes.
  3. Data partitioning improves performance and increases the availability of data.

STEP 1

Let us assume that we have a database, "Data Partition DB", with two different file groups, as shown below.

CREATE DATABASE [Data Partition DB]
ON PRIMARY
(
    NAME = 'Data Partition DB Primary FG',
    FILENAME = 'C:\Data\Primary\Data Partition DB Primary FG.mdf',
    SIZE = 5MB,
    MAXSIZE = 500MB,
    FILEGROWTH = 1MB
),
FILEGROUP [Data Partition DB FG1]
(
    NAME = 'Data Partition DB FG1',
    FILENAME = 'C:\Data\FG1\Data Partition DB FG1.ndf',
    SIZE = 5MB,
    MAXSIZE = 500MB,
    FILEGROWTH = 1MB
),
FILEGROUP [Data Partition DB FG2]
(
    NAME = 'Data Partition DB FG2',
    FILENAME = 'D:\Data\FG2\Data Partition DB FG2.ndf',
    SIZE = 5MB,
    MAXSIZE = 500MB,
    FILEGROWTH = 1MB
)

STEP 2 (Create the partition function)

USE [Data Partition DB]
GO
CREATE PARTITION FUNCTION
[Data Partition Range] (int)
AS RANGE LEFT FOR VALUES (100);

STEP 3 (Associate the partition function with the proper file groups)

USE [Data Partition DB]
GO
CREATE PARTITION SCHEME
[Data Partition Scheme]
AS PARTITION [Data Partition Range]
TO ([Data Partition DB FG1],
        [Data Partition DB FG2]);

STEP 4 (The CREATE TABLE statement must reference the partition scheme and partition key)

USE [Data Partition DB]
GO
CREATE TABLE MyTable
    (ID INT NOT NULL,
     Date DATETIME,
     Cost MONEY)
ON [Data Partition Scheme] (ID);

STEP 5 (Create a partition-aligned clustered index)

USE [Data Partition DB]
GO
CREATE UNIQUE CLUSTERED INDEX MyTable_IXC
    ON MyTable (ID)
    ON [Data Partition Scheme] (ID);

STEP 6 (Load sample rows on both sides of the partition boundary)

USE [Data Partition DB]
GO
DECLARE @count int
SET @count = 1
WHILE @count <= 100
BEGIN
    INSERT INTO MyTable SELECT @count, GETDATE(), 100.00
    SET @count = @count + 1
END
SET @count = 102
WHILE @count <= 202
BEGIN
    INSERT INTO MyTable SELECT @count, GETDATE(), 200.00
    SET @count = @count + 1
END
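
To verify the distribution, the $PARTITION function maps each ID to its partition number; a minimal sketch:

SELECT $PARTITION.[Data Partition Range](ID) AS partition_number,
       COUNT(*) AS row_count
FROM MyTable
GROUP BY $PARTITION.[Data Partition Range](ID);
-- expected: 100 rows in partition 1 (ID <= 100) and 101 rows in partition 2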

Microsoft Catalog Views for Data Partitioning

1.   SELECT * FROM sys.partition_functions
2.   SELECT * FROM sys.partition_parameters
3.   SELECT * FROM sys.partition_range_values
4.   SELECT * FROM sys.partition_schemes
5.   SELECT * FROM sys.partitions

ISOLATION LEVELS

Isolation Levels in SQL Server
  1. Read Committed [default isolation level >> releases shared locks on reads as soon as the SELECT statement finishes].
  2. Read Uncommitted [lowest level >> no shared locks].
  3. Repeatable Read [shared locks held until the end of the transaction].
  4. Serializable [highest level >> isolation increases but concurrency decreases >> uses key-range locks >> prevents insertion of new rows into a range of values until the end of the transaction].
  5. Snapshot Isolation (see the sketch below)
a)     Non-blocking Read Committed.
b)     Reduces blocking reads and allows transactions to read previously committed versions of data.
c)     Snapshot isolation is enabled at the database level.
d)     After it is enabled, SQL Server stores copies of changed data in tempdb.
e)     The timestamp of each change is linked to a copy of the previously committed row in tempdb [a linked list of versions].
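
A minimal sketch of enabling and using snapshot isolation (database and table names are hypothetical):

ALTER DATABASE [MyDB] SET ALLOW_SNAPSHOT_ISOLATION ON;   -- enables row versioning in tempdb
GO
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
SELECT * FROM dbo.Orders;   -- reads the last committed version; takes no shared locks
COMMIT;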
