Essential for Performance Tuning
Performance tuning aims to improve the performance of a SQL Server database.
- It maximizes the use of system resources so that work is performed as efficiently and rapidly as possible.
- It reduces the time taken to retrieve data from the database.
- It improves application performance.
- It reduces deadlock occurrences.
Activity Monitor – SQL 2008
- Activity Monitor provides a graphical view of recently executed processes, resource waits, data file I/O, and recent expensive queries.
- Point-in-time reports can be fetched from Activity Monitor (pause/resume it with a simple right-click).
- It shows recent expensive queries (right-click a line item to display the full query text or the graphical execution plan).
- Processes view (run a Profiler trace or kill the process).
- Profiler events (RPC:Completed, SQL:BatchStarting, and SQL:BatchCompleted events, plus Audit Login and Audit Logout).
- DMVs: sys.dm_os_wait_stats, sys.dm_os_waiting_tasks, and sys.dm_tran_locks (the wait information can also be queried directly, as in the sketch below).
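A minimal sketch of querying the wait-stats DMV directly (top waits by accumulated wait time):
SELECT TOP 10 wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;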
SQL Audit Through Extended Events – SQL 2008
- Audit results can be sent to a file or to the event log.
- Server-level audit action groups (a setup sketch follows the list below):
A. FAILED_LOGIN_GROUP, which tracks failed logins.
B. BACKUP_RESTORE_GROUP, which shows when a database was backed up or restored.
C. DATABASE_CHANGE_GROUP, which audits when a database is created, altered, or dropped.
- Database-level audit action groups:
A. DATABASE_OBJECT_ACCESS_GROUP, which is raised whenever a CREATE, ALTER, or DROP statement is executed on database objects.
B. DATABASE_OBJECT_PERMISSION_CHANGE_GROUP, which is raised when GRANT, REVOKE, or DENY is used on database objects.
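A minimal sketch of a server-level audit that captures failed logins (the audit names and the file path are placeholders):
USE master;
GO
CREATE SERVER AUDIT [FailedLoginAudit]
TO FILE (FILEPATH = 'C:\AuditLogs\');
GO
CREATE SERVER AUDIT SPECIFICATION [FailedLoginAuditSpec]
FOR SERVER AUDIT [FailedLoginAudit]
ADD (FAILED_LOGIN_GROUP);
GO
ALTER SERVER AUDIT [FailedLoginAudit] WITH (STATE = ON);
ALTER SERVER AUDIT SPECIFICATION [FailedLoginAuditSpec] WITH (STATE = ON);
GO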
Backup Compression – SQL 2008
- Less storage, less disk I/O, and less time.
- It typically reduces backup file size by 70% to 85% (a sketch follows this list).
- Higher processor utilization may occur because of the additional compression work.
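A minimal sketch, assuming a database named MyDB and a placeholder backup path:
BACKUP DATABASE MyDB
TO DISK = 'D:\Backups\MyDB.bak'
WITH COMPRESSION;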
Transaction Log Growth
Steps for a quick way to shrink transaction log growth (a sketch follows the list):
1. Change the database recovery mode from Full to Simple.
2. Truncate the log file by performing a full backup of the database.
3. Switch back to full recovery mode.
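A minimal sketch of these steps, assuming a database named MyDB whose log file is logically named MyDB_log; the physical file shrink itself is done with DBCC SHRINKFILE:
ALTER DATABASE MyDB SET RECOVERY SIMPLE;   -- step 1: switch to simple recovery
CHECKPOINT;                                -- truncates the inactive portion of the log
DBCC SHRINKFILE (MyDB_log, 1);             -- shrink the physical log file to about 1 MB
ALTER DATABASE MyDB SET RECOVERY FULL;     -- step 3: switch back to full recovery
BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB.bak';  -- restart the log backup chain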
Transaction Log Process in Simple Recovery Model
Checkpoint >> Inactive portion of the T-Log is truncated >> Only the active portion remains
TRUNCATE instead of DELETE
- TRUNCATE can be used instead of a DELETE statement when the table is very large (see the sketch after this list).
- TRUNCATE is a minimally logged operation; only the page deallocations are logged.
- Statistics need to be updated manually afterwards.
- Foreign key references have to be dropped before issuing a TRUNCATE statement.
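A minimal sketch, assuming a hypothetical table dbo.StagingData with no incoming foreign key references:
TRUNCATE TABLE dbo.StagingData;      -- deallocates all data pages in the table
UPDATE STATISTICS dbo.StagingData;   -- refresh statistics manually, as noted above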
UMS (User Mode Scheduler)
- SQL Server's own internal thread scheduler.
- It is used to identify CPU bottlenecks.
- It serves a similar purpose to the System Object: Processor Queue Length counter in Performance Monitor.
- It focuses only on SQL Server threads, not on all threads running on the server.
- DBCC SQLPERF (UMSSTATS)
- For every CPU, a scheduler is assigned, identified by a number starting from zero (0).
- Key attributes:
- num users -- number of SQL Server threads for a specific scheduler.
- num runnable -- scheduler queue length, i.e. the number of threads waiting to run (a sustained value greater than 0 indicates a CPU bottleneck).
I/O Activity – DB File
It is a table-valued function that provides statistical information on the I/O activity of a specific database file.
>> SELECT * FROM ::fn_virtualfilestats(dbid, fileid)
1. NumberReads: the number of physical reads made against this file since the last time SQL Server was restarted.
2. NumberWrites: the number of physical writes made against this file since the last time SQL Server was restarted.
3. IoStallMS: the total amount of time that users have waited for I/Os to complete on this file (in milliseconds).
>> If (IoStallMS / (NumberReads + NumberWrites)) > 20 ms, it indicates I/O to the transaction log is becoming a bottleneck. A per-file calculation is sketched below.
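A minimal sketch of the stall-per-I/O calculation for every file of the current database (passing NULL for the file id returns all files):
SELECT FileId,
       IoStallMS / NULLIF(NumberReads + NumberWrites, 0) AS AvgStallMSPerIO
FROM ::fn_virtualfilestats(DB_ID(), NULL);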
Memory (RAM) Availability
A quick-and-dirty way to check is in Task Manager (Performance tab):
- If the "Total" under "Commit Charge (K)" is greater than the "Total" under "Physical Memory (K)", the server does not have enough RAM to run efficiently and may cause excessive paging.
>> "Available Physical Memory (K)" should be greater than 4 MB.
Disk I/O
1. It can be found by checking the total "I/O Read Bytes" and "I/O Write Bytes" of the sqlservr.exe process in Task Manager.
2. RAID 5 is more efficient for reads.
3. RAID 10 is more efficient for writes.
4. Case: SQL Server has 415,006,801,908 I/O bytes read and 204,669,746,458 bytes written. This server performs about one write for every two reads, so RAID 5 is probably a good compromise in performance.
Before Experimenting with Performance
- It is recommended to run the statements below before experimenting with the performance of SQL Server (a combined sketch follows the list).
a) CHECKPOINT – writes dirty buffers from the data cache to disk.
b) DBCC DROPCLEANBUFFERS – removes clean buffers from the data cache.
c) DBCC FREEPROCCACHE – clears the stored procedure cache for the whole server/instance.
d) DBCC FLUSHPROCINDB – clears the stored procedure cache for a single database.
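A minimal sketch combining these on a test server (the database id 5 is a placeholder; never run this on production):
CHECKPOINT;               -- write dirty pages from the data cache to disk
DBCC DROPCLEANBUFFERS;    -- empty the clean buffers from the data cache
DBCC FREEPROCCACHE;       -- clear the plan cache for the whole instance
DBCC FLUSHPROCINDB(5);    -- clear plans only for the database with dbid 5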
Orphaned SQL Server Sessions
- They negatively affect SQL Server's performance.
- They occur when a client improperly disconnects from SQL Server, such as when the client loses power.
- They affect SQL Server performance in two ways:
- First, orphaned connections use up SQL Server connections, which takes up server resources.
- Second, orphaned connections may hold locks that block other users, or keep temp tables or cursors open that also consume server resources unnecessarily.
- The operating system periodically (every 1-2 hours) checks for and notifies SQL Server of inactive sessions.
- They can also be identified by executing sp_who2 (a blocking session id of -2 indicates an orphaned session blocking a process), as sketched below.
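A minimal sketch (the session id 53 is a placeholder; kill a session only after confirming it is orphaned):
EXEC sp_who2;   -- look for -2 in the BlkBy column
-- KILL 53;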
Avoid Using COUNT(*)
- SELECT COUNT(*) FROM <table_name>
>> took 1:09 minutes to count 10,725,948 rows in the table.
- SELECT rows FROM sysindexes WHERE id = OBJECT_ID('<table_name>') AND indid < 2
>> took less than a second to count the same 10,725,948 rows.
Note: statistics need to be up to date for the sysindexes row count to be accurate.
ALTER Schema for Large Tables
1. Altering a schema through SSMS can take much longer than performing the same change through a T-SQL ALTER command (a sketch follows).
2. SSMS may not use ALTER but instead re-create an entire new table based on the new schema and then move the data from the old table to the new table, which can take a long time for large tables.
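A minimal sketch, assuming a hypothetical large table dbo.BigTable:
ALTER TABLE dbo.BigTable ADD LastUpdated datetime NULL;  -- adding a nullable column is a metadata-only change, no table rebuild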
Data Partitioning
- Both table and index partitioning can be performed.
- It is used to manage large tables and indexes efficiently.
- Data partitioning improves performance and increases the availability of data.
Step 1
Let us assume that we have a database, "Data Partition DB," with two different filegroups, as shown below.
CREATE DATABASE [Data Partition DB] ON PRIMARY
(
NAME = 'Data Partition DB Primary FG',
FILENAME = 'C:\Data\Primary\Data Partition DB Primary FG.mdf',
SIZE = 5MB,
MAXSIZE = 500MB,
FILEGROWTH = 1MB
),
FILEGROUP [Data Partition DB FG1]
(
NAME = 'Data Partition DB FG1',
FILENAME = 'C:\Data\FG1\Data Partition DB FG1.ndf',
SIZE = 5MB,
MAXSIZE = 500MB,
FILEGROWTH = 1MB
),
FILEGROUP [Data Partition DB FG2]
(
NAME = 'Data Partition DB FG2',
FILENAME = 'D:\Data\FG2\Data Partition DB FG2.ndf',
SIZE = 5MB,
MAXSIZE = 500MB,
FILEGROWTH = 1MB
)
STEP 2 (Create a partition function that splits rows at the boundary value 100)
USE [Data Partition DB]
GO
CREATE PARTITION FUNCTION [Data Partition Range] (int)
AS RANGE LEFT FOR VALUES (100)
STEP 3 (Associate the partition function with the proper filegroups)
USE [Data Partition DB]
GO
CREATE PARTITION SCHEME [Data Partition Scheme]
AS PARTITION [Data Partition Range]
TO ([Data Partition DB FG1], [Data Partition DB FG2]);
STEP 4 (The CREATE TABLE statement must specify the partition scheme and partition key)
USE [Data Partition DB]
GO
CREATE TABLE MyTable
(
ID INT NOT NULL,
Date DATETIME,
Cost MONEY
)
ON [Data Partition Scheme] (ID);
STEP 5 (Create a unique clustered index on the partition scheme)
USE [Data Partition DB]
GO
CREATE UNIQUE CLUSTERED INDEX MyTable_IXC
ON MyTable (ID)
ON [Data Partition Scheme] (ID)
STEP 6 (Load rows into both partitions: with RANGE LEFT and boundary 100, IDs 1-100 land in partition 1 and IDs 102-202 in partition 2)
USE [Data Partition DB]
GO
DECLARE @count INT
SET @count = 1
WHILE @count <= 100
BEGIN
INSERT INTO MyTable SELECT @count, GETDATE(), 100.00
SET @count = @count + 1
END
SET @count = 102
WHILE @count <= 202
BEGIN
INSERT INTO MyTable SELECT @count, GETDATE(), 200.00
SET @count = @count + 1
END
Microsoft catalog views for data partitioning (a verification sketch follows the list):
1. select * from sys.partition_functions
2. select * from sys.partition_parameters
3. select * from sys.partition_range_values
4. select * from sys.partition_schemes
5. select * from sys.partitions
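A minimal sketch verifying how the rows inserted above were distributed across the partitions:
SELECT partition_number, rows
FROM sys.partitions
WHERE object_id = OBJECT_ID('MyTable');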
Isolation Levels in SQL Server
- Read Committed [default isolation level >> releases shared locks on reads after finishing the SELECT statement].
- Read Uncommitted [lowest level >> no shared locks].
- Repeatable Read [holds shared locks until the end of the transaction].
- Serializable [highest level >> isolation increases but concurrency decreases >> uses key-range locks >> prevents insertion of new rows into a range of values until the end of the transaction].
- Snapshot Isolation (an enabling sketch follows this list)
a) Non-blocking Read Committed.
b) Reduces blocking reads and allows transactions to read previously committed versions of data.
c) Snapshot isolation is enabled at the database level.
d) Once enabled, copies of changed data are stored in tempdb.
e) The timestamp of each change is linked to a copy of the previously committed row in tempdb [a linked list].
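A minimal sketch of enabling and using snapshot isolation (MyDB and dbo.Orders are placeholders):
ALTER DATABASE MyDB SET ALLOW_SNAPSHOT_ISOLATION ON;
GO
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
SELECT * FROM dbo.Orders;  -- reads the last committed version; writers are not blocked
COMMIT;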