About Me

Mumbai, Maharashtra, India
He has more than 7.6 years of experience in software development, most of it spent on web and desktop application development. He has sound knowledge of various database concepts. You can reach him at viki.keshari@gmail.com https://www.linkedin.com/in/vikrammahapatra/ https://twitter.com/VikramMahapatra http://www.facebook.com/viki.keshari


Thursday, November 22, 2018

Performance of Read-Ahead Read with Trace Flag 652

When a user submits a query to SQL Server, the database engine first checks whether the data pages holding the requested data are already present in the buffer cache. If they are, it performs a logical read and sends the data back to the user. If they are not, it performs a physical read, i.e. it reads the pages from disk, which is an expensive operation involving high I/O and waits.

To reduce physical reads, SQL Server has a mechanism known as Read-Ahead Read: it brings data pages into the buffer cache even before the query requests them. Read-ahead is the default behavior of SQL Server.

In this post we will compare the performance of read-ahead reads against plain physical reads, with the help of Trace Flag 652 (which disables read-ahead).
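Before running the tests, a toy cost model shows why prefetching helps: read-ahead overlaps disk I/O with query processing, so total elapsed time approaches the cost of the slower stage instead of the sum of both. This is a Python sketch with invented per-page timings, purely illustrative and not how the storage engine actually schedules I/O:

```python
# Toy cost model: N pages, each needing disk_ms to read and cpu_ms to process.
def elapsed_without_readahead(pages, disk_ms, cpu_ms):
    # Every page is fetched on demand: read, then process, strictly serial.
    return pages * (disk_ms + cpu_ms)

def elapsed_with_readahead(pages, disk_ms, cpu_ms):
    # Prefetching overlaps reads with processing: after the first read,
    # the pipeline is bound by the slower of the two stages.
    return disk_ms + (pages - 1) * max(disk_ms, cpu_ms) + cpu_ms

# 344 pages (the read-ahead count below), 4 ms per read, 1 ms per page of CPU
print(elapsed_without_readahead(344, 4, 1))  # 1720
print(elapsed_with_readahead(344, 4, 1))     # 1377
```

The absolute numbers mean nothing; the point is that the overlapped pipeline never pays for both stages per page.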

I have created two sets of queries: one using read-ahead reads to fetch the data, and the other using physical reads only.

With Read-Ahead Read: here I turn STATISTICS IO on to capture the counters, and first clear the plan cache and buffer pool (do not run these DBCC commands in production):

dbcc traceoff(652,-1)  -- make sure read-ahead is enabled
dbcc freeproccache     -- do not run this on prod
dbcc dropcleanbuffers  -- do not run this on prod
go
set statistics io on
set statistics time on 
  --select * from dbo.person   
  select * from person.address
set statistics io off
set statistics time off
go

Let’s check what the IO info is saying:
(19614 row(s) affected)
Table 'Address'. Scan count 1, logical reads 346, physical reads 1, read-ahead reads 344, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

 SQL Server Execution Times:
   CPU time = 93 ms,  elapsed time = 855 ms.

 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 0 ms.

Here we can see read-ahead reads is 344; that means to fetch 19,614 records, the storage engine brought 344 eight-KB pages into cache ahead of the plan actually needing them. Now let’s check without read-ahead.

dbcc traceon(652,-1)   -- disable read-ahead
dbcc dropcleanbuffers  -- do not run this on prod
dbcc freeproccache     -- do not run this on prod
go
set statistics io on
set statistics time on
       --  select * from dbo.person   
       select * from person.address
set statistics io off
set statistics time off
go

Let’s check what the IO info is saying:

(19614 row(s) affected)
Table 'Address'. Scan count 1, logical reads 345, physical reads 233, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

 SQL Server Execution Times:
   CPU time = 141 ms,  elapsed time = 3041 ms.

 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 0 ms.

Here we can clearly see the difference: with read-ahead the query finished in 855 ms elapsed, while with read-ahead disabled (233 physical reads) it took 3041 ms.

Conclusion: read-ahead reads perform much better than plain physical reads in our case, roughly 3.5x faster in elapsed time.

Enjoy coding… SQL :)

Post Reference: Vikram Aristocratic Elfin

Monday, November 19, 2018

Logical Read, Physical Read and Buffer Cache Hit

Logical Reads: 
This is also known as a cache hit, which means reading pages from cache memory instead of disk. The logical reads counter specifies the total number of data pages that had to be accessed from the data cache to process the query. It is quite possible for logical reads to access the same data page many times, so the logical read count may be higher than the actual number of pages in the table. Usually the best way to reduce logical reads is to apply the correct index or to rewrite the query.

Physical Reads 
Physical reads indicates the total number of data pages read from disk. If no data is in the data cache, the number of physical reads will equal the number of logical reads; this usually happens for the first query request. For subsequent identical requests the number decreases substantially, because the data pages are already in the data cache.

Buffer Cache Hit Ratio
The higher the logical read count relative to physical reads, the better the cache hit ratio:
(logical reads – physical reads) / logical reads * 100%. A high buffer cache hit ratio (as near to 100% as possible) indicates good database performance.
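The formula is simple arithmetic; here is a quick Python sketch (illustrative only) applying it to the counters from the read-ahead test in the post above (345 logical reads, 233 physical reads):

```python
def buffer_cache_hit_ratio(logical_reads, physical_reads):
    # (logical reads - physical reads) / logical reads, as a percentage
    if logical_reads == 0:
        return 0.0
    return (logical_reads - physical_reads) / logical_reads * 100

# 345 logical reads, 233 physical reads -> a poor hit ratio on a cold cache
print(round(buffer_cache_hit_ratio(345, 233), 1))  # 32.5

# Same query on a warm cache: 345 logical reads, 0 physical reads
print(buffer_cache_hit_ratio(345, 0))  # 100.0
```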

Some warnings on a high number of logical reads:
A higher number of logical reads tends to mean higher memory usage, but there are various ways to reduce it:
  1. Remove improper/useless indexes: indexes should be built on the basis of the data access and retrieval patterns; an index built on columns that are not used by any query leads to high logical reads and degrades performance for both reading and writing data.
  2. Poor fill factor/page density: page space should not be left largely unused, otherwise a large number of pages will be needed for a small amount of data, which also leads to high logical reads.
  3. Wide indexes: indexing a large number of columns leads to high logical reads.
  4. Index scanning: if the query leads to an index scan on the table, logical reads will be high.
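The fill-factor point (2) can be sanity-checked with simple arithmetic: halving page density roughly doubles the pages, and therefore the logical reads, needed for the same rows. A Python sketch (illustrative; 8096 bytes of usable space per 8 KB page is an approximation, and real row packing is more complex):

```python
import math

def pages_needed(rows, row_bytes, fill_factor=1.0, page_bytes=8096):
    # How many rows fit on one page at the given fill factor
    rows_per_page = math.floor(page_bytes * fill_factor / row_bytes)
    return math.ceil(rows / rows_per_page)

# 19614 rows of ~400 bytes each: full pages vs. a 50% fill factor
print(pages_needed(19614, 400, 1.0))  # 981
print(pages_needed(19614, 400, 0.5))  # 1962
```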


The logical reads count can be obtained in the following ways:
  1. set statistics io on
  2. sys.dm_exec_query_stats
  3. SQL Profiler: by running a trace against the database we can find the logical reads.


Example
set statistics io on
set statistics time on
    select * from dbo.person

    select * from dbo.person
set statistics io off
set statistics time off
go
(19972 row(s) affected)
Table 'Person'. Scan count 1, logical reads 150, physical reads 0, read-ahead reads 7, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

 SQL Server Execution Times:
   CPU time = 15 ms,  elapsed time = 963 ms.

(19972 row(s) affected)
Table 'Person'. Scan count 1, logical reads 150, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 564 ms.

 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 0 ms.
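When collecting these numbers repeatedly, the counters in a STATISTICS IO message can also be extracted programmatically, e.g. for logging. A Python sketch using a regular expression over the message text printed above (the parsing approach is mine, not a SQL Server facility):

```python
import re

MSG = ("Table 'Person'. Scan count 1, logical reads 150, physical reads 0, "
       "read-ahead reads 7, lob logical reads 0, lob physical reads 0, "
       "lob read-ahead reads 0.")

def parse_statistics_io(message):
    # Capture every "<counter name> <number>" pair from the message
    pairs = re.findall(r'([A-Za-z][A-Za-z -]*?)\s+(\d+)', message)
    return {name.strip(): int(value) for name, value in pairs}

print(parse_statistics_io(MSG)['logical reads'])     # 150
print(parse_statistics_io(MSG)['read-ahead reads'])  # 7
```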
Enjoy coding… SQL :)


Post Reference: Vikram Aristocratic Elfin

Tuesday, July 31, 2018

Character family datatypes and NULL in a UNION statement

Conclusion: the default datatype of NULL is INT (wait, let’s first prove it; we might be wrong).

Yesterday my teammate called me and showed me a strange behavior of NULL with character datatypes in a UNION clause (in a Teradata database), and this strange behavior is applicable to most databases (NOT to SQL Server :)). Let’s try to simulate the same scenario.

Here we have below Union statement
select 'a' as col1, 'b' as col2, 100 as col3, 200 as col4
union
select 'x' as col1, 'y' as col2, null, null
col1 col2 col3        col4
---- ---- ----------- -----------
a    b    100         200
x    y    NULL        NULL

(2 row(s) affected)

There are four columns in the query; the first two columns are of a character family datatype, and col3 and col4 are of an integer family datatype.

Now look at the second query

select 'x' as col1, 'y' as col2, null, null

Here, notice that the first two columns are of char datatype, the remaining two columns are NULL, and the union worked perfectly.

Let’s revise the UNION rules:
1.        The number of columns should match in all select queries involved in the union.
2.        The datatype of each column in one select query should match the corresponding column’s datatype in the other select queries participating in the UNION.

Now let’s return to our first select statement in the UNION query
select 'a' as col1, 'b' as col2, 100 as col3, 200 as col4

So by the UNION rule book, any query doing a union with the above query should have its first two columns of character datatype and its last two columns of INT datatype.

Now if you see our second query

select 'a' as col1, 'b' as col2, 100 as col3, 200 as col4
union
select 'x' as col1, 'y' as col2, null, null

The first two columns are of character datatype, which matches the first query; but the last two are NULL, which doesn’t match the INT datatype of col3 and col4 in the first query. Still, the query works fine.

There could be two reasons why the query worked fine:
  1.  NULL is compatible with the INT datatype.
  2.  There is an implicit conversion happening from NULL to the INT datatype, something like cast(NULL as INT).


Let’s dig further by rewriting the query

select 'a' as col1, 'b' as col2, 100 as col3, 200 as col4
union
select 'x', NULL, NULL, NULL


The above query fails, which could mean:
  1. The compiler is not able to do an implicit conversion of NULL to a character datatype.
  2. NULL is not compatible with character family datatypes.


Now let’s rewrite the query and cast the NULL in the second column of the second query:
select 'a' as col1, 'b' as col2, 100 as col3, 200 as col4
union
select 'x', cast (NULL as varchar(1)), NULL, NULL
col1 col2 col3        col4
---- ---- ----------- -----------
a    b    100         200
x    NULL NULL        NULL

The query works fine.

We can infer from the above set of queries that NULL is compatible with INT but not with CHAR datatypes.
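The behavior observed so far can be summed up as a small set of rules. Here is a toy model of it in Python (purely illustrative, my own sketch, not the actual Teradata or SQL Server type-resolution code), in which an untyped NULL defaults to INT:

```python
def union_column_type(t1, t2):
    """Resolve the result type of one UNION column, treating an
    untyped NULL as INT (the default behavior observed above)."""
    t1 = 'INT' if t1 == 'NULL' else t1
    t2 = 'INT' if t2 == 'NULL' else t2
    if t1 != t2:
        raise TypeError(f'cannot union {t1} with {t2}')
    return t1

print(union_column_type('INT', 'NULL'))   # INT: the bare NULL defaults to INT
print(union_column_type('CHAR', 'CHAR'))  # CHAR: what cast(NULL as varchar(1)) gives us
# union_column_type('CHAR', 'NULL') raises TypeError, mirroring the failing query
```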

Let’s find out the root cause of this.

The question in your mind must be: does NULL have any default datatype? Let’s try to find the answer.

Here I am storing the result of a query in a temporary table, #tempotable:

select null as col1 into #tempotable

Let’s now see the table and column properties of #tempotable:
TABLE_QUALIFIER      TABLE_OWNER   COLUMN_NAME   TYPE_NAME     PRECISION     LENGTH
-----------------------------------------------------------------------
tempdb               dbo            col1          int           10            4

Here we can see the datatype of NULL is treated as INT, with a 4-byte length. :)

So do we conclude that the default datatype of NULL is INT?

Wait, before concluding anything so early :)

Let’s declare a variable of sql_variant type and assign it NULL:

declare @var sql_variant = NULL

Now let’s find out the datatype of the @var variable:

select sql_variant_property(@var, 'Basetype') as TypeName,
       sql_variant_property(@var, 'Precision') as Precision

Output:
TypeName     Precision
---------    ---------
NULL         NULL

Now this shows that NULL has no datatype :)

What we conclude: NULL has no datatype of its own, but when it takes part in forming a result set, the compiler treats NULL as an INT datatype with a 4-byte length and precision 10.

NOTE: SQL Server 2005 had this issue; later versions rectified it and have a higher degree of NULL handling. Enjoy coding… SQL :)



Post Reference: Vikram Aristocratic Elfin