About Me

Mumbai, Maharashtra, India
He has more than 7.6 years of experience in software development, most of it spent on web/desktop application development, and has sound knowledge of various database concepts. You can reach him at viki.keshari@gmail.com, https://www.linkedin.com/in/vikrammahapatra/, https://twitter.com/VikramMahapatra, or http://www.facebook.com/viki.keshari


Showing posts with label PAGE and EXTENTS. Show all posts

Thursday, November 22, 2018

Performance of Read-Ahead Reads with Trace Flag 652

When a user submits a query, the database engine first checks whether the requested data pages are already present in the buffer cache. If they are, it performs a logical read and returns the data to the user; if they are not, it has to perform a physical read, i.e. fetch the pages from disk, which is an expensive operation involving heavy IO and waits.

To reduce physical reads, SQL Server has a mechanism known as read-ahead reads: it brings data pages into the buffer cache even before the query actually requests them. Read-ahead is the default behavior of SQL Server.

In this post we will compare the performance of read-ahead reads against plain physical reads with the help of trace flag 652, which disables read-ahead.
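Before we start, a quick optional check: you can count how many pages of a table are already sitting in the buffer pool by querying sys.dm_os_buffer_descriptors. This is not part of the original test, just a sanity check; it assumes you run it inside the same database (AdventureWorks) that holds the Person.Address table used below, where you should see 0 cached pages right after DBCC DROPCLEANBUFFERS.

-- optional: count cached pages for Person.Address in the current database
select count(*) as cached_pages
from sys.dm_os_buffer_descriptors bd
join sys.allocation_units au on bd.allocation_unit_id = au.allocation_unit_id
join sys.partitions p on au.container_id = p.partition_id
where bd.database_id = DB_ID()
  and p.object_id = OBJECT_ID('Person.Address')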

I have created two sets of queries: one that uses read-ahead reads to fetch the data and one that relies on physical reads alone.

With read-ahead reads: here I turn STATISTICS IO and TIME on to capture the numbers and first clear the plan cache and the buffer pool (trace flag 652 stays off, so read-ahead remains enabled).

dbcc traceoff(652,-1)    -- make sure trace flag 652 is off so read-ahead stays enabled
dbcc freeproccache       -- clear the plan cache (do not run this on prod)
dbcc dropcleanbuffers    -- clear the buffer pool (do not run this on prod)
go
set statistics io on
set statistics time on
  --select * from dbo.person
  select * from person.address
set statistics io off
set statistics time off
go

Let's check what the IO statistics say:
(19614 row(s) affected)
Table 'Address'. Scan count 1, logical reads 346, physical reads 1, read-ahead reads 344, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

 SQL Server Execution Times:
   CPU time = 93 ms,  elapsed time = 855 ms.

 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 0 ms.

Here we can see the read-ahead reads count is 344: to return the 19,614 rows, the storage engine pre-fetched 344 eight-KB pages into the buffer cache before the plan even asked for them, and the query finished in 855 ms of elapsed time. Now let's check without read-ahead reads.

dbcc traceon(652,-1)     -- turn trace flag 652 on globally to disable read-ahead
dbcc dropcleanbuffers    -- do not run this on prod
dbcc freeproccache       -- do not run this on prod
go
set statistics io on
set statistics time on
       --  select * from dbo.person
       select * from person.address
set statistics io off
set statistics time off
go

Let's check what the IO statistics say:

(19614 row(s) affected)
Table 'Address'. Scan count 1, logical reads 345, physical reads 233, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

 SQL Server Execution Times:
   CPU time = 141 ms,  elapsed time = 3041 ms.

 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 0 ms.

Here we can clearly see the difference: with read-ahead reads the query took 855 ms of elapsed time, whereas with read-ahead disabled it needed 233 physical reads and took 3041 ms.

Conclusion: In our test, read-ahead reads perform considerably better than plain physical reads.
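One housekeeping note: the second test switched trace flag 652 on globally (the -1 argument), so remember to switch it back off when you are done, otherwise read-ahead stays disabled for the whole instance. A minimal cleanup sketch:

dbcc traceoff(652,-1)   -- re-enable read-ahead for the whole instance
dbcc tracestatus(-1)    -- confirm no unexpected global trace flags are still on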

Enjoy coding… SQL :)

Post Reference: Vikram Aristocratic Elfin Share

Monday, December 23, 2013

Special Storage Series – III Unrestricted-Length Object Data I



There are mainly three kinds of LOB data types:
  • Text
  • Ntext
  • Image

If any field in your table uses one of these LOB types, then by default the actual data is not stored on the regular data pages: SQL Server keeps separate pages for storing LOB data. Like row-overflow data, LOB data is stored in its own set of pages.

The pages on which LOB data is stored are:
PageType 3, LOB or row-overflow page, TEXT_MIXED
PageType 4, LOB or row-overflow page, TEXT_DATA

For a LOB column, SQL Server stores a 16-byte pointer in the data row that indicates where the actual data can be found, i.e. on a PageType 3 or PageType 4 page. Although the default behavior is to store the LOB data off the data row, SQL Server also allows you to change this behavior and keep LOB data in the data row itself if it is small enough.
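For the text/ntext/image types this in-row behavior is controlled by the documented 'text in row' table option of sp_tableoption. A minimal sketch, using the Tab_VKM_LOBTable table created just below (the 1000-byte limit is only an illustrative value):

-- keep text/ntext/image values of up to 1000 bytes in the data row itself
exec sp_tableoption 'dbo.Tab_VKM_LOBTable', 'text in row', '1000'
-- revert to the default off-row storage
exec sp_tableoption 'dbo.Tab_VKM_LOBTable', 'text in row', 'OFF'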

The collection of 8-KB pages that make up the LOB data is not necessarily located contiguously; the pages may be scattered, but logically they are organized in a B-tree structure, so operations on LOB data remain efficient.

Let's take a look at how data of the text type, i.e. a LOB type, gets stored on separate LOB pages. For this we create a table Tab_VKM_LOBTable with col1 as a text data type.

create table Tab_VKM_LOBTable
(col1 text)
Command(s) completed successfully.

Let's insert a record into the table.

insert into Tab_VKM_LOBTable
select REPLICATE('a',3000)
(1 row(s) affected)

Now we will inspect the table's storage details, i.e. in which kinds of pages the table's data is stored, by querying the system catalog views sys.partitions and sys.allocation_units.

select OBJECT_NAME(p.object_id) as TableName,
p.partition_id,p.partition_number,p.rows,
au.total_pages as Pages,
au.type_desc as PageType
from sys.partitions p, sys.allocation_units au
where p.partition_id=au.container_id
and p.object_id=OBJECT_ID('Tab_VKM_LOBTable')

TableName        partition_id      partition_number rows Pages PageType
--------------------------------------------------- ----------------------
Tab_VKM_LOBTable 72057594041008128 1                1    2     IN_ROW_DATA
Tab_VKM_LOBTable 72057594041008128 1                1    2     LOB_DATA

From the output we can see there are two pages for each page type. We can run DBCC IND to see the detail of each page. Here we are using the custom table we created in our last post (Special Storage Series – II Restricted-Length Object Data (Row Overflow Data) – II.A) to store the output of DBCC IND.
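In case you don't have that table handy, here is a sketch of a table whose columns line up with DBCC IND's output (the exact definition is in the II.A post; the data types below are just reasonable guesses):

create table TAB_VKM_TablePages
(PageFID smallint, PagePID int, IAMFID smallint, IAMPID int,
 ObjectID int, IndexID int, PartitionNumber int, PartitionID bigint,
 iam_chain_type varchar(30), PageType int, IndexLevel int,
 NextPageFID smallint, NextPagePID int, PrevPageFID smallint, PrevPagePID int)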

Here we are executing DBCC IND for 'Tab_VKM_LOBTable' and storing the output in our custom-made TAB_VKM_TablePages table.

insert into TAB_VKM_TablePages
exec ('dbcc ind (''master'',''Tab_VKM_LOBTable'',-1)')

select PageFID,PagePID,OBJECT_NAME(objectId) as [Table],
iam_chain_type,PageType from TAB_VKM_TablePages

PageFID PagePID Table            iam_chain_type PageType
------- ------------------------ -------------- --------
1       184     Tab_VKM_LOBTable In-row data    10
1       183     Tab_VKM_LOBTable In-row data    1
1       182     Tab_VKM_LOBTable LOB data       10
1       181     Tab_VKM_LOBTable LOB data       3

(4 row(s) affected)

From the output we can observe two pages for LOB data and two pages for in-row data. The data page holds a 16-byte pointer to the LOB page. The page type values in the output mean:

PageType = 1, Data Page
PageType = 2, Index Page
PageType = 3, LOB or Row-Overflow page, TEXT_MIXED
PageType = 4, LOB or Row-Overflow page, TEXT_DATA
PageType = 10, IAM Page

Here we can make out that there is one data page and one IAM page for both the in-row data and the LOB data.
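If you want to look inside the LOB page itself, the undocumented (but widely used) DBCC PAGE command can dump it once trace flag 3604 routes DBCC output to the client. The page id 181 below is simply the LOB data page from the DBCC IND output above, so substitute whatever page id you see on your own system:

dbcc traceon(3604)              -- send DBCC PAGE output to the client
dbcc page('master', 1, 181, 3)  -- database, file id, page id, print option 3 (full detail)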

It is also possible to store LOB data in in-row data pages; we will see this in a coming article in this series.

Code confidently in the course of your logic, and others will bow their heads in the direction of your code's triumph.


Post Reference: Vikram Aristocratic Elfin Share

Sunday, December 22, 2013

Special Storage Series – II Restricted-Length Object Data (Row Overflow Data) – II.C Update statement on row-overflow pages data.


In this article we continue from the previous one and look at the effect of an UPDATE statement on data rows that span many row-overflow pages.

Let's take a look at a table whose variable-length columns add up to well over 8 KB.

create table Tab_VKM_BigTable_III
(col1 varchar(3000),
col2 varchar(8000),
col3 varchar(8000),
col4 varchar(8000))
Command(s) completed successfully.

Our table is ready; its maximum record size is greater than 27,000 bytes. Let's insert a row of more than 27 KB.

insert into Tab_VKM_BigTable_III
select REPLICATE('a',3000),REPLICATE('b',8000),
       REPLICATE('c',8000),REPLICATE('d',8000)
(1 row(s) affected)

Now we have a record whose length is more than 27,000 bytes. You can query the table to see it:
select * from Tab_VKM_BigTable_III

Now we will inspect the table's storage details, i.e. in which kinds of pages the table's data is stored, by querying the system catalog views sys.partitions and sys.allocation_units.

sys.partitions holds a row for each partition of the table (a non-partitioned table still gets an entry, with a partition number of 1). It gives the partition id, the partition number, and the number of rows in the table.

sys.allocation_units holds information about how the data is allocated across pages, such as the page type and the number of pages.

select OBJECT_NAME(p.object_id) as TableName,
p.partition_id,p.partition_number,p.rows,
au.total_pages as Pages,
au.type_desc as PageType
from sys.partitions p, sys.allocation_units au
where p.partition_id=au.container_id
and p.object_id=OBJECT_ID('Tab_VKM_BigTable_III')

TableName            partition_id         parti_no rows Pages PageType
-------------------- -------------------- -------- ---- ----- -----------------
Tab_VKM_BigTable_III 72057594042449920    1         1   2     IN_ROW_DATA
Tab_VKM_BigTable_III 72057594042449920    1         1   4     ROW_OVERFLOW_DATA

(2 row(s) affected)

From the output we can see there are four pages for row-overflow data: one row-overflow IAM page and three pages for the data that doesn't fit in the regular row.
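If you want to see exactly which of those four pages is the IAM page, DBCC IND (used in the other articles of this series) can list them. A minimal sketch, assuming the table lives in the same Sparsh database used in part II.B:

dbcc ind ('Sparsh', 'Tab_VKM_BigTable_III', -1)   -- lists every page allocated to the table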

Now we will fire an UPDATE statement on col2 of Tab_VKM_BigTable_III and reduce the column data from 8000 bytes to 3000 bytes.

update Tab_VKM_BigTable_III
set col2 = REPLICATE('b',3000)
(1 row(s) affected)

Let's now query sys.partitions and sys.allocation_units to see the effect of the UPDATE:

select OBJECT_NAME(p.object_id) as TableName,
p.partition_id,p.partition_number,p.rows,
au.total_pages as Pages,
au.type_desc as PageType
from sys.partitions p, sys.allocation_units au
where p.partition_id=au.container_id
and p.object_id=OBJECT_ID('Tab_VKM_BigTable_III')

TableName            partition_id         parti_no rows Pages PageType
-------------------- -------------------- -------- ---- ----- -----------------
Tab_VKM_BigTable_III 72057594042449920    1         1   2     IN_ROW_DATA
Tab_VKM_BigTable_III 72057594042449920    1         1   3     ROW_OVERFLOW_DATA

(2 row(s) affected)

Here we can see that one ROW_OVERFLOW_DATA page has been removed, so there are now 3 row-overflow pages: one IAM page and two pages used to store data.

Let's now update col2 and col3 down to 1 byte of data each and check how the data spans pages.

update Tab_VKM_BigTable_III
set col2 = REPLICATE('b',1)
,col3 = REPLICATE('c',1)

Let's query sys.partitions and sys.allocation_units again to see the effect of the UPDATE:

select OBJECT_NAME(p.object_id) as TableName,
p.partition_id,p.partition_number,p.rows,
au.total_pages as Pages,
au.type_desc as PageType
from sys.partitions p, sys.allocation_units au
where p.partition_id=au.container_id
and p.object_id=OBJECT_ID('Tab_VKM_BigTable_III')

TableName            partition_id         parti_no rows Pages PageType
-------------------- -------------------- -------- ---- ----- -----------------
Tab_VKM_BigTable_III 72057594042449920    1         1   2     IN_ROW_DATA
Tab_VKM_BigTable_III 72057594042449920    1         1   2     ROW_OVERFLOW_DATA

Here we can see that one more ROW_OVERFLOW_DATA page has been removed, so there are now 2 row-overflow pages: one IAM page and one page used to store data.

Conclusion: An UPDATE statement on data stored in row-overflow pages can cause the number of row-overflow pages to shrink or grow.

Programmers just want to play with code; this is the highest honor that can ever be paid, and it's mind-blowing.


Post Reference: Vikram Aristocratic Elfin Share

Special Storage Series – II Restricted-Length Object Data (Row Overflow Data) – II.B One row can span many row-overflow pages.



In this article we continue from the previous one and see how a single row can span many row-overflow pages if the table contains many large variable-length fields. Let's take a look at a table whose variable-length columns add up to well over 8 KB.

create table Tab_VKM_BigTable_II
(col1 varchar(3000),
col2 varchar(8000),
col3 varchar(8000),
col4 varchar(8000))
Command(s) completed successfully.

Our table is ready; its maximum record size is greater than 27,000 bytes. Let's insert a row of more than 27 KB.

insert into Tab_VKM_BigTable_II
select REPLICATE('a',3000),REPLICATE('b',8000),
       REPLICATE('c',8000),REPLICATE('d',8000)
(1 row(s) affected)

Now we have a record whose length is more than 27,000 bytes. You can query the table to see it:
select * from Tab_VKM_BigTable_II

Now we will inspect the table's storage details, i.e. in which kinds of pages the table's data is stored, by querying the system catalog views sys.partitions and sys.allocation_units.

sys.partitions holds a row for each partition of the table (a non-partitioned table still gets an entry, with a partition number of 1). It gives the partition id, the partition number, and the number of rows in the table.

sys.allocation_units holds information about how the data is allocated across pages, such as the page type and the number of pages.

select OBJECT_NAME(p.object_id) as TableName,
p.partition_id,p.partition_number,p.rows,
au.total_pages as Pages,
au.type_desc as PageType
from sys.partitions p, sys.allocation_units au
where p.partition_id=au.container_id
and p.object_id=OBJECT_ID('Tab_VKM_BigTable_II')

TableName           partition_id         parti_no rows Pages PageType
------------------- -------------------- -------- ---- ----- -----------------
Tab_VKM_BigTable_II 72057594042449920    1         1   2     IN_ROW_DATA
Tab_VKM_BigTable_II 72057594042449920    1         1   4     ROW_OVERFLOW_DATA

(2 row(s) affected)

From the output we can see there are four pages for row-overflow data: one row-overflow IAM page and three pages for the data that doesn't fit in the regular row. We can run DBCC IND to see the detail of each page.

Here we will be using the TAB_VKM_TablePages table, which we created in our last post (Special Storage Series – II Restricted-Length Object Data (Row Overflow Data) – II.A), to store the result of DBCC IND.

insert into TAB_VKM_TablePages
exec ('dbcc ind (''Sparsh'',''Tab_VKM_BigTable_II'',-1)')
(6 row(s) affected)

Now let's query TAB_VKM_TablePages to see the DBCC IND data for the table Tab_VKM_BigTable_II:

select PageFID,PagePID,OBJECT_NAME(objectId) as [Table],
iam_chain_type,PageType from TAB_VKM_TablePages

PageFID PagePID Table               iam_chain_type     PageType
------- ---------------------------------------------- --------
1       369     Tab_VKM_BigTable_II In-row data        10
1       368     Tab_VKM_BigTable_II In-row data        1
1       357     Tab_VKM_BigTable_II Row-overflow data  10
1       356     Tab_VKM_BigTable_II Row-overflow data  3
1       358     Tab_VKM_BigTable_II Row-overflow data  3
1       359     Tab_VKM_BigTable_II Row-overflow data  3

(6 row(s) affected)

From the output we can observe there are two pages for in-row data: one IAM page and one data page. And there are four pages for row-overflow data: one IAM page and three pages used to store the data.

Conclusion: One row can span many row-overflow pages.

It's fun to be around code that has the wonderful potency to bring life to a tranquil object.



Post Reference: Vikram Aristocratic Elfin Share