Thursday 19 April 2012

When to use SP_UPDATESTATS

 

We often have doubts about when to update statistics: before rebuilding an index or after it. To resolve this question we must first understand what statistics are and why we need them. Once we understand that, the original question almost answers itself. In this article I try to work through it.

The query optimizer uses statistics to create the query plan that improves query performance. It automatically generates the statistics it needs to build a high-quality query plan.

In a few cases, though, we need to create statistics ourselves for better results.

Statistics for query optimization are objects that contain statistical information about the distribution of values in one or more columns of a table or indexed view.

The query optimizer uses these statistics to estimate the cardinality, or number of rows, in the query result. These cardinality estimates enable the query optimizer to create a high-quality query plan. For example, the query optimizer can use a cardinality estimate to choose the index seek operator instead of the index scan operator, and in doing so improve query performance.
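To see this effect yourself, the following is only an illustrative sketch; it assumes a table such as the tbl_stat table created in Step-1 later in this article (a clustered primary key on ID). Compare the operators in the actual execution plan of the two queries.

-- Assumes tbl_stat (created in Step-1 below) with a clustered primary key on ID
SELECT ID, SNAME FROM tbl_stat WHERE ID = 100   -- very selective predicate: expect an index seek
SELECT ID, SNAME FROM tbl_stat WHERE ID > 0     -- matches nearly every row: expect a scan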

We can use the following query to determine whether the query optimizer has created statistics for a query predicate column. It queries the catalog views sys.stats and sys.stats_columns to return the object name, the column name, and the statistics name for the columns that have auto-created single-column statistics.

When the query optimizer creates statistics on single columns as a result of using the AUTO_CREATE_STATISTICS option, the statistics name starts with _WA.

SELECT OBJECT_NAME(s.object_id) AS object_name,
       COL_NAME(sc.object_id, sc.column_id) AS column_name,
       s.name AS statistics_name
FROM   sys.stats s
       INNER JOIN sys.stats_columns sc ON s.stats_id = sc.stats_id
                                          AND s.object_id = sc.object_id
WHERE  s.name LIKE '_WA%'
ORDER BY s.name;
GO
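As a side note, the automatic creation described above depends on the AUTO_CREATE_STATISTICS database option being enabled. The following is only a minimal sketch to check it, and to enable it if required (the database name my_DB is just an example):

-- Check whether AUTO_CREATE_STATISTICS is enabled for a database
SELECT name, is_auto_create_stats_on
FROM   sys.databases
WHERE  name = 'my_DB'          -- example database name

-- Enable it if required
-- ALTER DATABASE my_DB SET AUTO_CREATE_STATISTICS ON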

Now let us get back to the question I raised at the top of this article.

"When we are going to update the Statistics by sp_updatestats, after or before rebuilding the query"

 

To answer this question, let us work through some T-SQL examples.

 

Step-1 [ Create Table with primary key ]

CREATE TABLE tbl_stat
(
  ID             INT,
  SNAME          CHAR(2000),
  CONSTRAINT PK_tbl_stat PRIMARY KEY CLUSTERED (ID)
)

 

Step-2 [ Now insert some records into the table ]

BEGIN
      -- Insert 15,000 sample rows one at a time
      DECLARE @RowID DECIMAL = 0
      WHILE @RowID <> 15000
        BEGIN
             SET @RowID = @RowID + 1
             INSERT INTO tbl_stat(ID, SNAME)
             SELECT @RowID, 'Student-' + CONVERT(VARCHAR(5), @RowID)
        END
END
GO

SELECT * FROM tbl_stat

 

Step-3 [ Now see what the statistics look like for the primary key ]

DBCC SHOW_STATISTICS ('tbl_stat', PK_tbl_stat) WITH STAT_HEADER

Name                 PK_tbl_stat
Updated              NULL
Rows                 NULL
Rows Sampled         NULL
Steps                NULL
Density              NULL
Average key length   NULL
String Index         NULL
Filter Expression    NULL
Unfiltered Rows      NULL

 

Step-4 [Let's rebuild the index to start with ]

ALTER INDEX PK_tbl_stat ON tbl_stat REBUILD

Step-5 [ Now See the Statistics Again ]

DBCC SHOW_STATISTICS ('tbl_stat', PK_tbl_stat) WITH STAT_HEADER

Name                 PK_tbl_stat
Updated              Apr 19 2012 12:09AM
Rows                 15000
Rows Sampled         15000
Steps                3
Density              1
Average key length   4
String Index         NO
Filter Expression    NULL
Unfiltered Rows      15000

 

 

Step-6 [ Conclusion ]

In this example we find that rebuilding the index automatically updates the statistics that belong to that index.

But what about statistics that are not part of an index?
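A quick way to see every statistics object on the table, whether it was auto-created or user-created, and when it was last updated, is to query sys.stats together with the STATS_DATE function. This is only a small sketch using the table from this example:

SELECT s.name                               AS statistics_name,
       s.auto_created,
       s.user_created,
       STATS_DATE(s.object_id, s.stats_id)  AS last_updated
FROM   sys.stats AS s
WHERE  s.object_id = OBJECT_ID('tbl_stat')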

 

Step-7 [ Create Non-Index Statistics ]

CREATE STATISTICS stats_NIndxData ON tbl_stat(SNAME)

Step-8 [ Observation ]

DBCC SHOW_STATISTICS ('tbl_stat', stats_NIndxData) WITH STAT_HEADER

Note the statistics header of the newly created statistics object; it is the baseline for the next check.

 

 

Rebuild the Index Again

 

ALTER INDEX PK_tbl_stat ON tbl_stat REBUILD

 

Now check

DBCC SHOW_STATISTICS ('tbl_stat', stats_NIndxData) WITH STAT_HEADER

No changes found: rebuilding the index did not update the non-index statistics.

 

Step-9 [ Now run the sp_updatestats ]

EXEC sp_updatestats

DBCC SHOW_STATISTICS ('tbl_stat', stats_NIndxData) WITH STAT_HEADER

 

Name                 stats_NIndxData
Updated              Apr 19 2012 12:18AM
Rows                 15000
Rows Sampled         7068
Steps                15
Density              1
Average key length   2000
String Index         YES
Filter Expression    NULL
Unfiltered Rows      15000

 

Step-10 [ Remarks ]

So the answer to the question is: "You must run sp_updatestats after rebuilding the index."
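Note that sp_updatestats works at the database level. If only one table or one statistics object is stale, UPDATE STATISTICS can be used instead; the following is only a minimal sketch using the objects from this example:

-- Update the single statistics object created in Step-7
UPDATE STATISTICS tbl_stat stats_NIndxData WITH FULLSCAN

-- Or update every statistics object on the table
UPDATE STATISTICS tbl_stat WITH FULLSCAN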

 

Hope you like it.

 

Posted by: MR. JOYDEEP DAS

  

 

 

 

 

Wednesday 18 April 2012

Move the User-Defined Database

 

This article describes how to move a user-defined database to a new location using the attach and detach procedures in SQL Server.

Sometimes it is necessary to move the database and transaction log to different locations to improve performance. When the transaction log grows quickly, one strategy is to move it to a separate drive or a separate disk.

One of my readers sent me a note related to this, and I want to share it with all of you.

"Shrinking of a database or log file causes file-fragmentation on the hard disk, this leads to poor performance. Therefore:


1. The transaction log should consist of two files: One which cannot expand itself and has the size you normally need between two full backups (plus some safety).
2. You create a second file which expands automatically.
If a transaction log grows too much only the second file will grow. Next day after full backup you just delete the second file and create a new one."
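For readers who want to try the second-file idea from the note above, the following is only a rough sketch; the logical name, path, and sizes are hypothetical and must be adjusted to your environment:

-- Add a second, auto-growing log file (hypothetical names and path)
ALTER DATABASE MY_DATABASE
ADD LOG FILE
(
    NAME = MY_DATABASE_log_overflow,               -- hypothetical logical name
    FILENAME = 'F:\MyData\MY_DATABASE_log2.ldf',   -- hypothetical path
    SIZE = 100MB,
    FILEGROWTH = 100MB                             -- this is the file that is allowed to grow
)

-- After the next full backup the extra file can be removed again
-- (the file must be empty before it can be removed):
-- ALTER DATABASE MY_DATABASE REMOVE FILE MY_DATABASE_log_overflow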

 

To move an existing database to a new location

Important

Before moving any database we must take a backup of it.

Syntax

sp_detach_db [ @dbname= ] 'database_name'

            [ , [ @skipchecks= ] 'skipchecks' ]

            [ , [ @keepfulltextindexfile = ] 'KeepFulltextIndexFile' ]

 

sp_attach_db [ @dbname = ] 'dbname',
             [ @filename1 = ] 'filename_n' [ ,...16 ]

How to Move

 

Step-1 [ Detach the Database ]

USE MASTER

GO

SP_DETACH_DB 'MY_DATABASE'

GO

 

Step-2 [ Copy the Data File and Log File from Current Location to New Location ]

 

Just copy the files to the new location. For example, copy them from E:\myData\SQLData to F:\MyData\NewLocation.

 

Step-3 [Re-Attach the Database from New Location ]

 

USE MASTER

GO

 

SP_ATTACH_DB 'MY_DATABASE',

             'F:\MYDATA\NEWLOCATION\MY_DATABASE.MDF',

             'F:\MYDATA\NEWLOCATION\MY_DATABASE.LDF'

 

GO

 

Step-4 [Verify the Changes ]

USE MY_DATABASE

GO

SP_HELPFILE

GO
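The recorded file locations can also be verified from the master database; this is only a small sketch:

-- Confirm the new physical paths recorded for the database files
SELECT name, physical_name, type_desc
FROM   sys.master_files
WHERE  database_id = DB_ID('MY_DATABASE')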

 

Hope you like it.

 

Posted by: MR. JOYDEEP DAS

 

sp_tables

 

It returns a list of objects that can appear in the FROM clause. It does not return synonym objects; to view details about synonyms, query the sys.synonyms catalog view.
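For example, a minimal query against that catalog view:

-- Synonyms are not reported by sp_tables; list them separately
SELECT name, base_object_name FROM sys.synonyms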

The syntax of sp_tables is shown below.

sp_tables [ [ @table_name= ] 'name' ]

     [ , [ @table_owner= ] 'owner' ]

     [ , [ @table_qualifier= ] 'qualifier' ]

     [ , [ @table_type= ] "type" ]

     [ , [@fUsePattern = ] 'fUsePattern'];

 

Now let us discuss each of the parameters.

1.    [ @table_name = ]
The table name used to filter the returned catalog information. The data type is nvarchar(384), with a default of NULL. Wildcard characters are supported.

2.    [ @table_owner = ]
The owner of the table used to filter the returned catalog information. The data type is nvarchar(384), with a default of NULL.

3.    [ @table_qualifier = ]
The name of the table qualifier. Some other RDBMS products support three-part naming of the form qualifier.owner.name; in SQL Server the qualifier is the database name.

4.    [ @table_type = ]
A list of values, separated by commas, specifying the object types to return. It can include 'TABLE', 'SYSTEM TABLE', and 'VIEW'. The default is NULL.

5.    [ @fUsePattern = ]
Determines whether the underscore ( _ ), percent ( % ), and bracket ( [ or ] ) characters are interpreted as wildcard characters. Valid values are 0 (pattern matching is off) and 1 (pattern matching is on). fUsePattern is bit, with a default of 1.

Example:

 

USE my_DB;

GO

EXEC sp_tables

     @table_name = '%',

     @table_owner = 'student',

     @table_qualifier = 'my_DB';

GO

 

Result set returned:

 

Column             Data Type       Description
TABLE_QUALIFIER    sysname         In SQL Server this contains the database name.
TABLE_OWNER        sysname         Name of the database owner who created the table.
TABLE_NAME         sysname         The name of the table.
TABLE_TYPE         varchar(32)     TABLE, SYSTEM TABLE, or VIEW.
REMARKS            varchar(254)    No value is returned.

 

 

Hope you like it.

 

 

Posted by: MR. JOYDEEP DAS

 

 

Sunday 15 April 2012

LOG Shipping


In my previous post I gave a short note on Log Shipping, and some of my readers asked me to write about it in more detail. So, in this article I try to summarize the concept of Log Shipping and provide hands-on configuration steps for it. Hope you enjoy this article.

What is that?

Log shipping is the process of automating the backup of a database and its transaction log files on one server and restoring them onto another server. Log shipping is available in the Workgroup, Standard, and Enterprise editions of SQL Server (it is not available in the Express edition).

Benefits of Log Shipping

It gives us a disaster recovery solution for a single primary database and one or more secondary databases, each on a separate instance of SQL Server.
In the interval between restore jobs, the secondary database supports read-only access.
It allows a user-specified delay between when the primary server backs up the log of the primary database and when the secondary server must restore the log backup.

Terminology

Before starting with log shipping we have to understand some terminology, mentioned below.

Primary Server

The instance of the SQL Server that is your production server.

Primary Database

The database on the primary server that you want to back up to another server.

Secondary Server

The instance of SQL Server that holds the standby copy of the primary database (the secondary database). The secondary database may be in either the RECOVERING state or the STANDBY state, which leaves it available for limited read-only access.

Monitor Server

An optional instance of SQL Server that tracks all of the details of log shipping.

Backup Job

The SQL Server Agent job that performs the backup operations.

Copy Job

A SQL Server Agent job that copies the backup files from the primary server to the destination folder on the secondary server, and logs history on the secondary server and the monitor server.

Restore Job

A SQL Server Agent job that restores the copied backup files to the secondary database. It logs history on the local server and the monitor server, and deletes old files and old history information.

Alert Job

A SQL Server Agent job that raises alerts for the primary and secondary databases when a backup or restore operation does not complete successfully within a specified threshold.

Log Shipping Operations

It consists of three operations (a minimal T-SQL sketch of these operations follows the list):
1.     Back up the transaction log at the primary server instance.
2.     Copy the transaction log files to the secondary server instance (there may be more than one).
3.     Restore the log backup on the secondary server instance (there may be more than one).
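The following is only a rough sketch of what the backup and restore jobs effectively do; the share, folder, and file names are hypothetical, and the real jobs are created by the log shipping wizard described later.

-- 1. On the primary: back up the transaction log to the backup share
BACKUP LOG My_DB
TO DISK = '\\BackupHost\LogShipShare\My_DB_001.trn'     -- hypothetical share and file name

-- 2. The copy job then moves the .trn file to a local folder on the secondary
--    (this is a file copy performed by a SQL Server Agent job, not a T-SQL statement).

-- 3. On the secondary: restore the log backup, leaving the database
--    able to accept the next log restore
RESTORE LOG My_DB
FROM DISK = 'F:\LogShipCopy\My_DB_001.trn'              -- hypothetical local path
WITH STANDBY = 'F:\LogShipCopy\My_DB_undo.tuf'          -- or WITH NORECOVERY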

The following figure illustrates the log shipping configuration.



The deployment process of log shipping is outlined below.

We can enable log shipping with the following simple steps.

1.    First choose the servers that will act as the primary, the secondary, and the optional monitor server.
2.    SQL Server 2008 and later versions support backup compression. When configuring log shipping we can control the backup compression behaviour.
3.    Create a file share for the transaction log backups, preferably on a server that is not part of the log shipping configuration. To maximize the availability of the primary server, Microsoft recommends putting the backup share on a separate host computer.
4.    Choose the backup schedule for the primary database server.
5.    Create the folder on the secondary server into which the transaction log backups will be copied.
6.    Configure one or more secondary servers and secondary databases.
7.    Optionally configure the monitor server.
8.    You must be a member of the sysadmin server role on each server instance to enable log shipping (a quick check is shown below).
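A quick way to confirm the sysadmin requirement from item 8; this is only a sketch, to be run on each instance under the login that will configure log shipping:

-- Returns 1 if the current login is a member of the sysadmin fixed server role
SELECT IS_SRVROLEMEMBER('sysadmin') AS is_sysadmin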

Steps to configure Log Shipping
1.    The database must use the Full or Bulk-Logged recovery model.

SELECT name, recovery_model_desc
FROM sys.databases WHERE name = 'My_DB'

USE [master]
GO
ALTER DATABASE [My_DB] SET RECOVERY FULL WITH NO_WAIT
GO

2.    In SQL Server Management Studio, right-click the selected database, choose Properties, and then select the Transaction Log Shipping page. Check "Enable this as a primary database in a log shipping configuration".



3.    The next step is to configure and schedule a transaction log backup by clicking "Backup Settings…".
If you are creating backups on a network share, enter the network path; for the local machine you can specify a local folder path. The backup compression feature was introduced in SQL Server 2008 Enterprise Edition. While configuring log shipping, we can control the backup compression behavior of log backups by specifying the compression option. When this step is completed, the backup job is created on the primary server.




4.     In this step we will configure the secondary instance and database. Click the Add… button to configure the secondary server instance and database. You can add multiple servers if you want to set up one-to-many log shipping.


Initialize Secondary Database tab

In this step you can specify how to create the data on the secondary server. We have three options: have log shipping generate a full backup and restore it, use an existing backup and restore it, or do nothing because you have manually restored the database and put it into the correct state to receive additional log backups (a rough sketch of the third option follows).
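For the third option, this is only a sketch of a manual restore that leaves the secondary database ready to accept further log backups; the database name, logical file names, and paths are hypothetical:

-- Restore the full backup on the secondary WITHOUT recovering it
RESTORE DATABASE My_DB
FROM DISK = 'F:\LogShipCopy\My_DB_full.bak'             -- hypothetical backup file
WITH MOVE 'My_DB'     TO 'F:\SQLData\My_DB.mdf',        -- hypothetical logical/physical names
     MOVE 'My_DB_log' TO 'F:\SQLData\My_DB_log.ldf',
     NORECOVERY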



Copy Files Tab

In this tab you specify the path of the destination shared folder to which the Log Shipping copy job will copy the T-log backup files. This step creates the copy job on the secondary server.


Restore Transaction Log Tab

Here we specify the database restoring state information and the restore schedule. This creates the restore job on the secondary server.



5.     In this step we will configure Log Shipping Monitoring, which will notify us in case of any failure. Please note that the monitoring configuration is optional. Click the Settings… button, which takes you to the "Log Shipping Monitor Settings" screen, and then click the Connect… button to set up a monitor server. Monitoring can be done from the source server, the target server, or a separate SQL Server instance. We can configure alerts on the source/destination server if the respective jobs fail. Lastly, we can also configure how long job history records are retained in the msdb database. Please note that you cannot add a monitor instance once log shipping is configured.

6.    Click OK to finish it.

Hope you like it.




Posted by: MR.JOYDEEP DAS