Friday 31 August 2012

Rebuild Master:

Step 1: Insert your setup media (CD), or locate the folder where you have copied the SQL Server installation files
Step 2: Open a command prompt
Step 3: Change to the setup location from Step 1
Step 4: setup /ACTION=REBUILDDATABASE /QUIET /INSTANCENAME=MSSQLServer /SQLSYSADMINACCOUNTS=MyAdmin /SAPWD=somePwd

Step 5: A new command window will pop up and then disappear once the rebuild finishes

Step 6: Check the setup logs to see whether the rebuild completed successfully or exited with an error

That is it, your system databases have been rebuilt successfully!!
Restoring Master Database - An alternative method:
Step 1: Take a backup of the master database. In this case, I assume you already have your master database backed up.

Step 2: Stop the SQL Server service (you will need Administrator permission for this operation)

Step 3: Start SQL Server with the -c and -m parameters; this starts SQL Server in single-user mode, which is required to restore the master database

Step 4: Restore the master database using SQLCMD, as sketched below
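
A minimal sketch of steps 2 to 4, assuming a default instance (service name MSSQLSERVER) and a master backup at E:\Backups\master.bak; both are placeholders, so adjust them for your environment:

net stop MSSQLSERVER
net start MSSQLSERVER /m
sqlcmd -S . -E -Q "RESTORE DATABASE master FROM DISK = 'E:\Backups\master.bak' WITH REPLACE"
net start MSSQLSERVER

SQL Server shuts itself down automatically as soon as the master restore completes, so the last command simply starts the service normally again.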

That is it, you have successfully restored your master database!!!

Is it this simple? No, wait, there are a few things we need to be careful about....

The above example will save you because the below conditions are true:
  1. You have a backup of your master database, or
  2. This server is a fresh installation, or
  3. There are no user databases, or
  4. This server is not a production box
If the above conditions are not true, you will have to perform the below steps:
  1. Restore / Rebuild system databases
  2. Recreate all the logins that you had on your crashed server - a backup of the master database helps you here
  3. Recreate all the jobs and SSIS/DTS packages that you had - a backup of the MSDB database will save you here
  4. Similarly, a backup of the distribution database will help you if you have replication configured
Conclusion: I hope you have a scheduled maintenance plan / job to take full backups of your system databases, i.e. master, MSDB, and distribution (if you have configured replication).
Disclaimer: The example used here is for illustration purposes only.

Thursday 30 August 2012

Compatibility Level Change
 
The following script runs a CREATE DATABASE statement and then issues sp_dbcmptlevel to set the database compatibility level to 80, as shown below.
CREATE DATABASE [test] ON PRIMARY
( NAME = N'test', FILENAME = N'Z:\SQLData\test.mdf' , SIZE = 2048KB , FILEGROWTH = 1024KB )
LOG ON
( NAME = N'test_log', FILENAME = N'Y:\SQLData\test3_log.ldf' , SIZE = 3072KB , FILEGROWTH = 10%)
GO
EXEC dbo.sp_dbcmptlevel @dbname=N'test', @new_cmptlevel=80
GO

Changing Compatibility Level
Once you have identified the compatibility level of your database and know what you want to change it to, you can use the sp_dbcmptlevel system stored procedure to make the change. The command has the following syntax:
sp_dbcmptlevel [ [ @dbname = ] name ]
[ , [ @new_cmptlevel = ] version ]
--to change to level 80
EXEC dbo.sp_dbcmptlevel @dbname=N'test', @new_cmptlevel=80
--to change to level 90
EXEC dbo.sp_dbcmptlevel @dbname=N'test', @new_cmptlevel=90

--or
EXEC sp_dbcmptlevel 'test', '80'

EXEC sp_dbcmptlevel 'test', '90'
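
To check the current compatibility level before changing it (on SQL Server 2005 and later), you can query sys.databases; a quick sketch:

--check the current compatibility level of the test database
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'test'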

Differences
There are several differences in how compatibility levels affect your database operations. SQL Server Books Online has a list of these differences and the following list shows you a few of these items:
Compatibility level setting of 80 or earlier | Compatibility level setting of 90 | Possibility of impact
For locking hints in the FROM clause, the WITH keyword is always optional. | With some exceptions, table hints are supported in the FROM clause only when the hints are specified with the WITH keyword. For more information, see FROM (Transact-SQL). | High
The *= and =* operators for outer join are supported with a warning message. | These operators are not supported; the OUTER JOIN keyword should be used. | High
SET XACT_ABORT OFF is allowed inside a trigger. | SET XACT_ABORT OFF is not allowed inside a trigger. | Medium
(Source: SQL Server 2005 Books Online) For a complete list of these items, see SQL Server Books Online.
In addition, each new compatibility level offers a new list of reserved keywords.  Here is a list of the new keywords for SQL Server 2005.
Compatibility level setting | Reserved keywords
90 | PIVOT, UNPIVOT, REVERT, TABLESAMPLE
80 | COLLATE, FUNCTION, OPENXML
70 | BACKUP, CONTAINS, CONTAINSTABLE, DENY, FREETEXT, FREETEXTTABLE, PERCENT, RESTORE, ROWGUIDCOL, TOP
65 | AUTHORIZATION, CASCADE, CROSS, DISTRIBUTED, ESCAPE, FULL, INNER, JOIN, LEFT, OUTER, PRIVILEGES, RESTRICT, RIGHT, SCHEMA, WORK
(Source: SQL Server 2005 Books Online)
If one of these keywords is being used as an identifier and your database is set to that compatibility level, the commands will fail. To get around this you can put the keyword in square brackets ([ ]) or quotation marks (" "), such as [PIVOT] or "PIVOT".
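
For example, assuming a hypothetical legacy table with a column named PIVOT, the column can still be referenced at level 90 by delimiting the identifier:

-- dbo.LegacyTable and its PIVOT column are hypothetical
SELECT [PIVOT] FROM dbo.LegacyTable
SELECT "PIVOT" FROM dbo.LegacyTable   -- requires SET QUOTED_IDENTIFIER ON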

T-SQL Code to Backup System Databases

Database administrators can use the below T-SQL code to back up all of the system databases to the SystemDatabaseBackups folder on the E drive. You will need to change this path for your systems.
This is a simple script that includes code for each database to be backed up:
USE master
GO
SELECT GETDATE() AS 'System Database Backup Start Time'
GO
/* Backup Distribution Database */ 
BACKUP DATABASE Distribution 
TO DISK = 'E:\SystemDatabaseBackups\Distribution.BAK' 
WITH INIT
GO
/* Backup ReportServer Database */ 
BACKUP DATABASE ReportServer 
TO DISK = 'E:\SystemDatabaseBackups\ReportServer.BAK' 
WITH INIT
GO
/* Backup ReportServerTempDB Database */ 
BACKUP DATABASE ReportServerTempDB 
TO DISK = 'E:\SystemDatabaseBackups\ReportServerTempDB.BAK' 
WITH INIT
GO
/* Backup Model Database */ 
BACKUP DATABASE Model 
TO DISK = 'E:\SystemDatabaseBackups\Model.BAK' 
WITH INIT
GO
/* Backup Master Database */ 
BACKUP DATABASE Master 
TO DISK = 'E:\SystemDatabaseBackups\Master.BAK' 
WITH INIT
GO
/* Backup MSDB Database */ 
BACKUP DATABASE MSDB 
TO DISK = 'E:\SystemDatabaseBackups\MSDB.BAK' 
WITH INIT
GO
/* Copy Resource Database Files Using XP_CMDSHELL */ 
EXEC xp_cmdshell 'COPY /Y "D:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\Binn\mssqlsystemresource.mdf" "E:\SystemDatabaseBackups"' 
GO
EXEC xp_cmdshell 'COPY /Y "D:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\Binn\mssqlsystemresource.ldf" "E:\SystemDatabaseBackups"'
GO
SELECT GETDATE() AS 'System Database Backup End Time'
GO

Tuesday 28 August 2012

Restore Database From SQL Server 2008 to SQL Server 2005 Part 1 – 3

PART 1:

Problem

When you try to restore or attach a database that was created in SQL Server 2008 to SQL Server 2005 or SQL Server 2000, you will see error messages like the examples below.

Backup and Restore

You have backed up a database from SQL Server 2008. If you try to restore the backup file to SQL Server 2005, you will receive this error message:
An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
Additional information:
-> The media family on device 'the backup file' is incorrectly formed. SQL Server cannot process this media family.
RESTORE HEADERONLY is terminating abnormally. (Microsoft SQL Server, Error: 3241)

Restore Database Error Message

Detach and Attach

You have detached a database from SQL Server 2008. If you try to attach the detached database file to SQL Server 2005, you will receive this error message:
Attach database failed for Server 'SQL Server name'. (Microsoft.SqlServer.Smo)
Additional information:
-> An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
--> The database 'database name' cannot be opened because it is version 655. This server supports version 611 and earlier. A downgrade path is not supported.
Could not open new database 'database name'. CREATE DATABASE is aborted. (Microsoft SQL Server, Error: 948)

Attach Database Error Message

Solution

These problems occur because backup and detached database files are not backward compatible. You cannot restore or attach a database which was created in a higher version of SQL Server to a lower version of SQL Server.
But there are some alternatives that can help you move a database to a lower version of SQL Server. I have divided them into separate parts.
  1. Part 2: Generate SQL Server Scripts Wizard. This solution creates a SQL Server script file using a wizard. Then, you simply execute the script file on SQL Server 2005 or SQL Server 2000, so you get everything on the destination the same as in the source database. But there are some disadvantages:
    • If the source database contains lots of data, you will have a large script file.
    • The generated file is plain text. Anyone who has access to the file can read it, so you should delete the script file after the restoration.
  2. Part 3: Import and Export Wizard. This solution exports data to the targeted SQL Server using a wizard. It is more secure and effective than the first solution, but you can only export tables and views.
————————————————————————————————————————————————————————————————————————————————————————————

PART 2: Generate SQL Server Scripts Wizard

Now let’s see a first solution to solve the problems.
In this post, you will see how to back up the 'Northwind' database by generating a SQL Server script on SQL Server 2008, and then restore the 'Northwind' database by executing the script on SQL Server 2005.

Step-by-step

  1. In Microsoft SQL Server Management Studio, connect to the SQL Server 2008 instance. Right-click the database that you want to back up and select Tasks -> Generate Scripts.
    Generate Scripts
  2. On Welcome to the Generate SQL Server Scripts Wizard, click Next.
    Welcome to the Generate SQL Server Scripts Wizard
  3. On Select Database, select Northwind and check Script all objects in the selected database. Then, click Next.
    Select Database
  4. On Choose Script Options, set Script Database Create to False and Script for Server Version to SQL Server 2005.
    Note: You can set Script Database Create to True if your source and destination for store database files are the same location.
    Script for Server Version to SQL Server 2005
  5. Continuing on Choose Script Options, scroll down and set Script Data to True. Click Next.
    Note: Set this option to True to include the data of each table in the script.
    Script Data to True
  6. On Output Option, select a destination for the output script. Select Script to file and browse to the location that you want. Click Next.
    Select Script Output Destination
  7. On Script Wizard Summary, you can review your selections. Then, click Finish.
    Script Wizard Summary
  8. On Generate Script Progress, the wizard is creating a SQL Server script.
    Generate Script Progress
  9. When the script has completed, you will see an output file similar to the figure below.
    The Generated Script
  10. Connect to SQL Server 2005 and create a new database. Right-click Databases -> New Database.
    Note: If you have set Script Database Create to True on step 4, you don’t have to create a database manually.
    Create New Database
  11. Type ‘Northwind’ as database name. Click OK.
    Create Database Northwind
  12. Execute the SQL Server script file that you have created (for a very large file, see the sqlcmd sketch after this list).
    Execute the script
  13. Now the database ‘Northwind’ is restored on SQL Server 2005.
    Northwind Database
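
If the generated script is too large to open comfortably in Management Studio, you can run it from the command line instead. A minimal sketch, assuming the script was saved as C:\Scripts\Northwind.sql (a placeholder path) and a default SQL Server 2005 instance with Windows authentication:

sqlcmd -S localhost -E -d Northwind -i "C:\Scripts\Northwind.sql"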

———————————————————————————————————————————————————————————-

PART 3: Export Data Wizard

In this post, you will see how to export the tables of the 'Northwind' database from SQL Server 2008 to SQL Server 2005 using the Export Data wizard.

Step-by-step

  1. In Microsoft SQL Server Management Studio, connect to SQL Server 2008. Right-click the database that you want to export data from -> select Tasks -> Export Data.
    Export Data
  2. On Welcome to SQL Server Import and Export Wizard, click Next.
    Welcome to SQL Server Import and Export Wizard
  3. On Choose a Data Source, select the source from which to copy data. Set Data source to SQL Server Native Client 10.0. Verify that Server name is the source of SQL Server 2008 that you want and select Database as ‘Northwind’. Click Next.
    Choose a Data Source
  4. On Choose a Destination, specify where to copy data to. Set Destination to SQL Server Native Client 10.0. Type the Server name of the SQL Server 2005 destination that you want. You can also click Refresh to verify that you can connect to the specified server name. Currently, I don't have a 'Northwind' database on SQL Server 2005, so I will create a new one: click New.
    Choose a Destination
  5. On Create Database, type name as ‘Northwind’ and click OK.
    Create Database
  6. Back on Choose a Destination, I have created the 'Northwind' database, so select it as the Database. Click Next.
    Choose a Destination
  7. On Specify Table Copy or Query, select Copy data from one or more tables or views and click Next.
    Copy data from one or more tables or views
  8. On Select Source Tables and Views, select the tables that you want to export. In this example, I select all tables in the 'Northwind' database.
    Select Sources Tables and Views
  9. On Save and Run Package, click Next.
    Save and Run Package
  10. On Complete the Wizard, you can verify the choices made in the wizard. Then, click Finish.
    Complete the Wizard
  11. Wait until the wizard finishes execution.
    Export Data Successful
  12. Now I have exported tables of ‘Northwind’ database from SQL Server 2008 to SQL Server 2005 successfully.
    The Exported Tables

Max Memory Settings for SQL Server 2005/2008

It is pretty important to make sure you set the Max Server memory setting for SQL Server 2005/2008 to something besides the default setting (which allows SQL Server to use as much memory as it wants, subject to signals from the operating system that it is under memory pressure). This is especially important with larger, busier systems that may be under memory pressure.
This setting controls how much memory can be used by the SQL Server Buffer Pool.  If you don’t set an upper limit for this value, other parts of SQL Server, and the operating system can be starved for memory, which can cause instability and performance problems. It is even more important to set this correctly if you have “Lock Pages in Memory” enabled for the SQL Server service account (which I always do for x64 systems with more than 4GB of memory).
These settings are for x64, on a dedicated database server, only running the DB engine, (which is the ideal situation).
Physical RAM                        MaxServerMem Setting
2GB                                           1500
4GB                                           3200
6GB                                           4800
8GB                                           6400
12GB                                         10000
16GB                                         13500
24GB                                         21500
32GB                                         29000
48GB                                         44000
64GB                                         60000
72GB                                         68000
96GB                                         92000
128GB                                       124000
If you are running other SQL Server components, such as SSIS or Full Text Search, you will want to allocate less memory for the SQL Server Buffer Pool. You also want to pay close attention to how much memory is still available in Task Manager. This is how much RAM should be available in Task Manager while you are under load (on Windows Server 2003):
Physical RAM            Target Avail RAM in Task Manager
< 4GB                               512MB – 1GB
4-32GB                              1GB – 2GB
32-128GB                            2GB – 4GB
> 128GB                              > 4GB
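On SQL Server 2008 and later you can also check available physical memory from a query window instead of Task Manager; a minimal sketch using the sys.dm_os_sys_memory DMV:

--Physical memory visible to the operating system, in MB (SQL Server 2008 and later)
SELECT total_physical_memory_kb / 1024 AS total_physical_memory_mb,
       available_physical_memory_kb / 1024 AS available_physical_memory_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;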
You can use T-SQL to set your MaxServerMemory setting. The sample below sets it to 10000, which is the equivalent of roughly 10GB. This setting is dynamic in SQL Server 2005/2008, which means that you can change it and it goes into effect immediately, without restarting SQL Server.
--Turn on advanced options
EXEC sp_configure 'Show Advanced Options', 1;
GO
RECONFIGURE;
GO
--Set max server memory
EXEC sp_configure 'max server memory (MB)', 10000;
GO
RECONFIGURE;
GO
--See what the current values are
EXEC sp_configure;
You can also change this setting in the SSMS GUI, as you see below:

Wednesday 22 August 2012

SQL Server Silent Installation


Start /wait E:\software\SQL_Server_2005_Express\SQL_2005_Express.exe /qn INSTANCENAME="SQL2K5" ADDLOCAL="All" SQLACCOUNT="ad*\administrator" SQLPASSWORD="S3****" AGTACCOUNT="ads\administrator" AGTPASSWORD="S3*****" SQLBROWSERACCOUNT="ad*\administrator" SQLBROWSERPASSWORD="S3****"

Granting Permissions to SSIS – ‘Access is Denied’ Error

 

When a user without sufficient rights attempts to connect to an instance of Integration Services on a remote server, the server responds with an "Access is denied" error message. You can avoid this error message by ensuring that users have the required DCOM permissions.
  To configure rights for remote users on Windows Server 2003 or Windows XP
  1. If the user is not a member of the local Administrators group, add the user to the Distributed COM Users group. You can do this in the Computer Management MMC snap-in accessed from the Administrative Tools menu.
  2. Open Control Panel, double-click Administrative Tools, and then double-click Component Services to start the Component Services MMC snap-in.
  3. Expand the Component Services node in the left pane of the console. Expand the Computers node, expand My Computer, and then click the DCOM Config node.
  4. Select the DCOM Config node, and then select MsDtsServer in the list of applications that can be configured.
  5. Right-click on MsDtsServer and select Properties.
  6. In the MsDtsServer Properties dialog box, select the Security tab.
  7. Under Launch and Activation Permissions, select Customize, then click Edit to open the Launch Permission dialog box.
  8. In the Launch Permission dialog box, add or delete users, and assign the appropriate permissions to the appropriate users and groups. The available permissions are Local Launch, Remote Launch, Local Activation, and Remote Activation. The Launch rights grant or deny permission to start and stop the service; the Activation rights grant or deny permission to connect to the service.
  9. Click OK to close the dialog box.
  10. Under Access Permissions, repeat steps 7 and 8 to assign the appropriate permissions to the appropriate users and groups.
  11. Close the MMC snap-in.
  12. Restart the Integration Services service.

 

How do I .... grant access permissions for SSIS to users?

 SSIS access permissions are managed via Component Services. It is the DCOM object named MSDTSServer. Here are the steps to follow. 

  1. Component Services -> Computers -> My Computer -> DCOM Config -> MSDTSServer
  2. Right click on MSDTSServer
  3. Click properties
  4. Click on the Security tab
  5. Select Customize and add the users/groups to the Launch/Activation and Access tabs accordingly
  6. Add the user to the local Distributed COM Users group
  7. Restart the SSIS service
  8. Method 2:
  9. Right-click My Computer -> Manage -> Configuration -> Local Users & Groups.
  10. Look for the Distributed COM Users group and add the account.
     

 

Wednesday 8 August 2012

Indexes in SQL Server

By using an index, SQL Server traverses the index tree structure to find the required data and extract the data that satisfies the query criteria.

Indexes: indexes in SQL Server are similar to the index in a book. Indexes are used to improve the performance of queries.
Indexes are generally created for the following columns:
1). Primary key column
2). Foreign key column: Frequently used in join conditions
3). Columns which are frequently used in the WHERE clause
4). Columns which are used to retrieve data in sorted order
Indexes should generally not be created for the following columns:
Columns which are not frequently used in the WHERE clause.
Columns containing many duplicate or NULL values.
Columns containing images, binary information, and text information.
Types of indexes
1). Clustered index
2). Non-clustered index
3). Unique index
4). Composite index
1). Clustered index: Only one clustered index is allowed per table. The physical order of the rows in the table is the same as the order of values in the index. When a clustered index is created on a table, the data is arranged in ascending order of the key, and the clustered index typically adds roughly 5% to the size of the table.
Syntax:
Create clustered index <index_name> on <table_name> (columns)
Create clustered index emp_clindex on emp (empno)
2). Non-clustered index:
This is the default type of index created by the server when no index type is specified. The physical order of the data in the table is different from the order of the values in the index.
The maximum number of nonclustered indexes allowed per table is 249 (in SQL Server 2005 and earlier).
Syntax:
Create nonclustered index <index_name> on <table_name> (columns)
Create nonclustered index emp_sal on emp (deptno, sal)

3). Unique index:
An index with a unique constraint. It will not allow duplicate values.
Syntax:
Create unique index <index name> on <tablename> (column)
Create unique index dept_index on dept (dname)
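
4). Composite index: an index created on two or more columns together; the emp_sal index above is already composite. A small additional sketch, assuming the same hypothetical emp table used in the earlier examples:
Create nonclustered index emp_dept_empno on emp (deptno, empno)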

Tuesday 7 August 2012

------How to track database file growth over a period of time?
USE master
GO

CREATE PROC sp_track_db_growth
(
@dbnameParam sysname = NULL
)
AS
BEGIN

/***********************************************************************************************************
                                                 
Purpose:    To calculate the file growth percentages for a given database and to show you the rate at which
        your databases are growing, so that you can plan ahead for your future storage needs.


Usage:        Run this script in the master database to create the stored procedure. Once it is created,
        you could run it from any of your user databases. If the first parameter (database name) is
        not specified, the procedure will use the current database.

        Example 1:
        To see the file growth information of the current database:

        EXEC sp_track_db_growth

        Example 2:
        To see the file growth information for pubs database:
   
        EXEC sp_track_db_growth 'pubs'

***********************************************************************************************************/

DECLARE @dbname sysname

/* Work with current database if a database name is not specified */

SET @dbname = COALESCE(@dbnameParam, DB_NAME())

SELECT    CONVERT(char, backup_start_date, 111) AS [Date], --yyyy/mm/dd format
    CONVERT(char, backup_start_date, 108) AS [Time],
    @dbname AS [Database Name], [filegroup_name] AS [Filegroup Name], logical_name AS [Logical Filename],
    physical_name AS [Physical Filename], CONVERT(numeric(9,2),file_size/1048576) AS [File Size (MB)],
    Growth AS [Growth Percentage (%)]
FROM
(
    SELECT    b.backup_start_date, a.backup_set_id, a.file_size, a.logical_name, a.[filegroup_name], a.physical_name,
        (
            SELECT    CONVERT(numeric(5,2),((a.file_size * 100.00)/i1.file_size)-100)
            FROM    msdb.dbo.backupfile i1
            WHERE     i1.backup_set_id =
                        (
                            SELECT    MAX(i2.backup_set_id)
                            FROM    msdb.dbo.backupfile i2 JOIN msdb.dbo.backupset i3
                                ON i2.backup_set_id = i3.backup_set_id
                            WHERE    i2.backup_set_id < a.backup_set_id AND
                                i2.file_type='D' AND
                                i3.database_name = @dbname AND
                                i2.logical_name = a.logical_name AND
                                i2.logical_name = i1.logical_name AND
                                i3.type = 'D'
                        ) AND
                i1.file_type = 'D'
        ) AS Growth
    FROM    msdb.dbo.backupfile a JOIN msdb.dbo.backupset b
        ON a.backup_set_id = b.backup_set_id
    WHERE    b.database_name = @dbname AND
        a.file_type = 'D' AND
        b.type = 'D'
       
) as Derived
WHERE (Growth <> 0.0) OR (Growth IS NULL)
ORDER BY logical_name, [Date]

END

----To see the file growth information of the current database:
EXEC sp_track_db_growth
GO
 ----SQL Server records each deleted row in the transaction log. You can query these log records via the fn_dblog SQL Server function.

SELECT [RowLog Contents 0]
FROM   sys.fn_dblog(NULL, NULL)
WHERE 
       AllocUnitName = 'dbo.TableName'       
   AND Context IN ( 'LCX_MARK_AS_GHOST', 'LCX_HEAP' )       
   AND Operation in ( 'LOP_DELETE_ROWS' )  
;

Friday 3 August 2012

Questions and Answers

How to check Database Statistics are optimized for Performance ?

What is Statistics in SQL Server ?
Statistics help the Query Optimizer decide the best way to execute a given query. Statistics determine whether a particular index is chosen or not when building the execution plan.
So if your stats are not updated, the Optimizer might not choose the best plan, which can cause overhead in query execution. So it becomes very important that SQL Server always has up-to-date statistics.

Does that mean a DBA or Developer has to manually create Statistics like indexes ?
The answer is NO. SQL Server has the ability to create and update stats automatically, but a DBA or developer always has the option to create/update stats manually too.
If this is the case, then
How can I tell SQL Server to create and update the Statistics on its own ?
There are two database options that a DBA can set for any given database. You can change the Auto Create and Auto Update Statistics options either by running the following statements or by using SSMS in the database properties, as shown in the picture.
USE [master]
GO
ALTER DATABASE [TEST] SET AUTO_CREATE_STATISTICS ON WITH NO_WAIT
GO
ALTER DATABASE [TEST] SET AUTO_UPDATE_STATISTICS ON WITH NO_WAIT
GO
To open the Database Properties page, connect to the database server, go to Databases, select the user database, and select the Properties option from the menu that appears after right-clicking the database name.


Is it recommended to enable the AUTO Statistics options or not ?
Updating statistics is important, whether you do it manually or you let SQL Server do it on its own. The answer to this question is quite tricky; it all depends on the environment and the changes you are making to the table.
My general recommendation would be:
  1. Have a dedicated schedule to update statistics for highly updated tables
  2. and also enable the AUTO update statistics option.
Now some of my friends say that auto statistics might degrade performance if SQL Server starts updating them during business hours.
To answer this concern, let’s understand,
When does SQL Server update the statistics automatically ?
SQL Server updates the statistics automatically only if more than 20% of the records (rows) have changed since the last statistics update.
To be very precise, this threshold is 20% of total records + 500 rows. For example, if you have one million (1,000,000) rows in a table, then a minimum of 200,500 records (20% of the records + 500 fixed) must have been updated for SQL Server to update the statistics on its own.
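To get a rough idea of how many rows have changed since the statistics were last updated, you can look at the rowmodctr column of the sys.sysindexes compatibility view (dbo.MyTable below is a placeholder table name):
-- approximate number of rows changed since the last statistics update
select name as 'StatsOrIndexName', rowmodctr as 'RowsChangedSinceLastUpdate'
from sys.sysindexes
where id = object_id('dbo.MyTable')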
Where can I check what statistics are residing in my database?
How can I check how many statistics have been created on my database or database objects ?
SQL Server maintains all statistics information in a system catalog view named sys.stats. We can query this catalog directly.
select * from sys.stats
How can I check, when my Statistics were last updated ?
The following T-SQL will help you get the date and time when the statistics were last updated:
-- Check Statistics for a particular Table and a index
dbcc show_statistics ('<User Table Name>', '<Index Name on User Table>')

-- Check Statistics for a particular Table
sp_helpstats <User Table Name>

--- Check Statistics information for a whole user database for all user tables
select a.id as 'ObjectID', object_name(a.id) as 'TableName', isnull(a.name,'Heap') as 'IndexName', b.name as 'TableName',
stats_date (id,indid) as stats_last_updated_time
from sys.sysindexes as a
inner join sys.objects as b
on a.id = b.object_id
where b.type = 'U'
I have got my stats info and my statistics are old, so now
How can I update Statistics manually ?
We can update stats at object level or update stats for the entire database.
-- This will update the statistics for all indexes on the SalesOrderDetail table
USE AdventureWorks2008R2
GO
UPDATE STATISTICS Sales.SalesOrderDetail;
GO

-- This will update the statistics for the AK_SalesOrderDetail_rowguid index of the SalesOrderDetail table
USE AdventureWorks2008R2;
GO
UPDATE STATISTICS Sales.SalesOrderDetail AK_SalesOrderDetail_rowguid;
GO
The following code will update all statistics in the current database
-- sp_updatestats to update all statistics for the database.
EXEC sp_updatestats
 
How can I create Statistics manually  ?
Yes, we can also create statistics manually in SQL Server. The following code creates the NamePurchase statistics for all rows in the BusinessEntityID and EmailPromotion columns of the Person table.
--  create statistics on user table
CREATE STATISTICS NamePurchase
    ON AdventureWorks2008R2.Person.Person (BusinessEntityID, EmailPromotion)
    WITH FULLSCAN;
Is there any limitation on the number of stats which we can create on a particular table ?
Yes, SQL Server can have a maximum of:
  • 2000 Statistics per table in SQL Server 2005
  • 10,000 Statistics per table in SQL Server 2008
  • 30,000 Statistics per table in SQL Server 2008 R2
How to check whether Database Statistics are optimized for Performance or not ?
This is a bit of a tricky question; we need to analyze the execution plan of the query and compare the planned estimates vs. the actual values. To make it easy, just check the two values shown below.
UNOPTIMISED values in an execution plan: the estimated number of rows is 1 and the actual number of rows returned is 1001. There is a great difference between the planned and actual rows, so here we definitely need to dig more into the statistics.
OPTIMISED values in an execution plan: the estimated number of rows is 1.9 and the actual number of rows is 1, so the actual value is close to the estimated value. That means the stats were updated and the Query Optimizer has built a good execution plan.
SQL Server Statistics
What are the best practices related to Statistics in SQL Server?
  • Enable the Auto Create Statistics option for a database
  • Create a schedule to update Statistics
  • Enable the Auto Update Statistics option
  • Manually verify the state of your statistics as shown above.
 ----------------------------------------------------------------/
What is the default port for SQL Server ? OR
On which port do SQL Server Database Services run ? OR
What is the standard / default port for SQL Server ?
The default SQL Server port is 1433. This means that if you install a default instance on your server, this default port will be used by the SQL Server service.
But if you install a SQL Server named instance, a dynamic port will be used by the SQL Server service. This dynamic port is subject to change every time you restart the SQL Server service.

How to check the SQL Server PORT ? OR How to find the SQL Server Port ? OR
How to check which port is being used by the SQL Server service ?
There are various ways to check the SQL Server service's PORT.
OPTION 1
The SQL Server port can be checked via SQL Server Configuration Manager.
START >> Programs >> Microsoft SQL Server >> Configuration Tools >> SQL Server Configuration Manager. Then select SQL Server Network Configuration >>> Protocols for MSSQLSERVER or Protocols for <<NAMED Instance Name>>. Right-click TCP/IP, select Properties, and open the IP Addresses tab, as shown in the image below.


OPTION 2 – SQL Server Error Logs

Check the SQL Server error log, which is located in the install directory of SQL Server; in my case this is located at
“C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log”


You can also open the SQL Server error log in SSMS, like this
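
You can also search the error log directly from a query window using the undocumented xp_readerrorlog procedure; a minimal sketch (the exact parameters of this procedure vary slightly between SQL Server versions):

-- read the current error log (0) of the SQL Server log type (1),
-- filtering for the 'Server is listening on' messages that show the port
EXEC xp_readerrorlog 0, 1, N'Server is listening on'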


OPTION 3 – CHECK Windows EVENT LOGS
START >>> RUN >>> EVENTVWR >>> WINDOWS LOGS >>> Application >>> look for event when the SQL Server has started


Can we change SQL Server Services Default Port ?


YES, we can change the SQL Server port using SQL Server Configuration Manager by specifying the port in the TCP/IP properties.


NOTE: This change requires a restart of the SQL Server service.

What is the SQL Server Browser service and why should we enable it ?

The SQL Server Browser service allows users to connect to SQL Server instances that are not running on port 1433 without knowing the port number.
For example, if you have a SQL Server named instance running with the default configuration, which is a dynamic port, the SQL Server service may choose a different port on every restart. In this case, we can keep the SQL Server Browser service running and connect to the named instance as [<Server Name>\<Instance Name>] without supplying any port information, as the SQL Server Browser service will internally redirect the named instance connection request to the actual port on which the named instance is running.
Exception – When your named instance is running on port 1433, there is no need to supply the instance name as [<ServerName>\<Instance Name>]; instead just supply the [<Server Name>].
Port used by SQL Server Browser Services (sql server browser port) ?
SQL Server Browser service runs on UDP port 1434
Which PORT should be opened in firewall for SQL Server Connectivity ?
We should open the TCP port in the firewall on which the SQL Server service is actually running. That means:
  • open TCP 1433 if SQL Server is installed as a default instance and using the default port, else
  • open the TCP port on which the SQL Server service is actually running (see the earlier question on how to check which port SQL Server is using) - RECOMMENDED, else
  • let the SQL Server Browser service run on the server in automatic mode and open UDP port 1434 in the firewall; this will allow users to connect to SQL Server instances that are not running on port 1433 without knowing the port number
How to specify port number in SQL Server Connection String ?
Data Source=192.168.136.1,1533;Network Library=DBMSSOCN;Initial Catalog=myDataBase;User ID=myUsername;Password=myPassword;
where 1533 is the port for my named instance and DBMSSOCN=TCP/IP (this is how to use TCP/IP instead of Named Pipes). The values in red will change as per your requirement.
Which Port is being used by SQL Server Analysis Services ?
The default instance of Analysis Services listens on TCP port 2383. This means that if you install a default instance of Analysis Services on your server, this default port will be used by SQL Server Analysis Services.
But if you install SQL Server Analysis Services as a named instance, then by default a dynamic port will be used. This dynamic port is subject to change every time you restart the Analysis Services service.
Can I Change SQL Server Analysis Services Port ?
YES, we can change the default port of Analysis Services. Please check my previous post.




Backing Up an Analysis Services Database

How to Backup Analysis Services Database ?
Backing up an Analysis Services database lets administrators save a particular state of a SQL Server Analysis Services database and its objects.
We can take an Analysis Services database backup in various ways:
  1. Manually, graphically, using SQL Server Management Studio
  2. Manually, using an XMLA command / script
  3. Schedule an Analysis Services backup as a SQL Server scheduled job.
Here we are learning option one: manually, graphically, using SQL Server Management Studio.

STEP By STEP procedure for backing up Analysis Services Database.
STEP 1 - Connect to the Analysis Services instance from which you want to take the backup.


STEP 2 - Once you are connected to Analysis Services in SSMS, in Object Explorer, right-click the OLAP database (DBATAG in my case), and then click Back Up.




STEP 3 - In the Backup window, on the General page, choose the Browse option to specify the backup location path.


STEP 4 - In the Save File As box, specify the location where you want to create the backup file. Once specified, click OK. Ensure that the directory exists and that you have write permission on it.


STEP 5 - The backup is in progress; the time taken depends upon the size of the database.


STEP 6 - Once the backup is complete, you can verify the backup file in the specified folder; you will see a file with the extension .abf (Analysis Services backup file).


Note: there is no backup verification option available like there is for the relational engine.

Introduction to Indexes




Good indexes are the key to good performance in SQL Server and the key to creating good indexes is to understand what indexes are and how SQL Server uses them to evaluate queries.
In this first part of a three part series, I’m going to look at the very basics of what indexes are, what types exist in SQL and how they’re used.

What is an index?

An index is a structure within SQL that is used to quickly locate specific rows within a table. It can be useful to imagine the index at the back of a textbook when thinking about SQL indexes. They both serve the same purpose – to find specific information quickly.

General Structure

An index is defined on one or more columns, called key columns. The key columns (also referred to as the index key) can be likened to the terms listed in a book index. They are the values that the index will be used to search for. As with the index found at the back of a text book (see figure 1), the index is sorted by the key columns.
Index sample
Figure 1: Book index.             Image copyright Simple Talk publishing.
If an index is created with more than one key column, it is known as a composite index.
The general structure of an index is that of a balanced tree (b-tree). The index will have a single root page, zero or more intermediate levels and then a leaf level. A page is an 8 kilobyte chunk of the data file, with a header and footer and is identified by a combination of File ID and Page number.
Index Structure
Figure 2: Index Structure
Note: Commonly the root page is shown at the top of the tree diagram and the leaf pages at the bottom. Think of it as an inverted tree.
In the leaf level, there's one entry for each row in the index (1). The entries in the index are ordered logically (2) in the order of the index key.
The non-leaf levels of the index contain one row per page of the level below, referencing the lowest index key value on each page.  If all of those rows fit onto a single page, then that page is considered the root and the index is only two levels deep. If all of those rows will not fit on a single page, then one (or more) intermediate levels are added to the index.
The number of levels in an index is referred to as the depth of the index. This is an important consideration for evaluating the efficiency of the index. The index illustrated in figure 2 has a depth of 3.
(1)   With the exception of SQL 2008’s filtered indexes, an index will have the same number of rows at the leaf level as the table.
(2)   I’m using the phrase ‘logically ordered’ because the index does not necessarily define the physical storage of the rows. The rows are stored in a way that SQL can retrieve them ordered.

Clustered and nonclustered

There are two main types of indexes in SQL Server, the clustered index and the nonclustered index
Clustered indexes define the logical order of the table. The leaf level of the clustered index has the actual data pages of the table. Because of this there can only be one clustered index per table. A table that does not have a clustered index is referred to as a heap.
Nonclustered indexes are separate from the table. The leaf level of a nonclustered index has a pointer as part of each index row. That pointer is either the clustered index key in the cases where the base table has a clustered index or the RID (Row Identifier) in the cases where the table is a heap. The RID is an 8-byte structure comprised of File ID, Page Number and Slot Index and will uniquely identify a row in the underlying heap. Either way, each row of a nonclustered index has a reference to the complete data row.

Index Limits

There are a number of built-in limitations on indexes
Key size
The size of an index key is limited to a maximum of 900 bytes and a maximum of 16 columns. This is definitely a limit, not a goal, as the larger the index key gets, the more pages in the index and the deeper the index tree. As the number of pages and the depth of the tree increases so the index becomes less efficient to use. Larger indexes also use more storage space and result in less efficient use of SQL’s data cache.
Number of indexes
In SQL 2005 and earlier there was a limitation of 250 indexes per table, one clustered and 249 non-clustered. In SQL 2008, with the addition of filtered indexes, that limitation was increased to 1000, one clustered and 999 non-clustered indexes.
Both of these limits are very high and there are few circumstances where a well-designed system should approach that limit.
The reason for this is twofold.
·        As the number of indexes increases so the total size occupied by the table (with all of its indexes) increases. Sure, hard drives are cheap and storage is abundant, but increasing the size of a database has other effects: maintenance operations (backups, restores, consistency checks and index rebuilds) all take longer as the size of a database increases.
·        Indexes have to be kept up to date as data changes and the more indexes there are on a table, the more places the data has to be changed. If there are 10 nonclustered indexes on a table, an insert must be done in 11 places (the table and each of those nonclustered indexes). On databases that are mostly read-only (decision support, data warehouses) that may be acceptable. On databases that have frequent inserts, updates and deletes (OLTP systems), the overhead imposed by multiple indexes may not be acceptable.

How SQL uses indexes

If a table does not have an index, the only way to find all occurrences of a value within the table is to read the entire table. If a table has an index, it speeds up locating values within that index in two ways.
1.      The index is sorted in the order of the key columns. This means that once all the matching values have been found, the remaining portion of the table can be ignored. This is the same as a telephone directory, where once all entries with a particular surname have been found, the rest of the book can be ignored as no further matches are possible
2.      The tree structure of the index allows a divide-and-conquer approach to locating rows, where large portions of the table can be quickly excluded from the search. This is illustrated in Figure 3
There are four basic operations that SQL can do on an index. It can scan the index, it can seek on the index, it can do lookups to the index and it can update the index

Scans

An index scan is a complete read of all of the leaf pages in the index. When an index scan is done on the clustered index, it’s a table scan in all but name.
When an index scan is done by the query processor, it is always a full read of all of the leaf pages in the index, regardless of whether all of the rows are returned. It is never a partial scan.
A scan does not only involve reading the leaf levels of the index, the higher level pages are also read as part of the index scan.

Seeks

An index seek is an operation where SQL uses the b-tree structure to locate either a specific value or the beginning of a range of values. For an index seek to be possible, there must be a SARGable (3) predicate specified in the query and a matching (or partially matching) index. A matching index is one where the query predicate uses a left-based subset of the index columns. This will be examined in much greater detail in part 3 of this series.
The seek operation is evaluated starting at the root page. Using the rows in the root page, the query processor will locate which page in the next lower level of the index contains the 1st row that is being searched for. It will then read that page. If that is the leaf level of the index, the seek ends there. If it is not the leaf then the query processor again identifies which page in the next lower level contains the specified value. This process continues until the leaf level is reached.
Once the query processor has located the leaf page containing either the specified key value or the beginning of the specified range of key values then it reads along the leaf pages until all rows that match the predicate have been returned. Figure 2 shows how a seek would be done on an index when searching for the value 4.
Index Seek
If the index contains all the columns that the query needs, then the index is said to be covering for that query. If the index does not contain all the columns then SQL will do a lookup to the base table to fetch the other columns in order to process the query.
(3)   SARGable is a made-up word, constructed from the phrase Search ARGument. It refers to a predicate that is of a form that SQL can use for an index seek. For more details see: http://www.sql-server-performance.com/tips/t_sql_where_p2.aspx

Lookups

Lookups occur when SQL uses an index to locate rows affected by a query but that index does not contain all the columns required to satisfy the query, that is the index is not covering for that query. To fetch the remaining columns, SQL does a lookup to either the clustered index or heap.
A lookup to a clustered index is essentially a single row clustered index seek and it is always a single row seek. So if a lookup is needed for 500 rows, that involves 500 individual clustered index seeks.

Updates

Anytime that a row is changed, those changes must be made not only in the base table (clustered index or heap) but also in any index that contains the columns that were affected by the change.  This applies to insert, update and delete operations.

Considerations for creating indexes

I’ll be going into more detail on considerations for indexes in the next two parts, but in general:
  • Clustered index should be narrow, because the clustering key is part of all nonclustered indexes.
  • Composite nonclustered indexes are generally more useful than single column indexes, unless all queries against the table filter on one column at a time.
  • Indexes should be no wider than they have to be. Too many columns waste space and increase the number of places that data must be changed when an insert/update/delete occurs.
  • If an index is unique, specify that it is unique. The optimiser can sometimes use that information to generate more optimal execution plans.
  • Be careful of creating lots of indexes on frequently modified tables as it can slow down data modifications.
In part 2 of this series I’ll be looking in more detail into clustered indexes. What they are, how they differ from nonclustered indexes and what the considerations are for creating a clustered index.

Good indexes are the key to good performance in SQL Server and the key to creating good indexes is to understand what indexes are and how SQL Server uses them to evaluate queries.
In the previous article I looked at the very basics of what indexes are, what types exist in SQL and how they’re used. In this article I’m going to take a closer look at clustered indexes, what makes them different from nonclustered indexes, how SQL uses clustered indexes and some of the recommendations for selecting the clustered index key.

What is a clustered index?

A clustered index is an index where the leaf level of the index contains the actual data rows of the table. Like any other index, a clustered index is defined on one or more columns – the index key. The key columns for a clustered index are often referred to as the clustering key.
The clustered index can be looked at as a balanced tree (b-tree) structure built on top of a table, with the rows in the table stored logically in the order of the clustering key (see figure 1). The clustered index essentially is the table and hence there cannot be more than one on a table.
Architecture of a Clustered Index
Figure 1: Architecture of a clustered index
The clustering key is, in some ways, the ‘address’ of a data row. It can be used, through the tree structure of the clustered index, to locate a row of data. Because it is used to locate a data row, the clustering key must be unique. If, when the clustered index is created it is defined as unique (either by specifying the UNIQUE keyword as part of the create index statement or by defining the clustered index as part of a primary key or unique constraint), then everything is fine. What if the clustered index is not defined as unique? In that case, SQL automatically adds another column to the index, a 4-byte integer column that’s called the Uniquifier. This column is hidden, it is not displayed in the table design and it cannot be queried directly.

Difference between a clustered index and a heap

A table that has no clustered index is referred to as a heap. Whereas a clustered index has the data stored logically in the order of the index key, a heap has no ordering of rows or pages.
When a row is added to a table with a clustered index, that row has a defined page that it must go on to, according to the value of the clustered index key. Using the index depicted in Figure 1 as an example, if a new row is added with a value of 4 for the clustered index key, that row must go onto the second leaf page. If that page is full, a new page gets allocated, joined into the index and some of the rows from the full page are moved to the new one. This is called a page split, it can be an expensive operation and it leads to fragmentation.
When a row is added to a table without a clustered index (a heap) there’s no specific place that the row has to go. A heap has no defined order. The row will be added wherever there’s a space large enough to put the row.

How a clustered index is used

There are three ways that a clustered index can be used by a query to locate rows – seek, scan and lookup.

Seek

For the clustered index to be used in a seek operation, the query must contain a SARGable (1) predicate based on the clustering key or part of the clustering key.
A simple example of a clustered index seek is as follows (based on the AdventureWorks database)
 SELECT ProductID, ProductNumber, Name
  FROM Production.Product
  WHERE ProductID = 380
ProductID is the clustering key for the table Production.Product and the filter on ProductID is SARGable.
Execution Plan
(1)   SARGable is a made-up word, constructed from the phrase Search ARGument. It refers to a predicate that is of a form that SQL can use for an index seek. For more details see: http://www.sql-server-performance.com/tips/t_sql_where_p2.aspx

Scan

A scan of the clustered index is equivalent to a table scan. It’s a read of all of the data pages in the table
Because the clustered index is the table, it can be seen as the default access path for queries to retrieve rows. If a query does not have a SARGable predicate (1) based on the clustering key and there are no non-clustered indexes which are appropriate for a query (and what makes a non-clustered index appropriate will be discussed in detail in part 3), then a scan of the clustered index will be done to retrieve the data for the query. This is a full scan of all the data pages, i.e. a table scan.
In this example a clustered index scan is done because there are no indexes on either of the columns in the where clause.
 SELECT ProductID, ProductNumber, Name
  FROM Production.Product
  WHERE Color = 'Blue' AND DaysToManufacture BETWEEN 2 AND 4
Execution Plan
SARGable is a made-up word, constructed from the phrase Search ARGument. It refers to a predicate that is of a form that SQL can use for an index seek. For more details see: http://www.sql-server-performance.com/tips/t_sql_where_p2.aspx

Lookups

A lookup occurs when a nonclustered index was used to locate rows for a query, but the nonclustered index did not contain all of the columns required for the query. To fetch the remaining columns, a lookup is done to the clustered index.
A lookup is equivalent to a single row clustered index seek. Lookups are always done one row at a time. For this reason, they are very expensive operations, especially when lots of rows are involved.
SELECT ProductID, ProductNumber, Name
  FROM production.Product
  WHERE ProductNumber = 'HN-1224'
Execution Plan

Considerations for selecting the clustering key


There are two main schools of thought on what makes a good clustering key. One school says to put the clustered index on the column or set of columns that would be most useful for queries, either frequently run queries or ones doing queries of large ranges of data. The other school says to use the clustered index primarily to organise the table and leave data access to the nonclustered indexes.
I hold to the second school, so the rest of this article will be written from that perspective.
There are four main attributes that are desirable for a clustering key. The clustering key should be:
  • Narrow
  • Unique
  • Unchanging
  • Ever increasing

Narrow

The width of the clustering key affects the depth of the clustered index and hence its efficiency. A clustered index with a very deep b-tree requires queries to read more intermediate pages to locate rows. These increased page reads make deeper clustered indexes less efficient for data access, especially for lookups. Additionally, more pages mean more space used on disk and more space used in memory when the index is in the data cache.
The width of the clustering key does not, however, only affect the clustered index. The clustering key, being the rows’ address, is located in every single nonclustered index. Hence a wide clustering key increases the size of all nonclustered indexes, reducing their efficiency as well.

Unique

The clustered index has to be unique. It’s used as the address for a row. If the clustered index is not defined as unique, SQL makes it unique by adding a hidden 4 byte integer column. This makes the index wider than it needs to be and makes the nonclustered indexes wider than they should be.
This attribute needs to be considered in relation to the others. If making the cluster unique requires adding several wide columns then it may be better to keep the index narrower and leave it as not unique.

Unchanging

The clustering key defines where a row is found, which page it is in. If the value of the clustering key is changed, the row must be moved from the page where it currently is to a new location. This means that the update is essentially a delete-insert combination.

Ever-increasing

An ever-increasing column is one where every value inserted is higher than all values currently in the table. This is important for a clustered index as, if inserts occurred all over the table, there would be a high number of page splits and the index would become rapidly fragmented.
Since the clustered index is the largest index on a table (as it contains the entire data row) and because it tends to be used for scans more than other indexes, the clustered index is the one that is affected the most when fragmented and the one that, generally, is the most expensive to rebuild.
For this reason, an ever-increasing clustering key is usually a good choice, especially for tables that are subject to frequent inserts.

Part 3

What is a nonclustered index?

A nonclustered index is the second type of index in SQL Server. They have the same b-tree structure as the clustered index (see the previous article). Unlike the clustered index, nonclustered indexes do not contain the entire data row at the leaf level. Rather, the nonclustered index contains just the columns defined in the index, and a pointer to the actual data row. See figure 1 for a high-level architecture. Assume that the nonclustered index depicted there is based off the clustered index depicted in the previous part.
Nonclustered index image.
Figure 1: Nonclustered index structure
When the underlying table is a heap (has no clustered index) then the pointer to the data row is the RID, an 8 byte structure comprised of the File ID, Page No and Slot index, i.e. the actual location of the row within the data file.
When the underlying table has a clustered index, the pointer to the actual data row is the clustering key. As mentioned in the previous article, this has important considerations for the selection of a clustering key.
There can be multiple nonclustered indexes on a table. In SQL 2005 and earlier, there was a limit of 249 nonclustered indexes per table. In SQL 2008, with the introduction of filtered indexes, the limit on indexes has been increased to 999.

Include columns

Columns specified as include columns are stored at the leaf level of the nonclustered index, but not at the intermediate or root levels. They are not part of the index key and do not count towards the 16 column/900 byte limit on indexes. Any data type other than the old LOB types (TEXT, NTEXT, IMAGE) is allowed for include columns, although columns defined as varbinary(MAX) Filestream are not permitted as include columns. The ability to specify include columns was added in SQL 2005.
Include columns are useful in that they are available in the index without contributing to the size of the index key; hence they make it easier to create covering indexes (see the next section) than would otherwise be possible.
Typically columns that appear only in the select clause of a query and not within the WHERE, FROM or GROUP BY are candidates for INCLUDE columns.
If LOB columns are specified as include columns, the entire contents of those columns are duplicated into the nonclustered index; it is not just a pointer to the existing LOB pages that is added.
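A minimal sketch of an index with include columns (names hypothetical):

-- OrderDate is the index key; CustomerID and TotalDue are stored only at the
-- leaf level and do not count towards the 16 column/900 byte key limit.
CREATE NONCLUSTERED INDEX idx_Orders_OrderDate
    ON dbo.Orders (OrderDate)
    INCLUDE (CustomerID, TotalDue);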

Covering indexes

Covering is not a property of an index on its own; it is not possible to say, without additional information, whether an index is covering or not. Instead, covering describes an index and a query together.
An index is said to be covering for a specific query if that index contains within it all of the columns necessary for the query. The columns may be part of the key or they may be include columns. This means that a query that has a covering index can be evaluated completely from that index, without needing to go to the base table (heap or cluster) to fetch additional columns.
It has been said that creating a covering index for a query is just about the best way to speed that query up, and this is true. However, it is not always possible or desirable to cover every query. Covering every query may lead to unacceptably large indexes, or an unacceptably large number of indexes, with all the attendant problems (as mentioned in part 1).
It is possible (and often desirable) for an index to be covering for more than one query.
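As an illustrative sketch (table, index and column names are hypothetical), the index below is covering for the query beneath it, because every column the query references is either a key column or an include column:

CREATE NONCLUSTERED INDEX idx_Orders_CustomerID_Covering
    ON dbo.Orders (CustomerID)
    INCLUDE (OrderDate, TotalDue);

-- This query can be answered entirely from the index; no key lookup to the
-- clustered index or heap is needed.
SELECT OrderDate, TotalDue
FROM dbo.Orders
WHERE CustomerID = 42;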

Filtered indexes

Filtered indexes are a new feature in SQL 2008 and they allow for an index to contain only some of the rows in the table. Prior to this, an index would always have, at the leaf level, the same number of rows as the table. With filtered indexes, an index can be based on just a subset of the rows.
When creating a filtered index, a predicate is specified as part of the index creation statement. There are several limitations on this predicate. The comparison cannot reference a computed column, a column that is declared as a user-defined type, a spatial column or a hierarchyid column.
A clustered index cannot be filtered as it is the actual table.
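A minimal sketch of a filtered index (names hypothetical); only the unshipped orders are present in the index, so it stays far smaller than a full index on the same column:

CREATE NONCLUSTERED INDEX idx_Orders_Unshipped
    ON dbo.Orders (OrderDate)
    WHERE ShippedDate IS NULL;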

How a nonclustered index is used

Seek

For SQL Server to do a seek on a nonclustered index, the query must have a SARGable1 predicate referencing the index key, or a left-based subset of the index key. In addition, the index must either be covering or return a sufficiently small number of rows that the lookups to the clustered index/heap (to retrieve the remaining columns) are not considered too expensive by the optimiser. If the required lookups are considered too expensive, the index will not be used.
It usually works out that the number of rows where the optimiser decides that key/RID lookups are too expensive is somewhere around 0.5%-1% of the total rows in the table.
(1) - SARGable is a made-up word, constructed from the phrase Search ARGument. It refers to a predicate that is of a form that SQL can use for an index seek. For more details see: http://www.sql-server-performance.com/tips/t_sql_where_p2.aspx
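As an illustrative sketch (column names hypothetical), the first query below has a SARGable predicate and can seek on an index keyed on OrderDate; the second applies a function to the column, which prevents a seek:

-- SARGable: the column is compared directly, so an index seek is possible.
SELECT OrderID
FROM dbo.Orders
WHERE OrderDate >= '20120101' AND OrderDate < '20130101';

-- Not SARGable: the function on the column prevents a seek, forcing a scan.
SELECT OrderID
FROM dbo.Orders
WHERE YEAR(OrderDate) = 2012;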

Scan

An index scan is a read of all of the leaf pages in the nonclustered index. A scan of a nonclustered index is generally more efficient than a scan of the clustered index, because nonclustered indexes are generally smaller.
A nonclustered index scan usually indicates that the index contains all the columns required by the query but they are in the wrong order, the query predicates are not SARGable or there are no query predicates.

Update

When a change is made to a column that is either a key column or an include column of a nonclustered index, that index is modified as part of the update statement. Indexes are never left out of sync with the underlying table data.
Inserts and deletes will affect all nonclustered indexes on a table.

Considerations for selecting nonclustered indexes

The decision as to what columns should be indexed should be based on the queries that are run against the table. There’s no point in indexing a column that is never used in a query.

Selectivity

In general, a nonclustered index should be selective. That is, the values in the column should be fairly unique and queries that filter on it should return small portions of the table.
The reason for this is that key/RID lookups are expensive operations and if a nonclustered index is to be used to evaluate a query it needs to be covering or sufficiently selective that the costs of the lookups aren’t deemed to be too high.
If SQL considers the index (or the subset of the index keys that the query would be seeking on) insufficiently selective, it is very likely that the index will be ignored and the query executed as a clustered index (table) scan.
It is important to note that this does not just apply to the leading column. There are scenarios where a very unselective column can be used as the leading column, with the other columns in the index making it selective enough to be used.
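One rough way to gauge how selective a column is (names hypothetical) is to compare the number of distinct values to the total number of rows; values close to 1 indicate a highly selective column, values close to 0 a poorly selective one:

SELECT COUNT(DISTINCT Col1) * 1.0 / COUNT(*) AS Selectivity
FROM Table1;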

Single column vs. Multi-column indexes

In general, wider nonclustered indexes are more useful than single column nonclustered indexes. This is because it is very unusual for SQL to use multiple nonclustered indexes on the same table to evaluate a query. An index defined over more than one column is referred to as a composite index.
Consider a hypothetical table (Table1) that has three indexes on it: one on Col1, one on Col2 and one on Col3. If a query with predicates on all three of those columns is executed against that table, SQL might seek on all three indexes and intersect the results; it might seek on two, intersect the results, then do lookups for the column needed by the last filter and apply that filter; it might seek on one, do lookups for the other two columns and then filter on them; or it might simply scan the cluster/heap. All of these are possible options, but none of them is optimal.
If there was a single index defined with all three columns, the matching rows could be located as a single seek operation.
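A minimal sketch of the two alternatives (all names hypothetical); the composite index allows the three predicates to be resolved with a single seek:

-- Three separate single-column indexes
CREATE INDEX idx_Table1_Col1 ON Table1 (Col1);
CREATE INDEX idx_Table1_Col2 ON Table1 (Col2);
CREATE INDEX idx_Table1_Col3 ON Table1 (Col3);

-- One composite index over the same three columns
CREATE INDEX idx_Table1_Col1Col2Col3 ON Table1 (Col1, Col2, Col3);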
Figure 2: Three separate indexes
Figure 3: One composite index
For a more detailed analysis of single vs. composite indexes, see http://sqlinthewild.co.za/index.php/2010/09/14/one-wide-index-or-multiple-narrow-indexes/

Column order

The order of the columns in an index key should be chosen based on several factors: the selectivity of the columns, what % of queries will filter on that column and whether the queries will filter with equality or inequality matches.
If the query predicate is based on multiple columns and there is an index where that combination of columns forms a left-based subset of the index key, then retrieving the rows is a single seek operation. If the query predicate is based on multiple columns and there is an index that contains those columns in its key, but they do not all form a left-based subset of the key, then retrieving those rows is a two-step process: seek on the columns that do form a left-based subset, then filter out the rows that don't match the other conditions.
Let’s look at a quick example.
Imagine we have a hypothetical table named Table1 and that there is an index on that table with the index key consisting of three columns – Col1, Col2 and Col3.
 CREATE INDEX idx_Table1_Demo
      ON Table1 (Col1, Col2, Col3)

Let’s say further that there are two queries against that table.
SELECT Col1 FROM Table1 WHERE Col1 = @Var1 AND Col2 = @Var2
This query can be evaluated by a single seek operation against the two columns. 
SELECT Col1 FROM Table1 WHERE Col1 = @Var1 AND Col3 = @Var3
This query cannot be evaluated by a single seek operation against the two columns. Rather, it has to be done as a seek on just one column, Col1, and the rows returned from that seek are then checked to see whether or not they match the second predicate.
The other thing that has to be considered is the type of predicate that the queries will use. In general, columns filtered with equality predicates are better placed ahead of columns filtered with inequality (range) predicates, because a seek can make full use of the key columns only up to and including the first inequality; key columns after that point have to be applied as residual filters.
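As a hedged illustration (names hypothetical): with the key ordered (CustomerID, OrderDate), the equality on CustomerID and the range on OrderDate in the query below can both be used by the seek; if the key were ordered (OrderDate, CustomerID), the range would come first and the CustomerID predicate would have to be applied as a residual filter:

CREATE INDEX idx_Orders_Customer_Date
    ON dbo.Orders (CustomerID, OrderDate);

SELECT OrderID
FROM dbo.Orders
WHERE CustomerID = 42
  AND OrderDate >= '20120101' AND OrderDate < '20120201';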
It is often said that the most selective column should be made the leading column of an index. This is true, but it must be considered in light of the other factors. There is little point in making a column the leading column of an index if only 5% of the queries that use the index filter on that column: only those queries could seek on the index, and the other 95% would either scan or use a different index (if one existed).

In conclusion

This concludes the introductory coverage of indexes. I hope that this has been of use. For further and deeper information there are a few posts on my blog; also see the Inside SQL Server 2005 books, specifically the first book (The Storage Engine) and the fourth book (Query Tuning and Optimization), and SQL Server 2008 Query Performance Tuning Distilled.