Tuesday 21 May 2013

Memory Management in SQL Server

Introduction: People often ask me how SQL Server manages memory. With earlier versions of SQL Server the answer was difficult to explain; with SQL Server 2005 it has become much simpler to explain and understand. By default, SQL Server manages memory dynamically and negotiates with the operating system for the memory it needs. Let us look into the depth of SQL Server memory management. Before moving into it, let us look at the important terms associated with SQL Server memory.
 
BUFFER CACHE: Caching is the process of storing data read from disk in memory. SQL Server keeps the data it has read from disk in the buffer cache so that the next time the same data is requested it can be served from memory instead of costing an IO cycle. When the requested data is found in the cache it is a CACHE HIT; when it is not, it is a CACHE MISS. The BUFFER CACHE HIT ratio is calculated from these two counts.

FREE LIST: SQL Server internally maintains a data structure called the free list, which holds the addresses of all pages in the buffer cache that are not currently allocated to data or index pages. When SQL Server reads data into memory, the data is placed on the first free page available in the list. Each page contains a page header that stores information about the page and its internals.
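As an aside, the buffer cache hit ratio mentioned above can be read from the performance counter DMV (SQL Server 2005 and later); a minimal sketch:

SELECT (a.cntr_value * 100.0 / b.cntr_value) AS [Buffer Cache Hit Ratio %]
FROM sys.dm_os_performance_counters a
JOIN sys.dm_os_performance_counters b
    ON a.object_name = b.object_name
WHERE a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base'
  AND a.object_name LIKE '%Buffer Manager%'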
CHECKPOINT PROCESS: This process finds the modified (dirty) pages in the buffer cache and writes them to disk so that reading dirty pages can be avoided. The checkpoint process is governed by two parameters:
  1. Checkpoint interval
  2. Recovery interval
While the checkpoint interval directly controls the time between two checkpoints, the recovery interval determines the checkpoint frequency on the basis of how quickly the database should be recovered after a system restart or failure. SQL Server dynamically calculates the checkpoint interval from the recovery interval parameter. Note that a very low checkpoint interval hurts server performance by causing slowdowns, while a very high value can lead to long recovery times after a failure. The following events happen when a checkpoint occurs:
  a. All dirty pages in the buffer cache are written to disk
  b. All active transactions are written to the transaction log
  c. Checkpoint records are stored in the log
Checkpoints normally occur on the following events in the database:
  a. When a CHECKPOINT command is issued manually
  b. When the server is shut down
  c. When you use the ALTER DATABASE command to modify the data files
  d. When you issue a backup command
  e. When the recovery model is changed
  f. When you use BULK COPY
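To illustrate, a checkpoint can be forced manually, and the recovery interval is an advanced sp_configure option (0 means SQL Server chooses automatically); a minimal sketch:

CHECKPOINT
GO
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'recovery interval (min)', 5   -- minutes; 0 = automatic
RECONFIGURE
GO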
 
LAZY WRITER PROCESS: The lazy writer process also writes modified pages to disk, like the checkpoint process, but in addition it monitors the amount of free space in the buffer cache. If the free space falls below the required threshold, the lazy writer evicts the least recently used pages so that the free list in the buffer cache always stays above the required level. Now that we know these important terms about SQL Server memory internals, let us look at how SQL Server manages memory within itself. With SQL Server 2005, memory allocation is dynamic even when AWE is enabled. The only exception is SQL Server 2005 on a 32-bit operating system with AWE enabled, which uses static memory allocation: SQL Server grabs the memory and does not release it to other processes. By default SQL Server manages memory dynamically, allocating only the necessary memory at startup and acquiring more as and when needed. When it no longer needs the extra memory, SQL Server deallocates it so it can be used by other applications. This dynamic behavior can be overridden by configuring SQL Server with fixed values so that it uses only that much memory.
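The checkpoint and lazy writer activity described above can be observed through the Buffer Manager performance counters; a minimal sketch:

SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Lazy writes/sec', 'Checkpoint pages/sec', 'Page life expectancy')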
 
Setting MIN and MAX memory for SQL Server: You need to use the sp_configure procedure to configure memory for SQL Server. Since these configuration parameters are advanced, you first need to enable advanced options using
 
sp_configure 'show advanced options', 1
RECONFIGURE
GO
Then you can set the minimum and maximum memory using
sp_configure 'min server memory (MB)', <value in MB>
RECONFIGURE
GO
sp_configure 'max server memory (MB)', <value in MB>
RECONFIGURE
GO
To enable the AWE option, use
sp_configure 'awe enabled', 1
RECONFIGURE
GO
This setting requires SQL Server to be restarted to take effect. Microsoft recommends allowing SQL Server to handle memory dynamically, but these settings can be useful when you run multiple instances on a single box and want to allocate fixed memory to each instance. The table below shows the amount of memory each edition of SQL Server can use.
SQL Server Edition    Maximum supported (32-bit)    Maximum supported (64-bit)
Express               1 GB                          Not applicable
Workgroup             3 GB                          Not applicable
Standard              OS maximum                    32 TB
Developer             OS maximum                    32 TB
Enterprise            OS maximum                    OS maximum
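You can verify the memory-related values currently in effect by querying sys.configurations (available from SQL Server 2005 onwards); a minimal sketch:

SELECT name, value, value_in_use
FROM sys.configurations
WHERE name IN ('min server memory (MB)', 'max server memory (MB)', 'awe enabled')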
If your server has more than 4 GB of RAM but SQL Server uses only a little of it, check whether the /3GB switch has been added to the boot.ini file of the operating system, and whether AWE is enabled in SQL Server. Hope you have had a happy read about SQL Server memory management.

Read more: http://sql-articles.com/articles/performance-tunning/memory-management-in-sql-server/

Indexes

Indexes are used to speed up data access in the database. By using them one can quickly find data in a table without having to read all of its rows. The index structure resembles an inverted tree, similar to a directory structure. The tree begins with the first page of the index, the root node, which contains pointers to other pages in the index. Next come the intermediate or branch nodes, which contain pointers to leaf nodes or to other branch nodes. The leaf node is the lowest-level page in an index, and it contains either a row identifier (RowId) that points to the actual data in the table, or the clustering key itself.

[Figure Indexes_01: a B-Tree (Balanced Tree) for an index over 1000 records, showing how they are separated into root node, branch nodes, and leaf nodes.]

Consider the above B-Tree structure, and suppose we are searching for a field with the value 978. First the database engine checks the root node covering 1 to 1000. Since the value is 978, it follows the branch node for 501 to 1000 and navigates further to reach 751 to 1000. Finally it navigates through the leaf node for 876 to 1000 and finds the desired record, 978. By using the index, only a few reads were needed; compared to scanning all 1000 records to fetch the result, the index has reduced the IO to a large extent.

Clustered Index: A clustered index stores the table data in the leaf pages of the index, ordered by the clustering key. Because the clustered index stores the data in sorted order, there is no need to rearrange pages in the index. There can be only one clustered index per table, since the data can be physically sorted only one way. Consider the analogy of a dictionary, where the words are sorted alphabetically and the definitions appear next to the words: similarly, in a clustered index (CI) the leaf pages contain the entire data/records in sorted order.

NonClustered Index: A nonclustered index is analogous to the index in the back of a book: we use the book's index to locate pages that match an index entry. There can be 249 nonclustered indexes (NCI) and 1 clustered index on any table. If a table does not have a clustered index, it is unsorted and is called a HEAP. An NCI created on a heap contains pointers to table rows: each entry in the index page contains a row ID (RID), which points to a table row in the heap. If the table has a clustered index, the index pages of an NCI contain the CI keys rather than RIDs. Following an index pointer, whether it is a RID or a CI key, is called a lookup.

Scan and Seek: The following operators are related to indexes; they appear in SQL Server query execution plans.

Table scan: A table scan reads the entire data in a table and returns either the whole table or specific records. This is bad for performance: if the table has numerous records, a table scan will hurt performance severely. In some cases, if there are few records, a table scan is fine. So if you see that SQL Server has performed a table scan, take note of how many rows are in the table; if there aren't many, the table scan may actually be a good thing. In the query below on the AdventureWorks database we don't have an index on the customerid field, so it results in a table scan. Also, when a SELECT * is done on a table it fetches the entire result set and hence does a table scan.
SELECT * FROM sales.storecontact
SELECT * FROM sales.storecontact WHERE customerid=322
[Figure Indexes_02: execution plan showing a table scan.]

Clustered index scan: This is nothing but a table scan on a table that has a clustered index. Since the clustered index leaf pages contain the data itself, a clustered index scan reads all the records, which hurts performance. But, as mentioned earlier, if there are only a few records it doesn't matter much. For the query below the optimizer does a clustered index scan to retrieve the records:
SELECT * FROM sales.storecontact WHERE contacttypeid=15
[Figure Indexes_03: execution plan showing a clustered index scan.]

Clustered index seek: A seek retrieves only selected records from a table, whereas a scan traverses all the records. If the seek operation is done using a clustered index, it is a clustered index seek. Basically, any seek operation traverses the B-Tree vertically to fetch the records. Consider the query below, for which I created a clustered index on the customerid field; the optimizer uses the clustered index and performs a clustered index seek:
SELECT * FROM sales.storecontact WHERE customerid=322
[Figure Indexes_04: execution plan showing a clustered index seek.]

The clustered index seek traverses only the records where customerid=322 and fetches the output. Compared to a table scan, which traverses all the records, an index seek reads the required records quickly and is good for performance.

Index scan: Since a scan touches every row in the table whether or not it qualifies, its cost is proportional to the total number of rows in the table. An index scan is a scan on a nonclustered index: all the rows in its leaf level are scanned.

Index seek: An index seek uses a nonclustered index to seek the records in a table and is considered better for performance when the selectivity is high. For the query below, the optimizer does an index seek using the NC index on the contactid field. Since the NC index covers only contactid, it cannot satisfy the whole query with the index seek alone. So it uses the seek to find the records with contactid=322 and then does a key lookup using the clustered index key to fetch the remaining columns:
SELECT * FROM sales.storecontact WHERE contactid=322
[Figure Indexes_05: execution plan showing an index seek with a key lookup.]

The key lookup is an expensive operation when there are numerous records: since key lookups increase the IO, we may have to avoid them in some cases. An index with included columns can help overcome this situation by covering the entire query, in turn eliminating the lookup.
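For instance, a covering nonclustered index along these lines (the index name is illustrative, and the INCLUDE list must name every column the query actually selects) would let the query above be answered from the index alone:

CREATE NONCLUSTERED INDEX IX_storecontact_contactid   -- hypothetical name
ON sales.storecontact (contactid)
INCLUDE (customerid, contacttypeid)                   -- list the remaining selected columns here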

Query tuning steps

Most DBAs new to query tuning wonder where to start. I hope this article will guide them in understanding the steps to begin with.

Step 1: Run the query in Management Studio and view the actual execution plan. To include the actual execution plan, press Ctrl+M and then execute the query in SSMS.

Step 2: Check whether any table scans, clustered index scans, or index scans are involved in the execution plan. If yes, analyze those tables thoroughly in the plan.

Step 3: Identify the actual number of rows in each table involved in a scan. If the table is slightly larger, i.e. greater than 2000 rows, I suggest you check whether there are proper indexes on the table. If the table has fewer than 2000 records, a table scan wouldn't be a problem, and I would rather prefer a table scan on those tables.

Step 4: If there is already an index, you have to analyze why the optimizer preferred a clustered index scan or an index scan rather than a seek. The reason may be fragmentation, outdated statistics, low selectivity, or the query cost.

Step 5: The following query gives the exact percentage of fragmentation in the indexes of a particular table. The query below displays the fragmentation status of the Person.Address table in the AdventureWorks database:

SELECT CAST(DB_NAME(database_id) AS VARCHAR(20)) AS [DATABASE Name],
CAST(OBJECT_NAME(OBJECT_ID) AS VARCHAR(20)) AS [TABLE NAME], Index_id, Index_type_desc, Avg_fragmentation_in_percent, Avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'),OBJECT_ID('person.address'),NULL,NULL,'Detailed')
If avg_fragmentation_in_percent is > 40%, rebuild the index (using the ALTER INDEX REBUILD command) to eliminate the fragmentation. It's recommended to have a rebuild-index job for all the tables scheduled to run on a weekly basis. Please NOTE that rebuilding an index is an expensive operation; ensure that it is done only during off-production hours.
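A minimal rebuild sketch for the same table (ALL rebuilds every index on the table; substitute a specific index name if you prefer):

ALTER INDEX ALL ON Person.Address REBUILD
GO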

Step 6: If the indexes are fine, check the statistics. Sometimes the indexes will be fine but the query will still be slow, because the optimizer cannot choose the correct indexes due to outdated statistics. The following query gives the time the statistics for each index were last updated:
SELECT Name AS Stats_Name, STATS_DATE(OBJECT_ID, stats_id) AS Statistics_update_date
FROM sys.stats
WHERE OBJECT_ID=OBJECT_ID('person.address')
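If the statistics turn out to be stale, they can be refreshed manually; a minimal sketch on the same table (WITH FULLSCAN is optional and costs more than the default sampling):

UPDATE STATISTICS Person.Address WITH FULLSCAN
GO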

The statistics should be updated weekly, daily, or on alternate days depending on the frequency of modifications to the table: the more frequently the table is modified, the more frequently the statistics should be updated. For highly transactional tables you can schedule a job to update the statistics on a regular basis. Please NOTE that rebuilding an index automatically updates its statistics as well; hence avoid updating the statistics if you are rebuilding the index.

Step 7: If you see any key lookups happening in the execution plan, make use of included columns to create a covering nonclustered index and avoid the expensive lookup operation. This helps query performance, as the logical reads are reduced to a great extent.

Step 8: Ensure that each table has a clustered index, preferably on the primary key columns (by default the primary key gets one unless you explicitly specify nonclustered) or on identity columns. The clustered index should always be defined on unique-valued columns such as primary keys or identity columns.

Step 9: If you have a composite index, ensure that the most selective field (the one with the most unique values) is the leading column in the index.

Step 10: If you couldn't tune the query further, or if you are clueless, try the Database Engine Tuning Advisor (DTA). Provide the SQL query as an input file and run the DTA; it will provide a list of recommendations to reduce the query cost. Please do NOT blindly implement the suggestions: doing so would certainly improve the query's performance, but you would end up creating numerous indexes that are difficult to maintain during maintenance operations. You have to take the call on creating the indexes suggested by the DTA: check whether each index will be used in most cases, or whether you can rewrite the query to make use of the existing indexes.

Step 11: While tuning stored procedures you need to ensure that their query plans are cached. The following query provides the caching info for stored procedures:

SELECT usecounts, cacheobjtype, objtype, [TEXT]
FROM sys.dm_exec_cached_plans P
CROSS APPLY sys.dm_exec_sql_text(plan_handle) S
WHERE cacheobjtype = 'Compiled Plan' AND objtype='Proc'
AND [TEXT] NOT LIKE '%dm_exec_cached_plans%'
AND S.DBID=DB_ID('dbname')

The value of usecounts increases every time you run the same stored procedure. If there is a problem with caching, check whether any SET options are used, as most of them cause a recompile of the query plan. Also, the plan is flushed out every time you run DBCC FREEPROCCACHE or DBCC FLUSHPROCINDB. Never use either of them in a production environment, as they remove the cached plans for all the procedures, which will have to be recompiled the next time they are run. If you suspect there might be a problem in the query plan, you can try the WITH RECOMPILE option, which recompiles the particular stored procedure every time it runs, and see how the performance is:
CREATE PROC Test
WITH RECOMPILE
AS
    Statement 1
    Statement 2
Step 12: Finally, if all the above options are fine and the query still couldn't be tuned, try to rewrite the query. In a few cases, as soon as you view queries such as the ones below, you know they need to be rewritten:
  1. Creating a view with TOP 100 PERCENT in order to include an ORDER BY clause in the view definition: the view output will not be sorted unless we explicitly sort it when querying the view, as below.
Select * from view order by column1   -- result will be sorted
Select * from view                    -- result will NOT be sorted, even though there is an ORDER BY clause in the view definition
Thus there is extra cost involved in the ORDER BY clause inside the view definition even though the result is NOT sorted. Hence we should avoid ORDER BY in the view definition and instead apply it when querying: Select * from view order by column1
  2. Using correlated subqueries causes RBAR (Row By Agonizing Row) processing and will affect performance.
  3. Avoid using scalar functions in SELECT statements; use inline or table-valued functions instead. Since a scalar function behaves like a cursor, we need to avoid referencing it in the select list.

 

Index Fragmentation in SQL Server 2005

Index fragmentation is a phenomenon where the index contents are scattered. Normally the contents are contiguous, which helps fast retrieval of the underlying data. When indexes are fragmented, data access becomes time consuming because of the scattered data that must be searched and read. Fragmentation occurs as data is modified. The following are the two types of index fragmentation:

  1. Internal fragmentation
  2. External fragmentation
Internal fragmentation: This happens when free space is available within your index pages, i.e. when the index pages have not been filled as full as possible. Due to internal fragmentation the index takes up more space than it needs, so scanning the index results in more read operations. Internal fragmentation also happens when a low fill factor value is specified (the fill factor determines the percentage of space to be filled in a leaf-level page). It is also caused by rows that are removed by DELETE statements, or by pages that are split and left only about half full. Empty space on pages means fewer rows per page, which in turn means more page reads.

External Fragmentation: External fragmentation occurs when the pages of the index are not contiguous. Think of a book whose pages are not ordered in a logical way (page 1, then page 2, then page 3, and so on), forcing you to flip back and forth to piece the information together and make sense of the reading. External fragmentation happens when there are frequent UPDATEs and INSERTs on a table whose index pages have only a small amount of free space. Since a full page cannot accommodate a new or updated row, a page split happens in order to place the new row: half the rows are left on the original page and the other half are moved to a new page, and mostly the new page is not contiguous with the page being split. A page split is an expensive operation and should always be avoided.

How to determine fragmentation? The following query gives the fragmentation information of a particular table, Person.Address, in the AdventureWorks database. Please modify the query to replace the database name and table name according to your requirements.

SELECT CAST(DB_NAME(database_id) AS VARCHAR(20)) AS [DATABASE Name],
CAST(OBJECT_NAME(OBJECT_ID) AS VARCHAR(20)) AS [TABLE NAME], Index_id, Index_type_desc, Avg_fragmentation_in_percent, Avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'),OBJECT_ID('person.address'),NULL,NULL,'Detailed')

If you wish to identify the fragmentation information for all the tables in a particular database, please use the query below. I am using it to find the fragmentation in the AdventureWorks database.

SELECT CAST(DB_NAME(database_id) AS VARCHAR(20)) AS [DATABASE Name],
CAST(OBJECT_NAME(OBJECT_ID) AS VARCHAR(20)) AS [TABLE NAME], Index_id, Index_type_desc, Avg_fragmentation_in_percent, Avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'),NULL,NULL,NULL,'Detailed')
Take a look at the following columns in the output of the above query.

Avg_fragmentation_in_percent: If the value is > 5 and < 30, you need to REORGANIZE the index using the ALTER INDEX REORGANIZE command. If the value is > 30, you need to REBUILD the indexes using the ALTER INDEX REBUILD command.

Avg_page_space_used_in_percent: This value represents the amount of page space used in an index. If the value is < 75% and > 60%, we need to REORGANIZE the indexes; otherwise REBUILD the indexes.

Defragmenting the indexes: We need to use either the ALTER INDEX REBUILD or the ALTER INDEX REORGANIZE command to remove the fragmentation from the indexes. Generally it's advisable to schedule a job to do these operations during off-production hours, as they consume a lot of resources.
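A minimal sketch of both commands against the table used above:

-- Fragmentation between 5% and 30%: reorganize
ALTER INDEX ALL ON Person.Address REORGANIZE
GO
-- Fragmentation above 30%: rebuild
ALTER INDEX ALL ON Person.Address REBUILD
GO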

 

SQL Server 2005 Best Practices

This article lists the best practices for SQL Server 2005.

Tempdb:

  1. Set recovery model as Simple.
  2. Allow tempdb files to grow automatically as required
  3. Set the file growth increment for tempdb to 10%
  4. Enable Instant file initialization to speed up Auto growth, addition of new files
  5. Put tempdb database files in a separate drive different from those of user databases
  6. Create multiple tempdb database files and place them on different drives (see the sketch after this list)
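A minimal sketch of points 3 and 6 (logical file names, paths, and sizes are illustrative; check sys.master_files for your instance's actual file names):

ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILEGROWTH = 10%)
GO
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,                      -- hypothetical logical name
          FILENAME = 'E:\TempDB\tempdev2.ndf',  -- hypothetical path on a separate drive
          SIZE = 512MB,
          FILEGROWTH = 10%)
GO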
Memory:
  1. Physical Address Extension (PAE) enables x86 processors to access physical memory beyond 4 GB (up to 128 GB, depending on the Windows edition). x64 Windows does not use this option
  2. The /3GB switch makes a full 3 GB of virtual address space available to applications and reduces the amount available to the system to 1 GB
  3. Using /PAE together with /3GB limits the (32-bit) operating system to only 16 GB of physical memory
  4. AWE is a set of extensions to the memory management functions of Windows that allow applications to address more memory than the 2-3 GB that is available through standard 32-bit addressing. AWE is not required and cannot be configured on 64-bit operating systems.
  5. Enable lock pages in memory in x64 if SQL Server memory is getting paged out
  6. Enable max and min server memory settings while running multiple instances of SQL Server
Database Engine:
  1. Database backup files and database files must be on separate volumes
  2. Place the log files in uncompressed drives
  3. For high critical databases use Full Recovery model with regular transaction log backups
  4. Enable default trace to capture DDL changes
  5. Enable Instant File initialization for databases to speedup auto growth
  6. Enable Blocked process threshold to capture blocked processes
  7. Use the same collation for user and system databases
  8. Set the database option AUTO_CLOSE to OFF for all databases
  9. Set the database option AUTO_SHRINK to OFF for all databases
  10. Set the max degree of parallelism option to 8 or less
  11. Set the AUTO_UPDATE_STATISTICS option to ON for all databases (a sketch of several of these settings follows this list)
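Several of these settings can be applied with a few statements; a minimal sketch, assuming a database named MyDB (hypothetical):

ALTER DATABASE MyDB SET AUTO_CLOSE OFF
ALTER DATABASE MyDB SET AUTO_SHRINK OFF
ALTER DATABASE MyDB SET AUTO_UPDATE_STATISTICS ON
GO
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'blocked process threshold', 20   -- seconds
RECONFIGURE
GO
EXEC sp_configure 'max degree of parallelism', 8
RECONFIGURE
GO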
Replication:
  1. Enable Latency warning for transactional replication topology
  2. Replicate stored procedure execution when making batch updates to published tables.
  3. Set XACT_Abort ON while replicating stored procedures
  4. Run snapshot agent only during off-peak hours
  5. Place the snapshot folder on a drive local to the Distributor that is not used to store database or log files.
  6. Consider specifying a recovery model of simple or bulk-logged while applying snapshot to a subscriber.
Optimizing Indexes:
  1. Rebuild the indexes if the fragmentation is 30% or above else reorganize the indexes
  2. Perform them during off-peak hours since it requires a lot of resources and causes blocks
  3. Set the fill factor to 100 for read-mostly tables with few updates
  4. Set the fill factor to 70 for highly write-intensive tables
  5. Set the fill factor to 80-90 for objects with heavy read and write activity
  6. Check the integrity of all the databases using DBCC CHECKDB every 14 days (a fill-factor and CHECKDB sketch follows this list)
  7. Set the AUTO_UPDATE_STATISTICS option to ON to automatically update the statistics
  8. Set the AUTO_UPDATE_STATISTICS_ASYNC to ON to use asynchronous statistic updating
  9. Set the STATISTICS_NORECOMPUTE to OFF in the Alter index statement to enable auto update statistics
  10. CREATE INDEX and ALTER INDEX REBUILD are minimally logged under the BULK_LOGGED and SIMPLE recovery models but fully logged under FULL, so it's advisable to change the recovery model to BULK_LOGGED or SIMPLE during these operations
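A minimal sketch of a fill-factor rebuild and an integrity check (the table and fill factor are illustrative):

ALTER INDEX ALL ON Person.Address REBUILD WITH (FILLFACTOR = 90)
GO
DBCC CHECKDB ('AdventureWorks') WITH NO_INFOMSGS
GO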

Wednesday 15 May 2013

How to Get Exclusive Access to SQL Server Database

Introduction

There are scenarios when a database administrator needs to disconnect all connected users from a SQL Server database to get exclusive access to the database. Exclusive access is needed before restoring a database, before a database can be detached, to perform critical maintenance tasks, to recover a database which is in suspect mode, etc. In this article we will take a look at how a database administrator can leverage ALTER DATABASE commands to disconnect users from a database.

Using ALTER DATABASE SET Options

Database administrators can execute the ALTER DATABASE command to get exclusive or restricted access to a database.

Get Restricted Access to a SQL Server Database

Database administrators can execute the below mentioned TSQL command to get restricted access to a database.
ALTER DATABASE DatabaseName
SET
RESTRICTED_USER WITH ROLLBACK IMMEDIATE
GO

Get Single User Access to a SQL Server Database

Database administrators can execute the below mentioned TSQL command to get single-user access to a database.
ALTER DATABASE DatabaseName
SET
SINGLE_USER WITH ROLLBACK IMMEDIATE
GO

Give Multi User Access to a SQL Server Database

Database administrators can execute the below mentioned TSQL command to give multi-user access to a database.
ALTER DATABASE DatabaseName
SET
MULTI_USER
GO

Difference Between SINGLE_USER, RESTRICTED_USER & MULTI_USER Alter Database SET Commands

SINGLE_USER
When the SINGLE_USER WITH ROLLBACK IMMEDIATE command is used, only one user can connect to the database at a time.
RESTRICTED_USER
When the RESTRICTED_USER WITH ROLLBACK IMMEDIATE command is used, only users who are members of the db_owner, dbcreator, or sysadmin roles can connect to the database.
MULTI_USER
When the MULTI_USER command is used, any number of users who have rights to connect to the database will be able to connect.
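Before switching a database to SINGLE_USER or RESTRICTED_USER, it can help to see who is currently connected; a minimal sketch using the backward-compatibility view sysprocesses (the database name is a placeholder):

SELECT spid, loginame, hostname, program_name
FROM master.dbo.sysprocesses
WHERE dbid = DB_ID('DatabaseName')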

How to Identify Currently Used SQL Server Authentication Mode

Introduction: Microsoft SQL Server supports two authentication modes with which one can connect to an instance of SQL Server 2000 and higher versions: Windows Authentication Mode, and SQL Server and Windows Authentication Mode (Mixed Mode). In this article we will take a look at how you can verify the currently used authentication mode and how to change the authentication mode using TSQL and SQL Server Management Studio.

Different ways by which SQL Server Authenticate Users

Windows Authentication Mode: - When Windows Authentication Mode is used, only Windows logins can connect to SQL Server. Windows authentication is much more secure than mixed authentication, as in an enterprise environment Windows login credentials are generally Active Directory domain credentials.
Mixed Mode / SQL Server and Windows Authentication Mode: - When Mixed Mode, aka SQL Server and Windows Authentication Mode, is used, either Windows logins or SQL Server logins can be used to connect to SQL Server.

Identify SQL Server Authentication Mode Using TSQL

Database administrator can use the below mentioned TSQL code to identify the currently used SQL Server Authentication Mode.
Use Master
GO
SELECT CASE SERVERPROPERTY('IsIntegratedSecurityOnly')
           WHEN 0 THEN 'Mixed Mode - Allows Both SQL Server and Windows Authentication Mode'
           WHEN 1 THEN 'Allows Only Windows Authentication Mode'
       END AS [Currently Used SQL Server Authentication Mode]
GO

You can execute the below mentioned TSQL code to identify the login security configuration on an instance of SQL Server. As per Microsoft, this should be used only for backward compatibility purposes.
Use Master
GO
EXEC xp_loginconfig
GO

Identify SQL Server Authentication Mode Using SQL Server Management Studio

Database administrators can also identify the SQL Server Authentication Mode using SQL Server Management Studio.
1. Connect to an instance of SQL Server using SQL Server Management Studio.
2. In Object Explorer, right-click the SQL Server instance and select Properties from the context menu.
3. In the Server Properties dialog box, select the Security page; the currently configured mode is shown under Server authentication.
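The introduction mentions changing the authentication mode with TSQL as well. One widely used (but undocumented) approach writes the LoginMode registry value through xp_instance_regwrite; treat the following as a sketch, and note that the SQL Server service must be restarted for the change to take effect:

USE master
GO
-- 1 = Windows Authentication only, 2 = Mixed Mode (undocumented; restart required)
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'LoginMode', REG_DWORD, 2
GO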

Q: Which solution, SAN Replication, Windows and SQL Clustering, or Database Mirroring should be used for which situations?
A: These are three very different technologies that for the most part aren't competing with each other, but instead are complementary to each other.
SAN Replication vs. SQL Clustering
There's no competition at all here. In fact when doing a geographically distributed cluster (where the SQL Cluster is physically located in two different data centers) SAN Replication is required to handle keeping the data in sync between the two sites. The SAN Replication runs under the cluster so that both storage arrays (one in each site) are kept in sync using either synchronous or asynchronous replication.
SAN Replication vs. Database Mirroring
SAN Replication and Database Mirroring are somewhat competing technologies, as they both replicate data from one location to another. However, when using SAN Replication there isn't a way to fail over automatically like there is with database mirroring. SAN Replication can fail over automatically when used with a Windows Cluster and software installed on the cluster that tells the storage array to fail over. Without a Windows Cluster in place, however, just having a server sitting and waiting for the storage array to present the LUN(s) to the backup server would probably require either some impressive scripting to get the storage array up and running automatically, or some manual configuration.
SQL Clustering vs. Database Mirroring
With SQL Server clustering and database mirroring we do get into a competition, as they are semi-competing features. With SQL (and Windows) clustering, the database instance runs on one of several machines that are grouped together. All the databases are failed over from one machine to another as a group, using the same name and IP address. The downside of SQL clustering is the requirement for shared storage, either a shared disk array or a large storage array.
With database mirroring, the databases are failed over individually, not as a group. With database mirroring, however, the newer SQL Server Native Client needs to be used so that database failover can be automatic. Without the newer SQL Native Client, the client application's connection string would need to be reconfigured with the new active server name.
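For example, automatic client redirection with mirroring relies on specifying the mirror as the failover partner in the connection string; a sketch (server and database names are hypothetical):

Server=PrimaryServer;Failover Partner=MirrorServer;Database=MyDB;Integrated Security=True;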
If you are looking for instance-level failover, and you are OK with having just a single copy of the database, then clustering is probably the better option. If you prefer database-level failover and want a second copy of the data, then mirroring is probably the better option for you. Mirroring also has an advantage for some companies, as there is no requirement for an Active Directory domain: if you are hosted with Amazon's EC2, for example, mirroring is still an option where clustering isn't.
Like everything else in the SQL Server world there's no guaranteed correct answer every time for which (if any) of these technologies you should be using. Some, all or none of them may have a place in your shop today.
The new SQL Server 2008 R2 lineup includes
  • SQL Server 2008 R2 Parallel Data Warehouse Edition
  • SQL Server 2008 R2 Datacenter Edition
  • SQL Server 2008 R2 Enterprise Edition
  • SQL Server 2008 R2 Developer Edition
  • SQL Server 2008 R2 Standard Edition
  • SQL Server 2008 R2 Web Edition
  • SQL Server 2008 R2 Workgroup Edition
  • SQL Server 2008 R2 Express Edition (Free)
  • SQL Server 2008 Compact Edition (Free)
More detailed information about the SQL Server 2008 R2 editions, their pricing, and the features that they support can be found in Table 1. SQL Server 2008 R2 supports upgrading from SQL Server 2008, SQL Server 2005, and SQL Server 2000.

Table 1: SQL Server 2008 R2 Editions

Parallel Data Warehouse
  Pricing: $57,498 per CPU; not offered via server CAL
  Significant features: MPP scale-out architecture; BI—SSAS, SSIS, SSRS

Datacenter
  Pricing: $57,498 per CPU; not offered via server CAL
  Significant features: 64 CPUs and up to 256 cores; 2TB of RAM; 16-node failover clustering; database mirroring; StreamInsight; multi-server management; Master Data Services; BI—SSAS, SSIS, SSRS; PowerPivot for SharePoint; partitioning; Resource Governor; online indexing and restore; backup compression

Enterprise
  Pricing: $28,749 per CPU, or $13,969 per server with 25 CALs
  Significant features: 64 CPUs and up to 256 cores; 2TB of RAM; 16-node failover clustering; database mirroring; multi-server management; Master Data Services; BI—SSAS, SSIS, SSRS; PowerPivot for SharePoint; partitioning; Resource Governor; online indexing and restore; backup compression

Developer
  Pricing: $50 per developer
  Significant features: same as the Enterprise Edition

Standard
  Pricing: $7,499 per CPU, or $1,849 per server with 5 CALs
  Significant features: 4 CPUs; 2TB of RAM; 2-node failover clustering; database mirroring; BI—SSAS, SSIS, SSRS; backup compression

Web
  Pricing: $15 per CPU per month; not offered via server CAL
  Significant features: 4 CPUs; 2TB of RAM; BI—SSRS

Workgroup
  Pricing: $3,899 per CPU, or $739 per server with 5 CALs
  Significant features: 2 CPUs; 4GB of RAM; BI—SSRS

Express
  Pricing: free
  Significant features: 1 CPU; 1GB of RAM

Express with Tools
  Pricing: free
  Significant features: 1 CPU; 1GB of RAM

Express with Advanced Services
  Pricing: free
  Significant features: 1 CPU; 1GB of RAM; BI—SSRS (for the local instance)

Support for Up to 256 Processor Cores

On the hardware side, SQL Server 2008 R2 Datacenter Edition now supports systems with up to 64 physical processors and 256 cores. This support enables greater scalability in the x64 line than ever before. SQL Server 2008 R2 Enterprise Edition supports up to 64 processors, and Standard Edition supports up to four processors.

It’s noteworthy that SQL Server 2008 R2 remains one of the few Microsoft server platforms that is still available in both 32-bit and 64-bit versions. I expect that it will be the last 32-bit version of SQL Server that Microsoft releases.

PowerPivot and Self-Service BI

Without a doubt, the most publicized new feature in SQL Server 2008 R2 is PowerPivot and self-service BI. SQL Server 2008 R2’s PowerPivot for Excel (formerly code-named Gemini) is essentially an Excel add-in that brings the SSAS engine into Excel. It adds powerful data analysis capabilities to Excel, the front-end data analysis tool that knowledge workers know and use on a daily basis. Built-in data compression enables PowerPivot for Excel to work with millions of rows and still deliver subsecond response time. As you would expect, PowerPivot for Excel can connect to SQL Server 2008 databases, but it can also connect to previous versions of SQL Server as well as other data sources, including Oracle and Teradata, and even SSRS reports. In addition to its data manipulation capabilities, PowerPivot for Excel also includes a new cube-oriented calculation language called Data Analysis Expressions (DAX), which extends Excel’s data analysis capabilities with the multidimensional capabilities of the MDX language. Figure 1 shows the new PowerPivot for Excel add-in being used to create a PowerPivot chart and PowerPivot table for data analysis.

1. SQL Server 2008 R2 Parallel Data Warehouse Edition

One huge change is the Parallel Data Warehouse edition (formerly code-named Madison), essentially the SQL Server equivalent of Windows Server High Performance Computing (HPC) Edition. It will utilize a Massively Parallel Processing (MPP) scale-out architecture to support data warehousing ranging from 10TB to 1+ PB. The Parallel Data Warehouse Edition is priced at $57,498 per processor.

2. SQL Server 2008 R2 Datacenter Edition

The biggest change is undoubtedly the new SQL Server 2008 R2 Datacenter Edition. This edition parallels Windows Server Datacenter Edition, and with support for up to 256 logical processors, it's designed to address the highest levels of scalability. The Datacenter edition has no memory limitation and offers support for more than 25 instances. It also will be priced at $57,498 per processor.

3. SQL Server 2008 R2 Enterprise Edition and Developer Edition

The Enterprise edition is the one that’s targeted toward most businesses. With the advent of the new Datacenter edition, the Enterprise edition will be limited to support for 64 logical processors and 25 instances, which really isn’t much of a change. It will be priced at $28,749 per processor or $13,969 per server with 25 CALs. The Developer edition shares the same feature set and is licensed at $50 per developer.

4. SQL Server 2008 R2 Standard Edition

Designed for small and medium businesses, the Standard edition supports up to 4 CPUs and 2TB of RAM, with unlimited database size. It includes support for all BI subsystems including Analysis Services, Integration Services, and Reporting Services, but lacks many of the enterprise-oriented features from the Enterprise edition. It will be priced at $7,499 per processor or $1,849 per server with five CALs.

5. SQL Server 2008 R2 Web Edition

This edition is designed to be run by web hosting providers. It provides support for up to 4 CPUs, 2TB of RAM, and unlimited database size. In addition to its relational database engine, SQL Server 2008 R2 Web Edition supports Reporting Services. It's priced at $15 per processor per month.

6. SQL Server 2008 R2 Workgroup Edition

Workgroup edition is designed to provide basic relational database services for departments and branch offices. It supports a maximum of 2 CPUs, 4GB of RAM, and unlimited database size. Its only BI feature is Reporting Services. It's priced at $3,899 per processor or $739 per server including five CALs.

7. SQL Server 2008 R2 Express Edition

Targeted toward developers and ISVs, SQL Server Express Edition comes in three versions. SQL Server 2008 R2 Express provides relational database services; SQL Server 2008 R2 Express with Tools Edition adds SSMS; and SQL Server 2008 R2 Express with Advanced Services adds SSMS and a local instance of Reporting Services. All of the SQL Server 2008 R2 Express editions will continue to be free downloads.