Wednesday 9 October 2013

What are the top SQL Server Disaster Recovery Solutions?
SQL Server is an enterprise platform that provides various solutions to cover DR scenarios. Disaster recovery costs money, and the right solution depends on how much you want to spend on it. Here is a list of solutions, ranging from a few dollars to million-dollar implementations.

Solution 1 – SQL Server HADR 
"HADR" is a high-availability and disaster recovery solution introduced in SQL Server 2012. This is the only solution which provides both high availability and data recovery in case of a disaster of any nature.
PROS
  • Multi-Database Failover
  • Multiple Secondaries
  • Active Secondaries
  • Fast Client Connection Redirection
  • Windows Server Core
  • Multisite Clustering
CONS
  • Expensive Solution
  • Complicated / Complex Environment
  • Works only with SQL Server 2012 and Windows Server 2008 onwards
For the answer to "How to Implement HADR", please check my previous post, "How to Implement HADR – Step by Step Guide with Screenshots".
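For a flavour of what this looks like in T-SQL, here is a minimal sketch of creating an availability group (the group, database, server and endpoint names are hypothetical, and it assumes the AlwaysOn feature is enabled and database mirroring endpoints already exist on both replicas):
-- On the primary replica
CREATE AVAILABILITY GROUP [AG_Sales]
FOR DATABASE [SalesDB]
REPLICA ON
N'SQLNODE1' WITH (ENDPOINT_URL = N'TCP://sqlnode1.corp.local:5022', AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, FAILOVER_MODE = AUTOMATIC),
N'SQLNODE2' WITH (ENDPOINT_URL = N'TCP://sqlnode2.corp.local:5022', AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, FAILOVER_MODE = AUTOMATIC);
GO
-- On the secondary replica
ALTER AVAILABILITY GROUP [AG_Sales] JOIN;
GO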

Solution 2 – SQL Server Clustering
This is one of the best industry-proven high availability solutions. As part of this solution, SQL Server services are bound to two physical servers that share common storage, in such a way that the services automatically fail over to the other available node if a hardware failure or a software failure occurs on the first node.
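A quick way to confirm whether an instance is clustered, and which node it is currently running on, is to query the server properties:
SELECT SERVERPROPERTY('IsClustered') AS IsClustered,
       SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS CurrentNode;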

PROS
  • High availability solution, as services are always available
  • Supported by all versions of SQL Server
  • Automatic Failover
  • No loss of Data
  • Coupled with SAN as shared array for storage
CONS
  • This is not a DR solution - clustering doesn't cover a disk array (storage) failure, as all clustered nodes share the same storage.
  • Restricted to the local subnet only ** This restriction is removed in Windows Server 2008 onwards.
  • High cost, as identical hardware and shared storage are required
  • Failover clustering does not allow you to create failover clusters at the database level or at the database object level, such as the table level.
For answers to "What is Clustering", "Installing a Windows 2008 Clustered Environment" and "How to Install a SQL Server Clustered Instance", please check my previous posts.

Solution 3 – Database Mirroring
Database mirroring is an alternative high-availability solution that also offers DR. Database mirroring supports automatic failover but does not require cluster-capable hardware, and can therefore provide a cost-effective alternative to failover clustering.
In a database mirroring solution, a database is stored on one server and copied to another, providing a standby copy of the database that can service clients in the event of a server failure.
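As a rough sketch, this is how the partnership is established in T-SQL (database, server and port names are hypothetical; it assumes the mirror copy has already been restored WITH NORECOVERY and mirroring endpoints exist on both instances):
-- On the mirror server
ALTER DATABASE [SalesDB] SET PARTNER = N'TCP://principal.corp.local:5022';
-- On the principal server
ALTER DATABASE [SalesDB] SET PARTNER = N'TCP://mirror.corp.local:5022';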
PROS
  • Increases data protection.
  • Increases availability of a database.
  • Improves the availability of the production database during upgrades.
  • Works at a granular level: instead of the server level, this solution works at the database level.
  • Can be used as a DR solution too
CONS
  • The standby (mirrored) copy cannot be used directly; it's just a standby (although a database snapshot on the mirror can be queried)
  • Database mirroring is limited to only two servers
  • Database mirroring is limited to about 10 databases per server
  • You need to configure your connection (DSN) string to include both mirrored servers so that when they switch you notice nothing - in the case of Clustering and HADR, this is not required
  • Partners must use the same edition.
  • Asynchronous database mirroring (high-performance mode) is supported only by Enterprise Edition
  • Cannot be used for system databases such as master, msdb and model.
Solution 4 – Log Shipping
Log shipping is primarily a failover solution. As part of this solution, log shipping provides database-level redundancy for a SQL Server database by automatically backing up, copying, and restoring transaction logs on standby servers.
PROS
  • Standby databases can be available for read-only queries.
  • Multiple standby servers can be configured
CONS
  • Possible data loss when the primary server fails
  • Manual failover only; there is no automatic failover
  • Works purely on LSNs: if the database log is truncated by someone, the log chain breaks and the entire database needs to be copied again.
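To keep an eye on a log shipping configuration, the built-in monitor procedure reports the last backup, copy and restore times, which helps you spot a broken chain early:
-- Run in the master database on the monitor server
EXEC master.dbo.sp_help_log_shipping_monitor;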

Solution 5 – Replication
Replication is the mechanism for creating and maintaining multiple copies of the same data. Replication allows:
• Multiple copies of data to be kept in sync.
• Data to be closer to users, improving performance.
• A branch office to work when not connected to the main office.
• Processing to be separated, offloading work from production servers.
PROS
  • Works at a granular level: replication can be set up for a specific table, a set of tables, or a subset of data within a table or tables, on one or many other servers.
CONS
  • Needs more DBA effort to manage replication
  • Manual failover only
  • Data loss is possible
If you want to know "What are the types of replication available in SQL Server" or "How to set up Replication" (video tutorial), check my previous post, "Replication with SQL Server 2008".

Solution 6 – Native SQL BACKUPS
This is the most basic form of disaster recovery for SQL Server and one that should be practiced in every situation, no matter what other solutions are in place.
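A minimal native backup routine usually combines full, differential and transaction log backups; a sketch with placeholder names and paths:
-- Weekly full backup
BACKUP DATABASE [MyDB] TO DISK = N'D:\Backups\MyDB_Full.bak' WITH INIT, STATS = 10;
-- Daily differential backup
BACKUP DATABASE [MyDB] TO DISK = N'D:\Backups\MyDB_Diff.bak' WITH DIFFERENTIAL, INIT, STATS = 10;
-- Frequent transaction log backups (full recovery model only)
BACKUP LOG [MyDB] TO DISK = N'D:\Backups\MyDB_Log.trn' WITH INIT, STATS = 10;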

Database in Suspect Mode – SQL Server


What to do when the database is corrupt:
1. Don’t panic
2. Don’t detach the database
3. Don’t restart SQL
4. Don’t just run repair.
5. Run an integrity check
Suspect Mode Recovery for SQL Server 2000
Following are the steps to recover database from Suspect Mode

Solution 1
USE Master
GO
-- Determine the original database status
SELECT [Name], DBID, Status
FROM master.dbo.sysdatabases
GO
-- Enable system changes
sp_configure 'allow updates', 1
GO
RECONFIGURE WITH OVERRIDE
GO
-- Update the database status
UPDATE master.dbo.sysdatabases
SET Status = -32768
WHERE [Name] = 'YourDatabaseName'
GO
=> The above command changes the database status to EMERGENCY mode, so that the database becomes accessible.
=> Run the DBCC CHECKDB command to verify database consistency.
=> If DBCC CHECKDB fails with errors then, depending on the level of corruption and with the client's permission, we can try the commands below (the database must be in single-user mode for the repair options to run):
DBCC CHECKDB ('YourDatabaseName', REPAIR_REBUILD)
DBCC CHECKDB ('YourDatabaseName', REPAIR_ALLOW_DATA_LOSS)
-- Disable system changes
sp_configure 'allow updates', 0
GO
RECONFIGURE WITH OVERRIDE
GO
-- Determine the final database status
SELECT [Name], DBID, Status
FROM master.dbo.sysdatabases
GO
Solution 2
1. Stop SQL Server services, rename the physical file of the corrupt transaction log, then restart SQL Server services.
2. Execute these commands one by one:
-- Allow updates in system tables
EXEC sp_configure 'allow updates', 1
RECONFIGURE WITH OVERRIDE
-- Bring the corrupt db into Emergency mode
UPDATE sysdatabases
SET status = status | 32768
WHERE dbid = <dbid>
-- Take the db into single user mode
EXEC sp_dboption 'database name', 'single user', 'TRUE'
-- Rebuild the log file
DBCC REBUILD_LOG ('db name', 'complete log file path with the original name')
-- You will see a message when the log file is rebuilt. Run a CHECKDB afterwards
DBCC CHECKDB ('db name')
-- Review the output of CHECKDB; the couple of lines at the bottom will be of interest. If corruption is still found, it will suggest running CHECKDB again with REPAIR_REBUILD, REPAIR_ALLOW_DATA_LOSS etc. This should fix the corruption that still exists. Run them (again, at your own risk, as they may incur data loss):
DBCC CHECKDB ('db name', <repair option suggested by CHECKDB>)
 
Suspect Mode recovery for SQL Server 2005/2008
Follow these steps to recover the database from Suspect Mode
Workaround
When the database is in SUSPECT mode, you can change the database status to the EMERGENCY mode. This could permit the system administrator read-only access to the database. Only members of the sysadmin fixed server role can set a database to the EMERGENCY state.
You can run the following SQL query to get the database to the EMERGENCY mode.
ALTER DATABASE dbName SET EMERGENCY
=> After that, set the database to single-user mode. Single-user mode is required before the DBCC CHECKDB repair options can be run.
ALTER DATABASE dbName SET SINGLE_USER
Then you can run the DBCC CHECKDB command. This command checks the allocation, structural and logical integrity of all the objects in the database and reports errors. When you specify REPAIR_ALLOW_DATA_LOSS as an argument of the DBCC CHECKDB command, the procedure will check and repair the reported errors. But these repairs can cause some data to be lost.
DBCC CHECKDB ('dbName') WITH NO_INFOMSGS, ALL_ERRORMSGS
DBCC CHECKDB ('dbName', REPAIR_ALLOW_DATA_LOSS)
If the above script runs without any problems, you can bring your database back to the multi user mode by running the following SQL command:
ALTER DATABASE dbName SET MULTI_USER
Recommendations
Using any DATA LOSS repair options can lead to other problems. This is not a recommended way to recover the database. The database should be restored from a backup made prior to the corruption, rather than repaired.
The DBCC CHECKDB command should be run on working databases at regular intervals to check for errors.
Query to rebuild the log file in SQL 2005/SQL 2008:
ALTER DATABASE mydatabase REBUILD LOG ON
(NAME = mydatabase_log, FILENAME = 'mydatabase_log.ldf')
In case there is an allocation error, mostly due to system objects, here is some general info. DBCC CHECKDB on a database with the NO_INFOMSGS option returns the following result:
Server: Msg 8909, Level 16, State 1, Line 1
Table error: Object ID 0, index ID 0, page ID (1:3365824). The PageId in the page header = (0:0).
Server: Msg 8966, Level 16, State 1, Line 1
Could not read and latch page (1:3365824) with latch type SH. sysobjects failed.
The database that you ran DBCC CHECKDB on has a corrupt sysobjects table -
you cannot proceed with the recovery option.
You need to do several things:
1) Investigate why this happened. It's almost certainly a hardware problem.
You should look in the SQL Server error log and the NT event log for clues.
2) Restore this database from your backups. Running repair will
not work and will fail with the same message repeatedly.
 

SQL Server High Availability – A Right Decision

 

Is your proposed HA solution still pending approval?
Don't worry.
We have noticed that it's not easy for a DBA to determine which HA option to choose when planning business continuity.
Also, HA should not be confused or intermingled with any backup solution.
This post is to help people understand the High Availability options present in Microsoft SQL Server that any organization can take into consideration before finalizing their strategy.
Implementing High Availability (HA) is becoming easier with each recent version of SQL Server.
Since SQL Server 2005, there are four methods to implement HA:
  • Failover Clustering
  • Database Mirroring
  • Replication
  • Log Shipping
Microsoft Failover Clustering (MFC) runs at the operating system level and requires two or more host servers with shared storage.
Feature comparison of the different High Availability options
Failover clustering is the most popular option, but the table below compares some features of each HA option. This gives a good comparison of the features available in each.
| Property / Feature | Failover Clustering | DB Mirroring | Replication | Log Shipping |
| --- | --- | --- | --- | --- |
| Server level | Yes | No | No | No |
| Database level | No | Yes | Yes | Yes |
| Automatic failover | Yes | Yes (only in HA option) | No | No |
| Automatic failure detection | Yes | Yes (only in HA option) | No | No |
| Hot, warm or cold standby | Hot | Hot and warm options | Warm | Warm |
| Server-level objects failover | Yes (automatic) | Manual | Manual | Manual |
| Is data on the failover available for queries while the primary server is running? | No | Yes (use database snapshots) | Yes | Yes (standby mode) |
| Are changes allowed at the failover location? | No | No | Yes (merge and transactional replication with updating subscribers) | No |
| Are the main and failover servers in a 1-1 relationship? | Yes | Yes | Not required | Not required |
| Is a 1-many relationship possible between the main and failover servers? | No | No | Yes | Yes |
| Is a many-1 relationship possible between the main and failover servers? | No | No | Yes | No |
| Is automatic application redirection possible after failover? | Yes | Yes | No | No |
| Recovery model required for the participating databases | Any | FULL | Any | FULL or BULK_LOGGED |
| Can you filter what data is sent to the failover server? | No | No | Yes | No |
| Where does the HA logic reside? | OS + SQL | SQL | Replication agents (run outside SQL Server) | SQL Agent jobs (run outside SQL Server) |
| If you want a reports server, what is the best solution? | Not possible | Works, but needs constant refresh of the snapshot database for latest data | Best option | Works, but user connections need to be kicked out when a new log is applied |
| Best solution if the main and failover servers are separated by long distance? | Not good | Works, but not recommended | Works, but with high latency | Best option (log compression reduces latency significantly) |
We need to understand the difference between High Availability (HA) and Backup solution.
Note: HA is not backup. HA is not your Business Continuity Plan (BCP).
Please read the related article below thoroughly.
http://www.houseofbrick.com/network-architecture/138-Microsoft-SQL-Server-High-Availability-Options

 

SQL Server Performance


No doubt, sometimes we get confused about what to check and what not to check when we learn of a performance issue.
Generally we just take on everything and start our analysis, which is not recommended.
E.g. if there are missing indexes, there is no point in performing index maintenance again and again and getting nowhere.
I have been searching for a long time for the various performance improvement stages, and now I have them. Let's discuss.
It's highly suggested to follow the three stages below, step by step, whenever you face a performance issue.
STAGE 1
-  Check for blocking [sp_who2]
-  Check for deadlocks [enable a deadlock trace flag and read the SQL error log]
-  Check for index fragmentation [sys.dm_db_index_physical_stats, ALTER INDEX etc.; see the sketch after this list]
-  DBCC OPENTRAN [check for any long-running transaction]
-  Run Activity Monitor [monitor CPU, memory etc.]
-  DBCC SQLPERF(LOGSPACE) [check log space and plan to take log backups]
-  DBCC FREEPROCCACHE / DBCC FREESYSTEMCACHE [free some procedure and system cache]
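For the fragmentation check above, a sketch like the following lists indexes in the current database that are more than 30% fragmented (the threshold is a matter of judgment):
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC;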
STAGE 2….. Go for this stage if you are through with Stage 1 and the issue still persists.
-  Performance Monitor [cpu, memory, network, disk i/o, buffer cache etc.]
-  Update the Statistics [sp_updatestats].
-  Enable the AUTO_UPDATE_STATISTICS_ASYNC option as well if you do not perform weekly statistics updates and the database's AUTO_UPDATE_STATISTICS option is enabled.
The Query Optimizer can execute your queries without waiting for updated statistics when the AUTO_UPDATE_STATISTICS_ASYNC option is enabled. It will go ahead and build an execution plan with the out-of-date statistics and then update the statistics in the background to help the next query's performance.
It is recommended NOT to use this when your queries can wait until the statistics are updated.
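Enabling it is a one-line change per database (MyDB is a placeholder name; AUTO_UPDATE_STATISTICS must also be ON for the async option to have any effect):
ALTER DATABASE MyDB SET AUTO_UPDATE_STATISTICS_ASYNC ON;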
-  Check for Missing Indexes.
-  Check the actual execution plan of long-running queries and do a cost analysis. [SELECT queries only]
-  Analyse SELECT queries in the query window for logical and physical reads [SET STATISTICS IO ON]
-  Capture the top high-CPU queries and clear the SPIDs, if possible; see the DMV sketch below.
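One common way to capture the top CPU consumers is via the plan cache DMVs; a sketch ordering by total worker time:
SELECT TOP 10
    qs.total_worker_time / qs.execution_count AS avg_cpu_time,
    qs.execution_count,
    st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;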
-  DBCC commands [check for memory usage specially buffer pool]
This query analyzes the buffer pool: the amount of space being taken up and how much of that space is empty. It may take a while to run depending on how much memory you have, simple :) … You can then plan to clear the system or buffer cache.
SELECT
    (CASE WHEN ([database_id] = 32767)
        THEN N'Resource Database'
        ELSE DB_NAME ([database_id]) END) AS [DatabaseName],
    COUNT (*) * 8 / 1024 AS [MBUsed],
    SUM (CAST ([free_space_in_bytes] AS BIGINT)) / (1024 * 1024) AS [MBEmpty]
FROM sys.dm_os_buffer_descriptors
GROUP BY [database_id];
GO
-  DBCC LOGINFO [reduce the number of virtual log files if there are more than 50]
-  The data and log files should not grow frequently; the initial data and log files should have enough space, with auto-growth set to at least 1024 MB (1 GB).
-  Do NOT shrink your database or the data file at all; this results in fragmentation of your data.
(I am short of disk space and have to reclaim space quickly at any cost - is there any workaround?)
Yes, here it is:
Even if you have to shrink the DB for any reason to reclaim space to disk, ensure you ONLY reclaim unallocated space with a shrinkfile. [use sp_spaceused]
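A sketch of that safer approach: check how much unallocated space there is first, then release only the free space at the end of the file without moving any pages (the logical file name is a placeholder):
USE MyDB
GO
EXEC sp_spaceused;  -- note the "unallocated space" column
DBCC SHRINKFILE (N'MyDB_data', TRUNCATEONLY);  -- releases unused space only, no page movement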
-  Check if you are doing recompilation during weekly maintenance [sp_recompile]
-  Check database options like AUTO_UPDATE_STATISTICS and auto shrink etc.
STAGE 3….. Finally, these are the options to cover everything.
- SQL Profiler [select events and filter database, host etc.]
-  Tempdb [add new data files based on the number of CPUs; see the sketch after this list]
- Move Heavily used indexes to different filegroups.
- Check for fill factor settings of your indexes.
- Check for MaxDOP setting [sp_configure]
- Virtual Memory / Paging configuration
- VAS (Virtual Address Space), /3GB and /PAE switches and AWE configuration [depends upon SQL and OS edition]
- Lock Pages in Memory [check if you really need to enable it]
- DBCC Checkdb [for integrity check in the database]
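For the tempdb item above, each extra data file is a simple ALTER DATABASE (path and sizes are placeholders; a common rule of thumb is one equally-sized data file per CPU core, up to eight):
ALTER DATABASE tempdb
ADD FILE (NAME = N'tempdev2',
          FILENAME = N'T:\TempDB\tempdev2.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);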
Note: I am always looking for valuable feedback and suggestions to improve these three stages. Let me know if you have anything to add or modify here.
To ensure SQL Server is performing well, there are certain areas we should give priority attention to after analysing the performance bottlenecks.
These are:
- Basic Performance Investigation
- General Concepts
- Fill Factor settings
- Index Fragmentation
- Max Degree of Parallelism
- Filegroups and Data Files
- Tempdb configuration.
- Blocking and Deadlocks.
- Finding Missing Indexes, Better to use Covering Indexes.
Apart from this, I have also tried to cover the important topics below:
- Basic Commands (DBCC & DMVs)
- Estimating Size of a Table
Basic Commands (SPROC, DBCC & DMVs)
Here are few SPs, Commands/views a SQL DBA must remember as they are used on day to day activities.
Important System Stored Procedures
- SP_attach_db & SP_detach_db
- SP_addserver & SP_dropserver
- SP_Configure
- SP_Change_users_login
- SP_Changedbowner
- SP_clean_db_free_space
- SP_dbcmptlevel
- SP_databases
- SP_dboption
- SP_grantlogin & SP_grantdbaccess
- SP_Help & SP_Helptext
- SP_Helpdb
- SP_Indexes
- SP_Logins
- SP_Lock
- SP_Monitor
- SP_Renamedb
- SP_recompile
- SP_Statistics
- SP_Stored_procedures
- SP_Spaceused
- SP_Updatestats
- SP_Who2
- XP_fixeddrives
- XP_logininfo
DBCC (Database Consistency Checks)
- DBCC SQLperf(logspace)
- DBCC CheckDB
- DBCC CheckDB with Physical_only
- DBCC Shrinkfile
- DBCC Loginfo
- DBCC Showcontig
- DBCC DBREINDEX
- DBCC Indexdefrag
- DBCC UPDATEUSAGE
Important DMVs (Dynamic Management Views)
- Sys.dm_db_index_usage_stats
- Sys.dm_db_index_operational_stats
- Sys.dm_db_index_physical_stats
- Sys.dm_db_missing_index_columns
- Sys.dm_db_missing_index_details
- Sys.dm_db_missing_index_group_stats
- Sys.dm_db_missing_index_groups
- Sys.dm_os_performance_counters
- Sys.dm_exec_requests
Thanks

Migrating SQL Server Databases – The DBA’s Checklist



It is a fact of life: SQL Server databases change homes. They move from one instance to another, from one version to the next, from old servers to new ones. They move around as an organisation's data grows, applications are enhanced or new versions of the database software are released. If nothing else, servers become old and unreliable and databases eventually need to find a new home. Consider the following scenarios:
1.     A new  database application is rolled out in a production server from the development or test environment
2.     A copy of the production database needs to be installed in a test server for troubleshooting purposes
3.     A copy of the development database is regularly refreshed in a test server during the system development life cycle
4.     A SQL Server is upgraded to a newer version. This can be an in-place upgrade or a side-by-side migration
5.     One or more databases need to be moved between different instances as part of a consolidation strategy. The instances can be running the same or different version of SQL Server
6.     A database has to be restored from a backup file provided by a third party application vendor
7.     A backup of the database is restored in the same or different instance for disaster recovery
8.     A database needs to be migrated within the same instance:
a.     Files are moved from direct attached storage to storage area network
b.    The same database is copied under a different name for another application
Migrating SQL Server database applications is a complex topic in itself. There are a number of components that can be involved: jobs, DTS or SSIS packages, logins or linked servers are only few pieces of the puzzle. However, in this article we will focus only on the central part of migration: the installation of the database itself.
Unless it is an in-place upgrade, typically the database is taken from a source server and installed in a destination instance.  Most of the time, a full backup file is used for the rollout. The backup file is either provided to the DBA or the DBA takes the backup and restores it in the target server. Sometimes the database is detached from the source and the files are copied to and attached in the destination.
Regardless of the method of copying, moving, refreshing, restoring or upgrading the physical database, there are a number of steps the DBA should follow before and after it has been installed in the destination. It is these post database installation steps we are going to discuss below. Some of these steps apply in almost every scenario described above while some will depend on the type of objects contained within the database.  Also, the principles hold regardless of the number of databases involved.
Step 1:  Make a copy of data and log files when attaching and detaching
When detaching and attaching databases, ensure you have made copies of the data and log files if the destination is running a newer version of SQL Server. This is because once attached to a newer version, the database cannot be detached and attached back to an older version. Trying to do so will give you a message like the following:
Server: Msg 602, Level 21, State 50, Line 1
Could not find row in sysindexes for database ID 6, object ID 1, index ID 1. Run DBCC CHECKTABLE on sysindexes.
Connection Broken
If you try to backup the attached database and restore it in the source, it will still fail.
Similarly, if you are restoring the database in a newer version, it cannot be backed up or detached and put back in an older version of SQL. Unlike detach and attach method though, you do not lose the backup file or the original database here.
When detaching and attaching a database, it is important you keep all the log files available along with the data files. It is possible to attach a database without a log file and SQL Server can be instructed to create a new log file, however this does not work if the database was detached when the primary file group was read-only. You will need all the log files in such cases.

Step 2: Change database compatibility level
Once the database has been restored or attached to a newer version of SQL Server, change the database compatibility level to reflect the newer version unless there is a compelling reason not to do so. When attaching or restoring from a previous version of SQL, the database retains the older version’s compatibility level.  The only time you would want to keep a database with an older compatibility level is when the code within your database is no longer supported by SQL Server. For example, outer joins with *= or the =* operators were still possible in SQL 2000 (with a warning message), but not in SQL 2005 anymore. If your stored procedures or triggers are using this form of join, you would want to keep the database with an older compatibility level.  For a list of compatibility issues between older and newer versions of SQL Server databases, refer to the Books Online under the sp_dbcmptlevel topic.
Application developers and architects can help you in deciding whether you should change the compatibility level or not. You can always change the compatibility mode from the newest to an older version if necessary. To change the compatibility level, you can either use the database’s property from the SQL Server Management Studio or use the sp_dbcmptlevel stored procedure.
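For example, either of the following sets a hypothetical database to a new compatibility level (the ALTER DATABASE form is available from SQL Server 2008 onwards):
-- SQL Server 2005 style
EXEC sp_dbcmptlevel 'MyDB', 90;
-- SQL Server 2008 onwards
ALTER DATABASE MyDB SET COMPATIBILITY_LEVEL = 100;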
[Screenshot: changing the compatibility level from the database properties dialog in SQL Server Management Studio]
Bear in mind that you cannot run the built-in reports for databases from SQL Server Management Studio if you keep the database with an older compatibility level. The following figure shows the error message I received when trying to run the “Disk Usage by Top Tables” report against a database. This database was hosted in a SQL Server 2005 system and still had a compatibility mode 80 (SQL 2000).
Step 3: Run DBCC UPDATEUSAGE
Run the DBCC UPDATEUSAGE command against the migrated database when upgrading to a newer version of SQL Server. This is particularly important if you are migrating from a pre-SQL Server 2005 version. Up until SQL 2005, the table or index row counts and the page counts for data, leaf, and reserved pages could become out of sync over time in the database system tables. The DBCC UPDATEUSAGE command corrects these inaccuracies and ensures the counts are updated.
To run the DBCC UPDATEUSAGE command against your target database, execute the following:
DBCC UPDATEUSAGE(database_name) WITH COUNT_ROWS

If there are no inaccuracies found, the command does not return any data. If there are inaccuracies found and DBCC UPDATEUSAGE fixes them, a detailed report is shown in the output.
You should run the DBCC UPDATEUSAGE command before you run DBCC CHECKDB (discussed next). From SQL Server 2005, the DBCC CHECKDB command has been modified to trap errors related to incorrect row or page counts. However, if the counts are wrong in the source database and you run DBCC CHECKDB against your newly restored/attached database, you will get a warning message like the following:
Msg 2508, Level 16, State 3, Line 1
The In-row data RSVD page count for object “table_name“, index ID 0, partition ID 58037252456448, alloc unit ID 58037252456448 (type In-row data) is incorrect. Run DBCC UPDATEUSAGE.

Step 4: Run DBCC CHECKDB
DBCC CHECKDB is a commonly used command that can check the allocation, structural and logical integrity of a database and its objects. Although it can be used to fix minor issues with the built-in REPAIR options, they should not be used the first time you are running it against your migrated database. This is because you want to know if the database you just installed is structurally okay. You would execute a command like the following:
DBCC CHECKDB(database_name) WITH ALL_ERRORMSGS
Starting from SQL Server 2005, DBCC CHECKDB has another option: DATA_PURITY.  This option can detect field values that are invalid or out-of-range. An example of such an anomaly would be a datetime field with values outside the acceptable range. For databases upgraded from pre-SQL Server 2005 versions, DBCC CHECKDB does not automatically check column values until the command has been run with the DATA_PURITY option error free. After that, you do not need to specify this option as DBCC CHECKDB will automatically check for entity integrity in the field values.
If everything is okay, the command will run successfully and you will see something like the following at the end of the output:
CHECKDB found 0 allocation errors and 0 consistency errors in database ‘database_name‘.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
Step 5: Update statistics
[Screenshot: the Auto Create Statistics and Auto Update Statistics options in the database properties dialog]
The AUTO_CREATE_STATISTICS option ensures SQL Server automatically generates any missing statistics during the query optimisation phase. As statistics can become out-of-date and stale over time, setting the AUTO_UPDATE_STATISTICS option to ON ensures the query optimizer will automatically update any out-of-date stats during optimisation.
I would usually keep the “Auto Update Statistics Asynchronously” option to its default value of “false”. This option – introduced in SQL Server 2005 and also present in SQL Server 2008 – controls how SQL Server will behave if it finds any out-of-date statistics during the optimisation phase. Setting this option to “true” will let the query run with the old statistics. It will then update the stats outside the query.  This ensures subsequent runs of the query will use the new statistics. When this property is set to false, query optimiser will wait till new statistics are generated.
It is always a good idea to update the statistics of the database that you have just installed or migrated. To do this, run the following command against the target database:
sp_updatestats
The sp_updatestats system stored procedure runs the UPDATE STATISTICS command against every user and system table in the database.  However, a word of caution: running the sp_updatestats against a database with a compatibility level below 90 (SQL Server 2005) will reset the automatic UPDATE STATISTICS settings for every index and statistics of every table in the database. You may therefore want to change the compatibility mode before you run the command.
Another thing you should remember to do is ensure the new database has its AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS properties set to ON. You can do so using the ALTER DATABASE command or from SSMS.
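For example, with a hypothetical database name:
ALTER DATABASE MyDB SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE MyDB SET AUTO_UPDATE_STATISTICS ON;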
Step 6: Set database options
You may have to change the state of a database after it has been restored. If the database was changed to single-user or read-only mode before backup, the restored copy will also retain these settings. This may not be an issue when you are manually restoring from Enterprise Manager or the Management Studio since you can change the properties. However, this is something to be mindful of if the restore process is invoked by an automated job or script and the database needs to be written to immediately after restore. You may want to check the database’s status programmatically in such cases.
Another important option you may want to set for the newly restored / attached database is PAGE_VERIFY. This option specifies how you want SQL Server to ensure the physical integrity of the data. It is a new option from SQL Server 2005 and can have three values: CHECKSUM (default for SQL Server 2005 and later databases), TORN_PAGE_DETECTION (default when restoring a pre-SQL Server 2005 database) or NONE.
Torn page detection was itself an option for SQL Server 2000 databases. From SQL Server 2005, when PAGE_VERIFY is set to CHECKSUM, the database engine calculates the checksum for a page’s contents and writes it to the page header before storing it in disk. When the page is read from the disk, the checksum is computed again and compared with the checksum stored in the header.
Torn page detection works much like the same way in that it stores a bit in the page header for every 512 byte sector. When data is read from the page, the torn page bits stored in the header is compared with the respective sector contents.
When PAGE_VERIFY is set to NONE, SQL Server does not perform any checking, even if torn page data or checksums are present in the page header.  This may not be something you would want to set unless there is a very specific reason.
Microsoft suggests using the CHECKSUM page verify option as it offers more protection.
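Checking and changing the setting is straightforward (database name is a placeholder):
SELECT name, page_verify_option_desc FROM sys.databases WHERE name = 'MyDB';
ALTER DATABASE MyDB SET PAGE_VERIFY CHECKSUM;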

Step 7: Map database users to logins
A common database migration issue is related to user access. Windows and SQL Server native logins that existed in the source instance and had access to the database may not be present in the destination. Even if the logins exist in the destination, the mapping between the user accounts and the logins will not be automatic.
You can use a special system stored procedure called sp_change_users_login to address these situations. The procedure needs to be run against the newly attached or restored database and can accept four parameters. Depending on what you want to do, you may use fewer than four.
The first parameter, @Action, can take three values. When you specify @Action = ‘Report’, the system will provide you with a list of database users which are not mapped to any login.
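For example, to list the orphaned users in the newly restored database (database name is a placeholder):
USE MyDB
GO
EXEC sp_change_users_login @Action = 'Report';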
If you want to map a database user to an existing SQL Server login, the value for @Action will be ‘Update_One’. In this case, you will only need to provide the database user name and the login it will map to. So if your newly restored database has a user account called “bob” and there is already a SQL Server login with the same name and you want to map the user to the login, you will execute a query like the following:
sp_change_users_login
    @Action = 'Update_One',
    @UserNamePattern = 'bob',
    @LoginName = 'bob'
If the login does not exist, you can instruct SQL Server to create the login with the same name. In this case you will need to provide a password for the login and the value of the @Action parameter will be ‘Auto_Fix’. If the login already exists, it will be automatically mapped to the user account.
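A sketch of the Auto_Fix call ('bob' and the password are placeholders; note that @LoginName must be NULL when using Auto_Fix):
EXEC sp_change_users_login
    @Action = 'Auto_Fix',
    @UserNamePattern = 'bob',
    @LoginName = NULL,
    @Password = 'StrongP@ssw0rd1';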
Unfortunately sp_change_users_login system stored procedure cannot be used to map database users to trusted logins (Windows accounts) in SQL Server. You will need to follow a manual process to re-map the database user accounts.
Step 8: Recovery model considerations and backup requirements
If the database you have migrated or installed can be classed as “production”, ensure it is included in your backup plan.
One of things the recovery model of a database dictates is the type of backups that can be performed against it. For example, transaction log backups cannot be done against a database set to simple recovery mode. The restored or attached database will retain its original recovery mode, so you will need to check it once the database has been migrated.
If your server has a maintenance plan set up that backs up the transaction logs of "all user databases", the log backup step will fail if the new database is in simple recovery mode. To get around this, you will either need to modify the maintenance plan to exclude the new database or change the database's recovery model to full.
Even if the restored database is in full recovery mode, you will need to take a full backup of it if transaction log backups are going to be taken later. Unless the new database has been fully backed up once, log backups will not have a base to start from and will fail. In such cases you will have a message like the following:
Msg 4214, Level 16, State 1, Line 1
BACKUP LOG cannot be performed because there is no current database backup.
Msg 3013, Level 16, State 1, Line 1
BACKUP LOG is terminating abnormally.
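A sketch of the post-migration check: confirm the recovery model, switch to full if required, and take the base full backup that future log backups will depend on (names and paths are placeholders):
SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'MyDB';
ALTER DATABASE MyDB SET RECOVERY FULL;
BACKUP DATABASE MyDB TO DISK = N'D:\Backups\MyDB_Base.bak' WITH INIT;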

Step 9: Changing database code and associated configurations
This step will probably be required if your database is being migrated from an older version of SQL to a newer one. The DBA will work with developers and system architects to identify commands and keywords that are no longer supported, or those that can be replaced with newer features. Unlike most of the steps described above, this is not a simple, one-off procedure but can comprise a series of iterations.
For example, if the database is migrated from SQL Server 2000 to 2005 or 2008, code that uses xp_sendmail may need to be changed to use the newer database mail feature with the sp_send_dbmail stored procedure.  Likewise, isql and osql commands in stored procedures should be changed to make use of the sqlcmd utility.
Note that some of these features may require the DBA to make server or database level changes. For example, the Surface Area Configuration tool (or facet in SQL 2008) may need to be used to enable xp_cmdshell, CLR code, database mail or older style OPENROWSET queries.
Developers and DBAs will also need to look for linked server references within the code. New linked servers may need to be created and the code accordingly modified.
Not all required configuration changes may be evident from the code either. Database tables sometimes store application specific metadata such as connection strings or server names. DBAs and developers will need to modify this data to reflect any changes.
A database property called TRUSTWORTHY controls whether CLR assemblies marked with EXTERNAL_ACCESS or UNSAFE mode will be able to access resources outside the databases. This property also dictates whether functions and procedures that use impersonation via EXECUTE AS clause can access resources outside the database.
By default, SQL Server keeps the TRUSTWORTHY property of a database to OFF. Even if your source database has this property set to ON, it will automatically revert to OFF when you detach and reattach the database. This is to ensure that no malicious user can detach and attach a user database and add malicious code there. To allow the newly attached database’s code to access external resources, the TRUSTWORTHY property needs to be set explicitly:
ALTER DATABASE database_name SET TRUSTWORTHY ON
Also there are additional requirements for CLR assemblies to run successfully. The DBA will probably need to work closely with the developers in this area.
Step 10: Full-text catalogs and full-text indexing
This is one area of SQL Server where people do not seem to take notice unless something goes wrong. Full-text functionality is a specialised area in database application development and is not usually implemented in your everyday OLTP systems.
Nevertheless, if you are migrating a database that uses full-text indexing on one or more tables, you need to be aware of a few points.
First of all, SQL Server 2005 now allows full-text catalog files to be restored or attached along with the rest of the database. However, after migration, if you are unable to look at the properties of any full-text catalogs, you are probably better off dropping and recreating it. You may also get the following error messages along the way:
Msg 9954, Level 16, State 2, Line 1
The Full-Text Service (msftesql) is disabled. The system administrator must enable this service.
This basically means full text service is not running (disabled or stopped) in the destination instance. You will need to start it from the Configuration Manager.
Similarly, if you get the following message, you will also need to drop and recreate the catalog and populate it.
Msg 7624, Level 16, State 1, Line 1
Full-text catalog 'catalog_name' is in an unusable state. Drop and re-create this full-text catalog.
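A sketch of the drop/recreate sequence (catalog, table, column and key index names are all hypothetical; any full-text indexes in the catalog must be dropped before the catalog itself):
DROP FULLTEXT INDEX ON dbo.MyTable;
DROP FULLTEXT CATALOG MyCatalog;
CREATE FULLTEXT CATALOG MyCatalog;
CREATE FULLTEXT INDEX ON dbo.MyTable (MyColumn)
    KEY INDEX PK_MyTable ON MyCatalog;  -- population starts automatically by default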
A full population of full-text indexes can be a time and resource intensive operation. Obviously you will want to schedule it for low usage hours if the database is restored in an existing production server.
Also, bear in mind that any scheduled job that existed in the source server for populating the full text catalog (e.g. nightly process for incremental update) will need to be re-created in the destination.

Step 11: Database collation considerations
Another sticky area to consider during a migration is the collation setting. Ideally you would want to restore or attach the database in a SQL Server instance with the same collation. Although not used commonly, SQL Server allows you to change a database’s collation by using the ALTER DATABASE command:
ALTER DATABASE database_name COLLATE collation_name
You should not use this command without good reason, as it can get really dangerous. When you change the database collation, it does not change the collation of the existing user table columns. However the columns of every new table, every new UDT and subsequently created variables or parameters in code will use the new setting. The collation of every char, nchar, varchar, nvarchar, text or ntext field of the system tables will also be changed. Stored procedure and function parameters will be changed to the new collation and finally, every character-based system data type and user defined data type will also be affected.
And the change may not be successful either if there are dependent objects involved. You may get one or multiple messages like the following:
Cannot ALTER 'object_name' because it is being referenced by object 'dependent_object_name'.
That is why it is important to test and check for collation related issues. Collation also affects queries that use comparisons of character-based data.  If errors arise due to two sides of a comparison being in different collation orders, the COLLATE keyword can be used to cast one side to the same collation as the other.
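For example, if a join fails with a collation conflict, one side can be cast to the current database's collation (table and column names are hypothetical):
SELECT a.CustomerName
FROM dbo.TableA AS a
JOIN dbo.TableB AS b
  ON a.CustomerName = b.CustomerName COLLATE DATABASE_DEFAULT;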
Step 12: Service Broker considerations

Service broker is a new feature available from SQL Server 2005 that allows database applications to take advantage of asynchronous messaging between senders and recipients.  When restoring or attaching a broker enabled database, be mindful of two things: broker identifier and message delivery status.
Each SQL Server 2005 database has a unique “broker identifier” that sets it apart from any other database in the network. The broker identifier is a GUID and is assigned by SQL Server every time a new database is created. You can have a look at the broker identifier of the databases in an instance by executing the following command:
SELECT name, service_broker_guid FROM sys.databases
SQL Server assigns this identifier so that all messages for a service broker conversation are rightfully delivered to the correct database. If you think about it, this is required: if you want to talk to a person in a meeting, you would want to address him by name. And if more than one person around the table has the same name, you would want to ensure only the right person gets the message. So you will follow some method:  looking at the person and perhaps using a polite gesture of hand to mean you are referring to him. Likewise, SQL Server also wants to ensure messages know what their destination database is. Hence this unique way of identifying a database is used.
Apart from assigning the broker identifier, the database can be marked as “broker enabled”. When broker enabled, service broker ensures messages are delivered from the database and oncoming messages can also reach the services defined within the database.
If you want to see which databases are broker enabled in the current instance, you can execute a query like the following:
SELECT name, is_broker_enabled FROM sys.databases
However, what happens when you are restoring or attaching a database in an instance that has already another database with the same service broker GUID? This is a valid situation – you can be restoring the same database in the same or different instance multiple times under different names. How does SQL Server know what database oncoming messages will be routed to?
The answer is SQL Server will disable message delivery (i.e. disable broker) in such cases. When you restore or attach a database, the service broker identifier remains intact.  The message delivery status is also preserved if there are no other databases in the instance with the same identifier. However, if you restore or attach the same database again, the message delivery option will be turned off in the newly attached / restored database. When message delivery option is turned off, no messages from the database will be going out and service broker will not recognise the services defined within the database as eligible for receiving messages. This is a bit like being suspended out of a game indefinitely.
As an example, I restored three copies of the AdventureWorks database in an instance. I then executed the following command:
SELECT name, is_broker_enabled, service_broker_guid
FROM  sys.databases
WHERE  name LIKE 'Ad%'
The result looks like the following:
[Screenshot: query output showing the three restored AdventureWorks copies with the same service_broker_guid, only one with is_broker_enabled = 1]
As you can see, all three databases have the same service broker GUID, but only one database has the broker enabled.
You can enable a database’s message delivery status by using the following command:
ALTER DATABASE database_name SET ENABLE_BROKER
However, when you try to enable service broker for a database in an instance where another database already exists with the same service broker GUID and is broker-enabled, you will get a message like the following:
Msg 9772, Level 16, State 1, Line 1
The Service Broker in database “database_name” cannot be enabled because there is already an enabled Service Broker with the same ID.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.
This is why you need to be careful when restoring or attaching a service broker database application so that only the correct database can be made broker-enabled.
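If the restored copy is meant to live on as an independent application rather than as the original database in a new home, an alternative is to assign it a brand new broker identifier, which also enables the broker:
ALTER DATABASE database_name SET NEW_BROKER
-- Note: this assigns a new service_broker_guid and ends any existing conversations in the database.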
Step 13: Encryption: enable the database master key
SQL Server data can now be encrypted. SQL Server 2005 first introduced the capability to encrypt individual columns within tables. SQL Server 2008 enhanced that feature to the database level. With Transparent Data Encryption (TDE), the entire database can now be encrypted.
We will not discuss how to migrate SQL Server 2008 encrypted databases, but touch upon what needs to be done if you have a SQL Server 2005 database at hand with encrypted columns in it.
Encryption works in a hierarchical manner in SQL Server. Each level of hierarchy represents a method of encrypting the level below it. The mechanism of encryption is implemented through a combination of keys and certificates. To decrypt a lower level, you will need to decrypt the higher level first.
At the very top of the encryption tree is the Service Master Key. Service master key is generated by the instance. It is used to encrypt the Database Master Key of the database where encryption is to take place. You create a database master key in each database where columns are to be encrypted. A copy of the database master key is saved inside the database and also in the master database.
Database master keys can be used to encrypt Certificates or Asymmetric Keys within the database. Certificates and asymmetric keys use public key – private key mechanisms for encryption and decryption. Certificates can be used to encrypt both data and symmetric keys. Like certificates, asymmetric keys can encrypt and decrypt symmetric keys and data.
At the bottom of the hierarchy are Symmetric Keys. Symmetric keys can be used to encrypt the data in a table column.
So in essence, a table's column can be encrypted using a symmetric key, which in turn can be encrypted either by an asymmetric key or a certificate, which can in turn be encrypted by the database master key. The database master key itself is encrypted by the service master key.
However, you do not have to use the whole hierarchy for encryption.  For example, a certificate alone can be used to encrypt the column’s data.
When restoring or attaching a database with encrypted columns in a different instance, the main thing to be aware of is that you will need to OPEN the database master key before decryption can happen. Restoring to the same instance is not a problem.
In the following example, I encrypted the CardNumber column of the Sales.CreditCard table in the AdventureWorks database. I had created a certificate for the encryption purpose and encrypted the certificate with the database master key.
USE AdventureWorks
GO

ALTER TABLE Sales.CreditCard ADD CardNumberEncrypted varbinary(500) NULL
GO

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'TestMasterKey'
GO

CREATE CERTIFICATE TestCertificate WITH SUBJECT = 'Test Certificate', START_DATE = '08/26/2009'
GO


UPDATE Sales.CreditCard SET CardNumberEncrypted = EncryptByCert(Cert_ID('TestCertificate'), CardNumber)
GO


SELECT TOP 100 CardNumber, CONVERT(nvarchar(25), DecryptByCert(Cert_ID('TestCertificate'), CardNumberEncrypted)) AS DecryptedCardNumber
FROM Sales.CreditCard
GO

The result of the last SELECT query is shown below:
[Screenshot: CardNumber alongside the decrypted DecryptedCardNumber column]
Next, I restored the database in a separate SQL Server instance. When I tried to run the same SELECT query in the newly restored database, the decryption did not work: the encrypted column was shown as NULL.
[Screenshot: the same query in the restored database returning NULL for DecryptedCardNumber]

To enable decryption, I had to first OPEN the database master key using the same password. The OPEN command works for the same session only: the decryption capability does not work for other sessions or when the current session closes. So I had to ALTER the database master key to associate it with the current instance’s service master key:
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'TestMasterKey'
GO

ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY
GO
And then the decryption works.
So this is something to keep in mind when you are migrating databases with encrypted columns. You may want to backup the database master key from the source instance before migration.
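Backing up the master key is a one-liner (the path and password are placeholders); keep the file and its password somewhere safe:
USE AdventureWorks
GO
BACKUP MASTER KEY TO FILE = 'D:\Keys\AdventureWorks_MasterKey.key'
    ENCRYPTION BY PASSWORD = 'AnotherStrongP@ssw0rd';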

Step 14: Consider high-availability requirements
High availability for SQL Server databases can come in a few different forms: snapshot, log shipping, mirroring, or replication being the main ones apart from clustering.
You need to be aware of any existing high availability mechanisms for the source database before migrating it. A restored database does not maintain its original log shipping or mirroring configurations. If the migrated database needs to be log shipped or mirrored to the same secondary site, you will need to disable log shipping or mirroring for the old database first. Once the migration is complete, you will need to re-configure log shipping or mirroring.
The same holds true if your source database is part of a replication setup. When migrated to a new instance, you will need to recreate and reconfigure replication. If you are doing an in-place upgrade to a newer version of SQL Server like 2005, the setup application will automatically invoke the sp_vupgrade_replication system procedure. This process upgrades the schemas and metadata needed by the replication infrastructure.
Unlike other high availability methods, migrating from or to a clustered instance does not require any post installation configuration on the database part. You can take a backup of the database from a clustered instance and restore it in a non-clustered instance or vice versa.
Conclusion: Some final thoughts
We have tried to list a number of areas that the DBA should be looking at during a database’s migration. To help the process, answers to a number of questions should also be sought:
1.     How many databases are involved in the migration?
Obviously migrating ten different databases will entail more effort and planning than migrating only one database.
2.     What is the nature of the database(s) being migrated?
This should help you decide the instance where the database should be migrated to. For example, you would not want to install a data-mart in a high traffic OLTP system. Similarly, you will not probably be bothering about backups and disaster recovery if your migration is part of a regular refresh process in a test environment.
3.     If the database is already in production, what timeframe will be available for the migration?
Backup/restore or detach/attach will take some time and you would want to minimise this time as much as possible. Also, you may want to know what services and applications are accessing the source database and how they need to be stopped. For example if you are migrating a web based application, you will probably want to talk with your system administrators about shutting down Apache or IIS. Users will need to be made aware of any system unavailability.
4.     Do you need to consider the size and growth characteristics of the database once it has been migrated?
If the database has been growing fast and it needs to be migrated because the existing hardware resources are insufficient for the performance required, you will probably be changing the database file size, growth increments etc. once it has been migrated. You will also probably be tuning CPU and memory in the new instance.
5.     Has the destination server enough hardware resources (disk, CPU, memory, network capacity) to serve the new, migrated database? Will the database be competing for resources with any existing database application?
Management may want you to install a new instance for the database in a server where one or more instances could already be running. You need to be aware (and make management aware) that additional instances will consume additional hardware resources.
6.     If the destination server is a separate physical system, does it have an optimal disk configuration?
If your old database server had separate physical disks for data, log, backup and tempdb files, you probably would not want to migrate it to a server where only two physical drives of C:\ and D:\ are present.
7.      What other “peripheral” components will be involved with the migration of the database?
As we mentioned at the very beginning, migrating a database application can be a complex task. Nevertheless, you will still have to go through the process of migrating jobs, logins, custom error messages, file shares, reporting services solutions, SSIS or DTS packages etc.
8.     What user accounts will be accessing the migrated database? Will there be any new user accounts? Will there be any fewer accounts?
When migrating a development database to a production server, you may need to account for end-users who were not involved during the testing phase and whose user accounts did not exist in the development database.
9.     Will the migrated database have any special backup or disaster recovery requirements?
Business requirements may dictate a database to be backed up more frequently than other databases in the same instance.
10.  Is there any provision for post migration testing? Have test cases been developed?
You would want to ensure everything has been migrated and all configurations have been checked before switching the applications to the new database.
A. Unlocking a login
To unlock a SQL Server login, execute the following statement, replacing **** with the desired account password.
ALTER LOGIN [Mary5] WITH PASSWORD = '****' UNLOCK;
GO
To unlock a login without changing the password, turn the check policy off and then on again.
ALTER LOGIN [Mary5] WITH CHECK_POLICY = OFF;
ALTER LOGIN [Mary5] WITH CHECK_POLICY = ON;
GO
How to restart the server from network/local
Go to the command prompt and use the command below:
shutdown /r /t 10 /f /m SERVERNAME

How to shrink T-log file
Sometimes it looks impossible to shrink the transaction log file. The following code always shrinks the transaction log file to the minimum size possible.
USE DatabaseName
GO
DBCC SHRINKFILE(<TransactionLogLogicalName>, 16)
Note: If you are not able to shrink the log file, use the commands below, but only if your database doesn't require point-in-time recovery. This breaks the log chain, and in future you will not be able to restore to a point in time. You are recommended to take a full backup right before using this query.
BACKUP LOG MyTestDB TO DISK = N'C:\MyTestDB_Adhoc_T_Log.trn';
BACKUP LOG <DatabaseName> WITH TRUNCATE_ONLY
DBCC SHRINKFILE(<TransactionLogLogicalName>, 16)
GO
How to create Database using Query Analyser
CREATE DATABASE [MyDB]
ON PRIMARY
(
NAME = N'MyDB_data',
FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\MyDB.mdf',
SIZE = 2048KB ,
FILEGROWTH = 1024KB
)
LOG ON
(
NAME = N'MyDB_log',
FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\MyDB_log.ldf',
SIZE = 1024KB ,
FILEGROWTH = 10%
)
GO
How to perform point-in-time-recovery?
-- Check recovery model
SELECT name,recovery_model_desc
FROM sys.databases
-- Change recovery model
USE [master]
GO
ALTER DATABASE [AdventureWorks] SET RECOVERY FULL WITH NO_WAIT
/* After a problem has occurred and you need point-in-time recovery:
1. Check that your database is in the full recovery model and a valid FULL backup has already been taken.
2. Create a transaction log backup using the graphical interface or by executing the following T-SQL.
3. Restore your full database backup WITH NORECOVERY, then restore the transaction log backup up to your desired time. In this example we will restore the transaction log up to 3:30 PM.
*/
-- Take full backup
BACKUP DATABASE [AdventureWorks] TO DISK = N'E:\EmergencyFullBackup.bak'
WITH INIT, STATS = 10
GO
-- Take TLog Backup
BACKUP LOG [AdventureWorks] TO DISK = N'E:\EmergencyLogBackup.trn' WITH
NOFORMAT, NOINIT, NAME = N'AdventureWorks-Transaction Log Backup', SKIP,
NOREWIND, NOUNLOAD, STATS = 10
GO
-- Restore FullBackup with NORECOVERY Option
RESTORE DATABASE [AdventureWorks]
FROM DISK = N'E:\EmergencyFullBackup.bak'
WITH NORECOVERY;
GO
-- Restore TLog Backup up to your desired time
RESTORE LOG [AdventureWorks]
FROM DISK = ‘E:\EmergencyLogBackup.trn’
WITH RECOVERY,
STOPAT = ‘Jan 03, 2011 03:30:00 PM’
GO
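If you are not sure what a backup file contains before restoring it, RESTORE HEADERONLY lists every backup set in the file with its type and start/finish times (using the example file names from above):
-- Inspect the backup sets in a file before restoring
RESTORE HEADERONLY FROM DISK = N'E:\EmergencyFullBackup.bak';
RESTORE HEADERONLY FROM DISK = N'E:\EmergencyLogBackup.trn';
GO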
Thanks,

Thursday 3 October 2013

Listed below are queries / stored procedure calls that can be used to get information on MS SQL Server objects such as tables, views, indexes, procedures, functions, triggers, schemas, and users.

Tables and Views

To get all tables, views, and system tables, the following SQL Server system stored procedure can be executed. 

exec sp_tables '%' 

To filter by database for tables only, for example master: 

exec sp_tables '%', '%', 'master', "'TABLE'" 

To filter by database and owner / schema for tables only, for example, master and dbo:

exec sp_tables '%', 'dbo', 'master', "'TABLE'" 

To return only views, replace "'TABLE'" with "'VIEW'". To return only system tables, replace "'TABLE'" with "'SYSTEM TABLE'".
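
Alternatively, the ANSI INFORMATION_SCHEMA views return the same information without the nested quoting on the table type parameter; replace 'BASE TABLE' with 'VIEW' to list views.

select TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE from INFORMATION_SCHEMA.TABLES where TABLE_TYPE = 'BASE TABLE' order by TABLE_SCHEMA, TABLE_NAME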

Schemas / Owners

Here are two examples for queries to get schema / owner information. 

select distinct SCHEMA_NAME from INFORMATION_SCHEMA.SCHEMATA order by SCHEMA_NAME 

select name from dbo.sysusers where islogin = 1 order by name

Procedures

This is a query to get all MS SQL Server procedures. 

exec sp_stored_procedures '%' 

The query can be filtered to return procedures for specific schemas / owners and databases by appending more information onto the procedure call, such as the following: 

exec sp_stored_procedures '%', 'dbo', 'master'
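
The same list is also available from INFORMATION_SCHEMA.ROUTINES, mirroring the functions query shown further below.

select ROUTINE_SCHEMA, ROUTINE_NAME from INFORMATION_SCHEMA.ROUTINES where upper(ROUTINE_TYPE) = 'PROCEDURE' order by ROUTINE_NAME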

Procedure Columns

This is a system stored procedure call to get the columns in a SQL Server procedure. 

exec sp_sproc_columns 'get_employee_names', 'dbo', 'sample'

Functions

This is a query to get all MS SQL Server functions. 

select ROUTINE_NAME from INFORMATION_SCHEMA.ROUTINES where upper(ROUTINE_TYPE) = 'FUNCTION'

Triggers

This is a query to get all MS SQL Server triggers. 

select * from sysobjects where type = 'TR' 

The query can be filtered to return triggers for a specific owner by adding a user_name() call to the WHERE clause of the query.

select * from sysobjects where type = 'TR' and user_name(sysobjects.uid) = 'dbo'
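
On SQL Server 2005 and later, the sys.triggers catalog view is an alternative that also shows whether a trigger is disabled (parent_id is the object ID of the table the trigger belongs to).

select name, object_name(parent_id) as table_name, is_disabled from sys.triggers order by name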

Indexes

This is a query to get MS SQL Server indexes for a particular table. In this example, the table used is employee. 

exec sp_helpindex 'employee'
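
For more detail than sp_helpindex returns, such as the index type, sys.indexes can be queried directly; a sketch for the same employee table (the NULL name check filters out the heap entry):

select name as index_name, type_desc, is_unique from sys.indexes where object_id = object_id('employee') and name is not null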

Friday 27 September 2013

Backup History Scripts

Database backups for all databases for the previous week:

SELECT
   CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS Server,
   msdb.dbo.backupset.database_name,
   msdb.dbo.backupset.backup_start_date,
   msdb.dbo.backupset.backup_finish_date,
   msdb.dbo.backupset.expiration_date,
   CASE msdb.dbo.backupset.type
       WHEN 'D' THEN 'Database'
       WHEN 'L' THEN 'Log'
   END AS backup_type,
   msdb.dbo.backupset.backup_size,
   msdb.dbo.backupmediafamily.logical_device_name,
   msdb.dbo.backupmediafamily.physical_device_name, 
   msdb.dbo.backupset.name AS backupset_name,
   msdb.dbo.backupset.description
FROM   msdb.dbo.backupmediafamily
   INNER JOIN msdb.dbo.backupset ON msdb.dbo.backupmediafamily.media_set_id = msdb.dbo.backupset.media_set_id
WHERE  (CONVERT(datetime, msdb.dbo.backupset.backup_start_date, 102) >= GETDATE() - 7)
ORDER BY
   msdb.dbo.backupset.database_name,
   msdb.dbo.backupset.backup_finish_date

---- Full backups taken in the last 24 hours
SELECT database_name, backup_start_date, type, *
FROM msdb.dbo.backupset
WHERE backup_start_date BETWEEN DATEADD(hh, -24, GETDATE()) AND GETDATE()
AND Type = 'D'
ORDER BY backup_set_id DESC
GO

Script 2:
--------------------------------------------------------------------------------------------
--Most Recent Database Backup for Each Database - Detailed
-------------------------------------------------------------------------------------------
SELECT
   A.[Server],
   B.database_name,
   A.last_db_backup_date,
   B.backup_start_date,
   B.expiration_date,
   B.backup_size,
   B.logical_device_name,
   B.physical_device_name, 
   B.backupset_name,
   B.description
FROM
   (
   SELECT 
       CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS Server,
       msdb.dbo.backupset.database_name,
       MAX(msdb.dbo.backupset.backup_finish_date) AS last_db_backup_date
   FROM    msdb.dbo.backupmediafamily
       INNER JOIN msdb.dbo.backupset ON msdb.dbo.backupmediafamily.media_set_id = msdb.dbo.backupset.media_set_id
   WHERE   msdb.dbo.backupset.type = 'D'
   GROUP BY
       msdb.dbo.backupset.database_name
   ) AS A
  
   LEFT JOIN

   (
   SELECT 
   CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS Server,
   msdb.dbo.backupset.database_name,
   msdb.dbo.backupset.backup_start_date,
   msdb.dbo.backupset.backup_finish_date,
   msdb.dbo.backupset.expiration_date,
   msdb.dbo.backupset.backup_size,
   msdb.dbo.backupmediafamily.logical_device_name,
   msdb.dbo.backupmediafamily.physical_device_name, 
   msdb.dbo.backupset.name AS backupset_name,
   msdb.dbo.backupset.description
FROM   msdb.dbo.backupmediafamily
   INNER JOIN msdb.dbo.backupset ON msdb.dbo.backupmediafamily.media_set_id = msdb.dbo.backupset.media_set_id
WHERE  msdb.dbo.backupset.type = 'D'
   ) AS B
   ON A.[server] = B.[server] AND A.[database_name] = B.[database_name] AND A.[last_db_backup_date] = B.[backup_finish_date]
ORDER BY
   A.database_name



Script 3:
------------------------------------------------------------------------------------------
--Databases Missing a Data (aka Full) Back-Up Within Past 24 Hours
-------------------------------------------------------------------------------------------
--Databases with data backup over 24 hours old
SELECT
   CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS Server,
   msdb.dbo.backupset.database_name,
   MAX(msdb.dbo.backupset.backup_finish_date) AS last_db_backup_date,
   DATEDIFF(hh, MAX(msdb.dbo.backupset.backup_finish_date), GETDATE()) AS [Backup Age (Hours)]
FROM    msdb.dbo.backupset
WHERE     msdb.dbo.backupset.type = 'D'
GROUP BY msdb.dbo.backupset.database_name
HAVING      (MAX(msdb.dbo.backupset.backup_finish_date) < DATEADD(hh, - 24, GETDATE()))

UNION

--Databases without any backup history
SELECT    
   CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS Server,
   master.dbo.sysdatabases.NAME AS database_name,
   NULL AS [Last Data Backup Date],
   9999 AS [Backup Age (Hours)]
FROM
   master.dbo.sysdatabases LEFT JOIN msdb.dbo.backupset
       ON master.dbo.sysdatabases.name  = msdb.dbo.backupset.database_name
WHERE msdb.dbo.backupset.database_name IS NULL AND master.dbo.sysdatabases.name <> 'tempdb'
ORDER BY
   database_name



--------------------------------------

Percent complete for running backup / restore operations

SELECT percent_complete, *
FROM sys.dm_exec_requests
WHERE command In ( 'RESTORE DATABASE', 'BACKUP DATABASE' )
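
sys.dm_exec_requests also exposes estimated_completion_time (in milliseconds), so the query can be extended to estimate when the operation will finish:

SELECT command,
       percent_complete,
       DATEADD(second, CONVERT(int, estimated_completion_time / 1000), GETDATE()) AS estimated_finish
FROM sys.dm_exec_requests
WHERE command IN ('RESTORE DATABASE', 'BACKUP DATABASE', 'BACKUP LOG')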

-------------------------------------------------------
Last log backup per database (the type = 'L' filter selects log backups; use 'D' for full backups)
SELECT   d.name,
         MAX(b.backup_finish_date) AS backup_finish_date
FROM     master.sys.sysdatabases d
         LEFT OUTER JOIN msdb..backupset b
         ON       b.database_name = d.name
         AND      b.type          = 'L'
GROUP BY d.name
ORDER BY backup_finish_date DESC