Wednesday 9 October 2013


Migrating SQL Server Databases – The DBA’s Checklist



It is a fact of life: SQL Server databases change homes. They move from one instance to another, from one version to the next, from old servers to new ones. They move around as an organisation’s data grows, applications are enhanced or new versions of the database software are released. If nothing else, servers become old and unreliable, and databases eventually need to find a new home. Consider the following scenarios:
1.     A new database application is rolled out to a production server from the development or test environment
2.     A copy of the production database needs to be installed in a test server for troubleshooting purposes
3.     A copy of the development database is regularly refreshed in a test server during the system development life cycle
4.     A SQL Server is upgraded to a newer version. This can be an in-place upgrade or a side-by-side migration
5.     One or more databases need to be moved between different instances as part of a consolidation strategy. The instances can be running the same or different version of SQL Server
6.     A database has to be restored from a backup file provided by a third party application vendor
7.     A backup of the database is restored in the same or different instance for disaster recovery
8.     A database needs to be migrated within the same instance:
a.     Files are moved from direct attached storage to storage area network
b.    The same database is copied under a different name for another application
Migrating SQL Server database applications is a complex topic in itself. There are a number of components that can be involved: jobs, DTS or SSIS packages, logins and linked servers are only a few pieces of the puzzle. However, in this article we will focus only on the central part of migration: the installation of the database itself.
Unless it is an in-place upgrade, typically the database is taken from a source server and installed in a destination instance. Most of the time, a full backup file is used for the rollout. The backup file is either provided to the DBA, or the DBA takes the backup and restores it in the target server. Sometimes the database is detached from the source, and the files are copied to the destination and attached there.
Regardless of the method of copying, moving, refreshing, restoring or upgrading the physical database, there are a number of steps the DBA should follow before and after it has been installed in the destination. It is these post-installation steps we are going to discuss below. Some of these steps apply in almost every scenario described above, while some will depend on the type of objects contained within the database. Also, the principles hold regardless of the number of databases involved.
Step 1:  Make a copy of data and log files when attaching and detaching
When detaching and attaching databases, ensure you have made copies of the data and log files if the destination is running a newer version of SQL Server. This is because once attached to a newer version, the database cannot be detached and attached back to an older version. Trying to do so will give you a message like the following:
Server: Msg 602, Level 21, State 50, Line 1
Could not find row in sysindexes for database ID 6, object ID 1, index ID 1. Run DBCC CHECKTABLE on sysindexes.
Connection Broken
If you try to back up the attached database and restore it in the source, it will still fail.
Similarly, if you restore the database in a newer version, it cannot be backed up or detached and put back in an older version of SQL Server. Unlike the detach and attach method, though, you do not lose the backup file or the original database here.
When detaching and attaching a database, it is important that you keep all the log files available along with the data files. It is possible to attach a database without a log file and instruct SQL Server to create a new one; however, this does not work if the database was detached while the primary filegroup was read-only. You will need all the log files in such cases.
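Where the log really is missing and the database was cleanly shut down, the log can be rebuilt during the attach. This is a minimal sketch only; the database name and file path are placeholders:
CREATE DATABASE database_name
    ON (FILENAME = 'D:\Data\database_name.mdf')
    FOR ATTACH_REBUILD_LOG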

Step 2: Change database compatibility level
Once the database has been restored or attached to a newer version of SQL Server, change the database compatibility level to reflect the newer version unless there is a compelling reason not to do so. When attaching or restoring from a previous version of SQL Server, the database retains the older version’s compatibility level. The only time you would want to keep an older compatibility level is when code within the database is no longer supported by the newer version. For example, outer joins with the *= or =* operators were still possible in SQL Server 2000 (with a warning message), but are no longer allowed in SQL Server 2005. If your stored procedures or triggers use this form of join, you would want to keep the database at an older compatibility level. For a list of compatibility issues between older and newer versions of SQL Server databases, refer to Books Online under the sp_dbcmptlevel topic.
Application developers and architects can help you decide whether you should change the compatibility level or not. You can always change the compatibility mode from the newest to an older version if necessary. To change the compatibility level, you can either use the database’s properties page in SQL Server Management Studio or use the sp_dbcmptlevel stored procedure.
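For example, to bring a database up to the SQL Server 2005 level (90) – the database name here is just a placeholder:
EXEC sp_dbcmptlevel 'database_name', 90
From SQL Server 2008 onwards the same change can be made with ALTER DATABASE database_name SET COMPATIBILITY_LEVEL = 100.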
Bear in mind that you cannot run the built-in reports for databases from SQL Server Management Studio if you keep the database at an older compatibility level. I received an error message when trying to run the "Disk Usage by Top Tables" report against a database that was hosted on a SQL Server 2005 instance but still had compatibility level 80 (SQL Server 2000).
Step 3: Run DBCC UPDATEUSAGE
Run the DBCC UPDATEUSAGE command against the migrated database when upgrading to a newer version of SQL Server. This is particularly important if you are migrating from a pre-SQL Server 2005 version. Up until SQL Server 2005, the table or index row counts and the page counts for data, leaf, and reserved pages could become out of sync over time in the database system tables. The DBCC UPDATEUSAGE command corrects these inaccuracies and ensures the counts are updated.
To run the DBCC UPDATEUSAGE command against your target database, execute the following:
DBCC UPDATEUSAGE(database_name) WITH COUNT_ROWS

If no inaccuracies are found, the command does not return any data. If inaccuracies are found and fixed by DBCC UPDATEUSAGE, a detailed report is shown in the output.
You should run the DBCC UPDATEUSAGE command before you run DBCC CHECKDB (discussed next). From SQL Server 2005, the DBCC CHECKDB command has been modified to trap errors related to incorrect row or page counts. However, if the counts are wrong in the source database and you run DBCC CHECKDB against your newly restored or attached database, you will get a warning message like the following:
Msg 2508, Level 16, State 3, Line 1
The In-row data RSVD page count for object "table_name", index ID 0, partition ID 58037252456448, alloc unit ID 58037252456448 (type In-row data) is incorrect. Run DBCC UPDATEUSAGE.

Step 4: Run DBCC CHECKDB
DBCC CHECKDB is a commonly used command that checks the allocation, structural and logical integrity of a database and its objects. Although it can fix minor issues with its built-in REPAIR options, these should not be used the first time you run it against your migrated database, because you first want to know whether the database you just installed is structurally sound. You would execute a command like the following:
DBCC CHECKDB(database_name) WITH ALL_ERRORMSGS
Starting from SQL Server 2005, DBCC CHECKDB has another option: DATA_PURITY. This option can detect field values that are invalid or out-of-range, such as a datetime field with values outside the acceptable range. For databases upgraded from pre-SQL Server 2005 versions, DBCC CHECKDB does not automatically check column values until the command has completed error-free with the DATA_PURITY option. After that, you do not need to specify the option, as DBCC CHECKDB will automatically check the integrity of the field values.
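Using the same placeholder database name as above, that run would look like this:
DBCC CHECKDB(database_name) WITH DATA_PURITY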
If everything is okay, the command will run successfully and you will see something like the following at the end of the output:
CHECKDB found 0 allocation errors and 0 consistency errors in database 'database_name'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
Step 5: Update statistics
The AUTO_CREATE_STATISTICS option ensures SQL Server automatically generates any missing statistics during the query optimisation phase. As statistics can become out of date over time, setting the AUTO_UPDATE_STATISTICS option to ON ensures the query optimiser will automatically update any stale statistics during optimisation.
I would usually keep the "Auto Update Statistics Asynchronously" option at its default value of "false". This option – introduced in SQL Server 2005 and also present in SQL Server 2008 – controls how SQL Server behaves when it finds out-of-date statistics during the optimisation phase. Setting it to "true" lets the query run with the old statistics and then updates them outside the query, so that subsequent runs of the query use the new statistics. When the property is set to "false", the query optimiser waits until the new statistics have been generated.
It is always a good idea to update the statistics of the database that you have just installed or migrated. To do this, run the following command against the target database:
sp_updatestats
The sp_updatestats system stored procedure runs the UPDATE STATISTICS command against every user and system table in the database. However, a word of caution: running sp_updatestats against a database with a compatibility level below 90 (SQL Server 2005) will reset the automatic UPDATE STATISTICS settings for every index and statistic of every table in the database. You may therefore want to change the compatibility level before you run the command.
Another thing to remember is to ensure the new database has the AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS properties discussed above set to ON. You can do so using the ALTER DATABASE command or from SSMS, as shown below.
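A minimal sketch, again using a placeholder database name:
ALTER DATABASE database_name SET AUTO_CREATE_STATISTICS ON
ALTER DATABASE database_name SET AUTO_UPDATE_STATISTICS ON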
Step 6: Set database options
You may have to change the state of a database after it has been restored. If the database was changed to single-user or read-only mode before the backup, the restored copy will retain these settings. This may not be an issue when you are manually restoring from Enterprise Manager or Management Studio, since you can change the properties afterwards. However, it is something to be mindful of if the restore process is invoked by an automated job or script and the database needs to be written to immediately after the restore. You may want to check the database’s status programmatically in such cases.
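A quick check and reset could look like the following; the database name is a placeholder and the ALTER statements should only be run if the settings actually need changing:
SELECT name, user_access_desc, is_read_only FROM sys.databases WHERE name = 'database_name'
ALTER DATABASE database_name SET MULTI_USER
ALTER DATABASE database_name SET READ_WRITE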
Another important option you may want to set for the newly restored or attached database is PAGE_VERIFY. This option specifies how you want SQL Server to ensure the physical integrity of the data. It is a new option from SQL Server 2005 and can have three values: CHECKSUM (default for SQL Server 2005 and later databases), TORN_PAGE_DETECTION (default when restoring a pre-SQL Server 2005 database) or NONE.
Torn page detection was itself an option for SQL Server 2000 databases. From SQL Server 2005, when PAGE_VERIFY is set to CHECKSUM, the database engine calculates a checksum for a page’s contents and writes it to the page header before writing the page to disk. When the page is read from disk, the checksum is computed again and compared with the checksum stored in the header.
Torn page detection works in much the same way, except that it stores a bit in the page header for every 512-byte sector. When data is read from the page, the torn page bits stored in the header are compared with the respective sector contents.
When PAGE_VERIFY is set to NONE, SQL Server does not perform any checking, even if torn page data or checksums are present in the page header. This is not something you would want to set unless there is a very specific reason.
Microsoft suggests using the CHECKSUM page verify option, as it offers the strongest protection.
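To check the current setting and switch to CHECKSUM (the database name is a placeholder):
SELECT name, page_verify_option_desc FROM sys.databases WHERE name = 'database_name'
ALTER DATABASE database_name SET PAGE_VERIFY CHECKSUM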

Step 7: Map database users to logins
A common database migration issue is related to user access. Windows and SQL Server native logins that existed in the source instance and had access to the database may not be present in the destination. Even if the logins exist in the destination, the mapping between the user accounts and the logins will not be automatic.
You can use a special system stored procedure called sp_change_users_login to address these situations. The procedure needs to be run against the newly attached or restored database and accepts four parameters, although depending on what you want to do you may use fewer than four.
The first parameter, @Action, can take three values. When you specify @Action = 'Report', the procedure returns a list of database users that are not mapped to any login.
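For example, to list the orphaned users in the newly migrated database:
EXEC sp_change_users_login @Action = 'Report'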
If you want to map a database user to an existing SQL Server login, the value for @Action will be 'Update_One'. In this case, you only need to provide the database user name and the login it will map to. So if your newly restored database has a user account called "bob", there is already a SQL Server login with the same name, and you want to map the user to the login, you would execute a query like the following:
EXEC sp_change_users_login
    @Action = 'Update_One',
    @UserNamePattern = 'bob',
    @LoginName = 'bob'
If the login does not exist, you can instruct SQL Server to create a login with the same name. In this case you will need to provide a password for the new login, and the value of the @Action parameter will be 'Auto_Fix'. If the login already exists, it will simply be mapped to the user account.
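A sketch of that call, assuming the same user "bob" and a placeholder password for the case where the login has to be created:
EXEC sp_change_users_login
    @Action = 'Auto_Fix',
    @UserNamePattern = 'bob',
    @Password = 'SomeStrongPassword'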
Unfortunately, the sp_change_users_login system stored procedure cannot be used to map database users to trusted logins (Windows accounts) in SQL Server. You will need to follow a manual process to re-map those database user accounts.
Step 8: Recovery model considerations and backup requirements
If the database you have migrated or installed can be classed as “production”, ensure it is included in your backup plan.
One of things the recovery model of a database dictates is the type of backups that can be performed against it. For example, transaction log backups cannot be done against a database set to simple recovery mode. The restored or attached database will retain its original recovery mode, so you will need to check it once the database has been migrated.
If your server has a maintenance plan that backs up the transaction logs of "all user databases", the log backup step will fail if the new database is in simple recovery mode. To get around this, you will need either to modify the maintenance plan to exclude the new database or to change the database’s recovery model to full.
Even if the restored database is in full recovery mode, you will need to take a full backup of it if transaction log backups are to be taken later. Unless the new database has been fully backed up at least once, log backups will have no base to start from and will fail with a message like the following (a way to avoid this is sketched after the error message):
Msg 4214, Level 16, State 1, Line 1
BACKUP LOG cannot be performed because there is no current database backup.
Msg 3013, Level 16, State 1, Line 1
BACKUP LOG is terminating abnormally.
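A minimal post-migration sequence that avoids the error above; the database name and backup path are placeholders:
SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'database_name'
ALTER DATABASE database_name SET RECOVERY FULL
BACKUP DATABASE database_name TO DISK = 'D:\Backups\database_name.bak'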

Step 9: Changing database code and associated configurations
This step will probably be required if your database is being migrated from an older version of SQL Server to a newer one. The DBA will be working with developers and system architects to identify commands and keywords that are no longer supported, or that can be replaced with newer features. Unlike most of the steps described above, this is not a simple, one-off procedure but can comprise a series of iterations.
For example, if the database is migrated from SQL Server 2000 to 2005 or 2008, code that uses xp_sendmail may need to be changed to use the newer database mail feature with the sp_send_dbmail stored procedure.  Likewise, isql and osql commands in stored procedures should be changed to make use of the sqlcmd utility.
Note that some of these features may require the DBA to make server or database level changes. For example, the Surface Area Configuration tool (or the corresponding facet in SQL Server 2008) may need to be used to enable xp_cmdshell, CLR code, database mail or older-style OPENROWSET queries.
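As an illustration, xp_cmdshell can also be enabled through sp_configure; only do this if the feature is genuinely required:
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'xp_cmdshell', 1
RECONFIGURE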
Developers and DBAs will also need to look for linked server references within the code. New linked servers may need to be created and the code accordingly modified.
Not all required configuration changes may be evident from the code either. Database tables sometimes store application-specific metadata such as connection strings or server names. DBAs and developers will need to modify this data to reflect any changes.
A database property called TRUSTWORTHY controls whether CLR assemblies marked with the EXTERNAL_ACCESS or UNSAFE permission sets can access resources outside the database. This property also dictates whether functions and procedures that use impersonation via the EXECUTE AS clause can access resources outside the database.
By default, SQL Server keeps the TRUSTWORTHY property of a database set to OFF. Even if your source database has this property set to ON, it will automatically revert to OFF when you detach and reattach the database. This ensures that no malicious user can detach and attach a user database and add malicious code to it. To allow the newly attached database’s code to access external resources, the TRUSTWORTHY property needs to be set explicitly:
ALTER DATABASE database_name SET TRUSTWORTHY ON
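You can confirm the change afterwards (the database name is a placeholder):
SELECT name, is_trustworthy_on FROM sys.databases WHERE name = 'database_name'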
Also there are additional requirements for CLR assemblies to run successfully. The DBA will probably need to work closely with the developers in this area.
Step 10: Full-text catalogs and full-text indexing
This is one area of SQL Server where people do not seem to take notice unless something goes wrong. Full-text functionality is a specialised area in database application development and is not usually implemented in your everyday OLTP systems.
Nevertheless, if you are migrating a database that uses full-text indexing on one or more tables, you need to be aware of a few points.
First of all, SQL Server 2005 allows full-text catalog files to be restored or attached along with the rest of the database. However, after migration, if you are unable to look at the properties of any full-text catalogs, you are probably better off dropping and recreating them. You may also get the following error messages along the way:
Msg 9954, Level 16, State 2, Line 1
The Full-Text Service (msftesql) is disabled. The system administrator must enable this service.
This basically means the full-text service is not running (disabled or stopped) in the destination instance. You will need to start it from the Configuration Manager.
Similarly, if you get the following message, you will also need to drop and recreate the catalog and populate it (one possible remedy is sketched after the error message).
Msg 7624, Level 16, State 1, Line 1
Full-text catalog 'catalog_name' is in an unusable state. Drop and re-create this full-text catalog.
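As one possible remedy, the catalog can simply be rebuilt and the full-text index repopulated; the catalog and table names below are placeholders:
ALTER FULLTEXT CATALOG catalog_name REBUILD
ALTER FULLTEXT INDEX ON table_name START FULL POPULATION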
A full population of full-text indexes can be a time and resource intensive operation. Obviously you will want to schedule it for low usage hours if the database is restored in an existing production server.
Also, bear in mind that any scheduled job that existed in the source server for populating the full text catalog (e.g. nightly process for incremental update) will need to be re-created in the destination.

Step 11: Database collation considerations
Another sticky area to consider during a migration is the collation setting. Ideally you would restore or attach the database in a SQL Server instance with the same collation. Although not commonly done, SQL Server allows you to change a database’s collation by using the ALTER DATABASE command:
ALTER DATABASE database_name COLLATE collation_name
You should not use this command without a good reason, as the consequences are far-reaching. Changing the database collation does not change the collation of existing user table columns. However, the columns of every new table, every new UDT and subsequently created variables or parameters in code will use the new setting. The collation of every char, nchar, varchar, nvarchar, text or ntext field of the system tables will also be changed. Stored procedure and function parameters will be changed to the new collation, and every character-based system data type and user-defined data type will also be affected.
And the change may not be successful either if there are dependent objects involved. You may get one or multiple messages like the following:
Cannot ALTER 'object_name' because it is being referenced by object 'dependent_object_name'.
That is why it is important to test and check for collation related issues. Collation also affects queries that use comparisons of character-based data.  If errors arise due to two sides of a comparison being in different collation orders, the COLLATE keyword can be used to cast one side to the same collation as the other.
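A hedged illustration, using two hypothetical tables whose character columns ended up with different collations:
SELECT a.customer_code
FROM dbo.imported_customers AS a
JOIN dbo.existing_customers AS b
    ON a.customer_code = b.customer_code COLLATE DATABASE_DEFAULT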
Step 12: Service Broker considerations

Service Broker is a feature introduced in SQL Server 2005 that allows database applications to take advantage of asynchronous messaging between senders and recipients. When restoring or attaching a broker-enabled database, be mindful of two things: the broker identifier and the message delivery status.
Each SQL Server 2005 database has a unique “broker identifier” that sets it apart from any other database in the network. The broker identifier is a GUID and is assigned by SQL Server every time a new database is created. You can have a look at the broker identifier of the databases in an instance by executing the following command:
SELECT name, service_broker_guid FROM sys.databases
SQL Server assigns this identifier so that all messages for a Service Broker conversation are delivered to the correct database. Think of talking to a person in a meeting: if more than one person around the table has the same name, you need some way of making sure only the right person gets the message – a glance or a gesture, perhaps. Likewise, SQL Server needs a unique way of identifying each destination database, and the broker identifier serves that purpose.
Apart from being assigned a broker identifier, the database can be marked as "broker enabled". When broker enabled, Service Broker ensures messages are delivered from the database and incoming messages can reach the services defined within the database.
If you want to see which databases are broker enabled in the current instance, you can execute a query like the following:
SELECT name, is_broker_enabled FROM sys.databases
However, what happens when you restore or attach a database in an instance that already has another database with the same Service Broker GUID? This is a valid situation – you can be restoring the same database in the same or a different instance multiple times under different names. How does SQL Server know to which database incoming messages should be routed?
The answer is that SQL Server disables message delivery (i.e. disables the broker) in such cases. When you restore or attach a database, the Service Broker identifier remains intact. The message delivery status is also preserved if there is no other database in the instance with the same identifier. However, if you restore or attach the same database again, the message delivery option will be turned off in the newly attached or restored copy. When message delivery is turned off, no messages from the database will go out, and Service Broker will not recognise the services defined within the database as eligible for receiving messages. This is a bit like being suspended from a game indefinitely.
As an example, I restored three copies of the AdventureWorks database in an instance. I then executed the following command:
SELECT name, is_broker_enabled, service_broker_guid
FROM  sys.databases
WHERE  name LIKE 'Ad%'
The result looks like the following:
[Figure: query output showing three copies of AdventureWorks with identical service_broker_guid values; only one has is_broker_enabled set to 1]
As you can see, all three databases have the same service broker GUID, but only one database has the broker enabled.
You can enable a database’s message delivery status by using the following command:
ALTER DATABASE database_name SET ENABLE_BROKER
However, when you try to enable service broker for a database in an instance where another database already exists with the same service broker GUID and is broker-enabled, you will get a message like the following:
Msg 9772, Level 16, State 1, Line 1
The Service Broker in database "database_name" cannot be enabled because there is already an enabled Service Broker with the same ID.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.
This is why you need to be careful when restoring or attaching a Service Broker database application, so that only the correct database is made broker-enabled.
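If a restored copy genuinely needs its own broker identity (for example, a second copy kept purely for testing), a new identifier can be generated instead; note that this ends any existing conversations in that database:
ALTER DATABASE database_name SET NEW_BROKER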
Step 13: Encryption: enable the database master key
SQL Server data can now be encrypted. SQL Server 2005 first introduced the capability to encrypt individual columns within tables. SQL Server 2008 enhanced that feature to the database level. With Transparent Data Encryption (TDE), the entire database can now be encrypted.
We will not discuss how to migrate SQL Server 2008 encrypted databases, but touch upon what needs to be done if you have a SQL Server 2005 database at hand with encrypted columns in it.
Encryption works in a hierarchical manner in SQL Server. Each level of hierarchy represents a method of encrypting the level below it. The mechanism of encryption is implemented through a combination of keys and certificates. To decrypt a lower level, you will need to decrypt the higher level first.
At the very top of the encryption tree is the Service Master Key. Service master key is generated by the instance. It is used to encrypt the Database Master Key of the database where encryption is to take place. You create a database master key in each database where columns are to be encrypted. A copy of the database master key is saved inside the database and also in the master database.
Database master keys can be used to encrypt Certificates or Asymmetric Keys within the database. Certificates and asymmetric keys use public key – private key mechanisms for encryption and decryption. Certificates can be used to encrypt both data and symmetric keys. Like certificates, asymmetric keys can encrypt and decrypt symmetric keys and data.
At the bottom of the hierarchy are Symmetric Keys. Symmetric keys can be used to encrypt the data in a table column.
So in essence, a table’s column can be encrypted using a symmetric key, which in turn can be encrypted by an asymmetric key or a certificate, which can in turn be encrypted by the database master key. The database master key, in turn, is encrypted by the service master key.
However, you do not have to use the whole hierarchy for encryption.  For example, a certificate alone can be used to encrypt the column’s data.
When restoring or attaching a database with encrypted columns in a different instance, the main thing to be aware of is that you will need to OPEN the database master key before decryption can happen. Restoring to the same instance is not a problem.
In the following example, I encrypted the CardNumber column of the Sales.CreditCard table in the AdventureWorks database. I had created a certificate for the encryption purpose and encrypted the certificate with the database master key.
USE AdventureWorks
GO

ALTER TABLE Sales.CreditCard ADD CardNumberEncrypted varbinary(500) NULL
GO

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'TestMasterKey'
GO

CREATE CERTIFICATE TestCertificate WITH SUBJECT = 'Test Certificate', START_DATE = '08/26/2009'
GO

UPDATE Sales.CreditCard SET CardNumberEncrypted = EncryptByCert(Cert_ID('TestCertificate'), CardNumber)
GO

SELECT TOP 100 CardNumber, CONVERT(nvarchar(25), DecryptByCert(Cert_ID('TestCertificate'), CardNumberEncrypted)) AS DecryptedCardNumber
FROM Sales.CreditCard
GO

The result of the last SELECT query is shown below:
[Figure: query output showing the CardNumber values alongside their correctly decrypted counterparts]
Next, I restored the database in a separate SQL Server instance. When I ran the same SELECT query in the newly restored database, the decryption did not work: the decrypted column was returned as NULL.
[Figure: in the restored database, the DecryptedCardNumber column is returned as NULL]

To enable decryption, I had to first OPEN the database master key using the same password. The OPEN command works for the same session only: the decryption capability does not work for other sessions or when the current session closes. So I had to ALTER the database master key to associate it with the current instance’s service master key:
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'TestMasterKey'
GO

ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY
GO
And then the decryption works.
So this is something to keep in mind when you are migrating databases with encrypted columns. You may want to back up the database master key from the source instance before migration.
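A sketch of backing up the key on the source and restoring it on the destination; the file path and passwords are placeholders:
BACKUP MASTER KEY TO FILE = 'D:\Keys\AdventureWorksMasterKey'
    ENCRYPTION BY PASSWORD = 'FilePassword'

RESTORE MASTER KEY FROM FILE = 'D:\Keys\AdventureWorksMasterKey'
    DECRYPTION BY PASSWORD = 'FilePassword'
    ENCRYPTION BY PASSWORD = 'TestMasterKey'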

Step 14: Consider high-availability requirements
High availability for SQL Server databases can come in a few different forms: snapshots, log shipping, mirroring or replication being the main ones, apart from clustering.
You need to be aware of any existing high availability mechanisms for the source database before migrating it. A restored database does not maintain its original log shipping or mirroring configurations. If the migrated database needs to be log shipped or mirrored to the same secondary site, you will need to disable log shipping or mirroring for the old database first. Once the migration is complete, you will need to re-configure log shipping or mirroring.
The same holds true if your source database is part of a replication setup. When it is migrated to a new instance, you will need to recreate and reconfigure replication. If you are doing an in-place upgrade to a newer version of SQL Server such as 2005, the setup application will automatically invoke the sp_vupgrade_replication system procedure. This process upgrades the schemas and metadata needed by the replication infrastructure.
Unlike the other high availability methods, migrating from or to a clustered instance does not require any post-installation configuration on the database side. You can take a backup of the database from a clustered instance and restore it in a non-clustered instance, or vice versa.
Conclusion: Some final thoughts
We have tried to list a number of areas that the DBA should be looking at during a database’s migration. To help the process, answers to a number of questions should also be sought:
1.     How many databases are involved in the migration?
Obviously migrating ten different databases will entail more effort and planning than migrating only one database.
2.     What is the nature of the database(s) being migrated?
This should help you decide which instance the database should be migrated to. For example, you would not want to install a data mart in a high-traffic OLTP system. Similarly, you probably will not be worrying about backups and disaster recovery if your migration is part of a regular refresh process in a test environment.
3.     If the database is already in production, what timeframe will be available for the migration?
Backup/restore or detach/attach will take some time and you will want to minimise this time as much as possible. Also, you may want to know what services and applications are accessing the source database and how they need to be stopped. For example, if you are migrating a web-based application, you will probably want to talk with your system administrators about shutting down Apache or IIS. Users will need to be made aware of any system unavailability.
4.     Do you need to consider the size and growth characteristics of the database once it has been migrated?
If the database has been growing fast and it needs to be migrated because the existing hardware resources are insufficient for the performance required, you will probably be changing the database file size, growth increments etc. once it has been migrated. You will also probably be tuning CPU and memory in the new instance.
5.     Has the destination server enough hardware resources (disk, CPU, memory, network capacity) to serve the new, migrated database? Will the database be competing for resources with any existing database application?
Management may want you to install a new instance for the database in a server where one or more instances could already be running. You need to be aware (and make management aware) that additional instances will consume additional hardware resources.
6.     If the destination server is a separate physical system, does it have an optimal disk configuration?
If your old database server had separate physical disks for data, log, backup and tempdb files, you probably would not want to migrate it to a server where only two physical drives of C:\ and D:\ are present.
7.      What other “peripheral” components will be involved with the migration of the database?
As we mentioned at the very beginning, migrating a database application can be a complex task. Nevertheless, you will still have to go through the process of migrating jobs, logins, custom error messages, file shares, reporting services solutions, SSIS or DTS packages etc.
8.     What user accounts will be accessing the migrated database? Will there be any new user accounts? Will there be any fewer accounts?
When migrating a development database to a production server, you may need to account for end-users who were not involved during the testing phase and whose user accounts did not exist in the development database.
9.     Will the migrated database have any special backup or disaster recovery requirements?
Business requirements may dictate that a database be backed up more frequently than other databases in the same instance.
10.  Is there any provision for post migration testing? Have test cases been developed?
You would want to ensure everything has been migrated and all configurations have been checked before switching the applications to the new database.
