What are “Dirty reads”?
This behavior occurs when a process reads uncommitted data. If one
process has changed data but not yet committed the change, another
process reading the data will read it in an inconsistent state. For
example, say that ClerkA has updated the old value of 25 widgets to 75,
but before he commits, a salesperson looks at the current value of 75
and commits to sending 60 widgets to a customer the following day. If
ClerkA then realizes that the widgets are defective and sends them back
to the manufacturer, the salesperson will have done a dirty read and
taken action based on uncommitted data.
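The widget scenario above can be sketched in two sessions. This is a minimal illustration assuming a hypothetical Inventory table with PartId and WidgetCount columns; it is not taken from the original text.

```sql
-- Session 1 (ClerkA): update the count, but do not commit yet
BEGIN TRAN;
UPDATE Inventory SET WidgetCount = 75 WHERE PartId = 1;
-- ... transaction still open ...

-- Session 2 (salesperson): dirty read under READ UNCOMMITTED
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT WidgetCount FROM Inventory WHERE PartId = 1;  -- sees 75

-- Session 1: the widgets were defective, so undo the change
ROLLBACK TRAN;  -- the value of 75 the salesperson acted on never existed
```

Under the default Read Committed level, Session 2's SELECT would instead be blocked until Session 1 committed or rolled back.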
What are “Unrepeatable reads”?
This behavior is also called inconsistent analysis. A read is
non-repeatable if a process might get different values when reading the
same resource in two separate reads within the same transaction. This
can happen when another process changes the data in between the reads
that the first process is doing. In the receiving room example, suppose
that a manager comes in to do a spot check of the current inventory. She
walks up to each clerk, asking for the total number of widgets received
that day, and adding the numbers on her calculator. When she's done,
she wants to double-check the result, so she goes back to the first
clerk. However, if ClerkA received more widgets between the manager's
first and second inquiries, the total will be different each time and
the reads are non-repeatable.
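The manager's double-check can be sketched as two reads in one transaction, again using the hypothetical Inventory table from earlier examples.

```sql
-- Session 1 (manager): two reads of the same data in one transaction
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN TRAN;
SELECT SUM(WidgetCount) FROM Inventory;   -- first total

-- Session 2 (ClerkA): receives more widgets between the two reads
UPDATE Inventory SET WidgetCount = WidgetCount + 50 WHERE PartId = 1;

-- Session 1: the identical query now returns a different total
SELECT SUM(WidgetCount) FROM Inventory;   -- second, larger total
COMMIT TRAN;
```

Under REPEATABLE READ, Session 1 would hold its shared locks until commit, so Session 2's UPDATE would be blocked and both reads would match.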
What are “Phantom rows”?
This behavior occurs when membership in a set changes. It can happen
only when a query with a predicate (such as WHERE count_of_widgets <
10) is involved. A phantom occurs if two SELECT operations using the same
predicate in the same transaction return a different number of rows. For
example, let's say that our manager is still doing spot checks of
inventory. This time, she goes around the receiving room and notes which
clerks have fewer than 10 widgets. After she completes the list, she
goes back around to offer advice to everyone with a low total. However,
if during her first walk-through a clerk with fewer than 10 widgets
returned from a break but was not spotted by the manager, that clerk
will not be on the manager's list even though he meets the criteria in
the predicate. This additional clerk (or row) is considered to be a
phantom.
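A phantom can be sketched with the same predicate issued twice; the row inserted in between is the phantom. Table and column names are the hypothetical ones used in the earlier examples.

```sql
-- Session 1 (manager): same predicate, two reads
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRAN;
SELECT ClerkId FROM Inventory WHERE WidgetCount < 10;  -- initial list

-- Session 2: a new row that satisfies the predicate appears
INSERT INTO Inventory (ClerkId, WidgetCount) VALUES (99, 5);

-- Session 1: the new row is a phantom; REPEATABLE READ only locks
-- rows that were read, not the range, so the INSERT was not blocked
SELECT ClerkId FROM Inventory WHERE WidgetCount < 10;  -- one extra row
COMMIT TRAN;
```

SERIALIZABLE would instead take a key-range lock covering the predicate and block the INSERT until Session 1 finished.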
What are “Lost Updates”?
This behavior occurs when two processes read the same data and both
manipulate the data, changing its value, and then both try to update the
original data to the new value. The second process might completely
overwrite the first update. For example, suppose that two clerks in a
receiving room are receiving parts and adding the new shipments to the
inventory database. ClerkA and ClerkB both receive shipments of widgets.
They both check the current inventory and see that 25 widgets are
currently in stock. ClerkA's shipment has 50 widgets, so he adds 50 to
25 and updates the current value to 75. ClerkB's shipment has 20
widgets, so she adds 20 to the value of 25 that she originally read and
updates the current value to 45, completely overriding the 50 new
widgets that ClerkA processed. ClerkA's update is lost.
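The clerks' mistake is a read-then-write pattern built from two separate statements. A sketch of the broken pattern and one simple fix, using the hypothetical Inventory table:

```sql
-- Broken pattern: read, compute in a variable, then write.
-- Both clerks read 25 before either writes, so the second write
-- overwrites the first.
DECLARE @current int;
SELECT @current = WidgetCount FROM Inventory WHERE PartId = 1; -- both see 25
UPDATE Inventory SET WidgetCount = @current + 50 WHERE PartId = 1; -- ClerkA: 75
-- ClerkB runs the same pattern with + 20 and writes 45, losing ClerkA's 50

-- Safe alternative: let one atomic UPDATE do the read and the write
UPDATE Inventory
SET WidgetCount = WidgetCount + 50   -- ClerkA's shipment
WHERE PartId = 1;
```

Requesting an update lock on the initial read (covered under lock hints later in this section) is another way to serialize the two clerks.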
What are different levels of granularity of locking resources?
SQL Server can lock user data resources (not system resources, which are
protected with latches) at the table, page, or row level. It also locks
index keys and ranges of index keys. Keep in mind that if the table has a
clustered index, the data rows are at the leaf level of the clustered
index and they are locked with key locks instead of row locks.
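You can observe the granularity of the locks a session holds by querying the sys.dm_tran_locks dynamic management view (available from SQL Server 2005 on):

```sql
-- Inspect the locks held or requested by the current session
SELECT resource_type,    -- OBJECT (table), PAGE, KEY, RID, ...
       request_mode,     -- S, X, U, IS, IX, ...
       request_status    -- GRANT, WAIT, CONVERT
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;
```

On a table with a clustered index, a locked row shows up here as a KEY resource rather than a RID, matching the point above.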
What are different types of Locks in SQL Server?
Lock Modes
SQL Server uses several locking modes, including shared locks,
exclusive locks, update locks, and intent locks, plus variations on
these. It is the mode of the lock that determines whether a concurrently
requested lock is compatible. We'll see a chart illustrating the lock
compatibility matrix at the end of this section.
Shared Locks
Shared locks are acquired automatically by SQL Server when data is read.
Shared locks can be held on a table, a page, an index key, or an
individual row. Many
processes can hold shared locks on the same data, but no process can
acquire an exclusive lock on data that has a shared lock on it (unless
the process requesting the exclusive lock is the same process as the one
holding the shared lock). Normally, shared locks are released as soon
as the data has been read, but you can change this by using query hints
or a different transaction isolation level.
Exclusive Locks
SQL Server automatically acquires exclusive locks on data when the data is
modified by an insert, update, or delete operation. Only one process at a
time can hold an exclusive lock on a particular data resource; in fact,
as you'll see when we discuss lock compatibility, no locks of any kind
can be acquired by a process if another process has the requested data
resource exclusively locked. Exclusive locks are held until the end of
the transaction. This means the changed data is normally not available
to any other process until the current transaction commits or rolls
back. Other processes can decide to read exclusively locked data by
using query hints.
Update Locks
Update locks are really not a separate kind of lock; they are a hybrid of
shared and exclusive locks. They are acquired when SQL Server executes a
data modification operation but first needs to search the table to find
the resource that will be modified. Using query hints, a process can
specifically request update locks, and in that case the update locks
prevent the conversion deadlock situation.
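A sketch of requesting an update lock explicitly with the UPDLOCK hint, using the hypothetical Inventory table; this is the pattern that avoids the conversion deadlock, where two sessions each hold a shared lock and each wait to convert it to an exclusive lock:

```sql
BEGIN TRAN;
-- Take a U lock on the row up front instead of an S lock
SELECT WidgetCount
FROM Inventory WITH (UPDLOCK)
WHERE PartId = 1;

-- Only one session at a time can hold the U lock, so its later
-- conversion to an X lock cannot deadlock with another reader
UPDATE Inventory SET WidgetCount = WidgetCount + 50 WHERE PartId = 1;
COMMIT TRAN;
```

Other sessions that only read can still acquire shared locks on the row while the U lock is held.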
Intent Locks
Intent locks are not really a separate mode of locking; they are a qualifier
to the modes previously discussed. In other words, you can have intent
shared locks, intent exclusive locks, and even intent update locks.
Because SQL Server can acquire locks at different levels of granularity,
a mechanism is needed to indicate that a component of a resource is
already locked. For example, if one process tries to lock a table, SQL
Server needs a way to determine whether a row (or a page) of that table
is already locked. Intent locks serve this purpose. We'll discuss them
in more detail when we look at lock granularity.
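The lock hierarchy that intent locks describe can be seen directly with sys.dm_tran_locks. A sketch, assuming the hypothetical Inventory table has a clustered index:

```sql
BEGIN TRAN;
UPDATE Inventory SET WidgetCount = 30 WHERE PartId = 1;

-- A single-row update typically produces an X lock on the key,
-- plus IX (intent exclusive) locks on the page and the table
SELECT resource_type, request_mode
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;
-- Typical rows: KEY/X, PAGE/IX, OBJECT/IX

ROLLBACK TRAN;
```

A process trying to lock the whole table only has to check for the OBJECT-level intent lock; it does not have to scan every page and row.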
What are different Isolation levels in SQL Server?
Uncommitted Read
In Uncommitted Read isolation, all the behaviors described previously
except lost updates are possible. Your queries can read uncommitted
data, and both non-repeatable reads and phantoms are possible.
Uncommitted Read is implemented by allowing your read operations to not
take any locks, and because SQL Server isn't trying to acquire locks, it
won't be blocked by conflicting locks acquired by other processes. Your
process will be able to read data that another process has modified but
not yet committed. Although this scenario isn't usually the ideal
option, with Uncommitted Read you can't get stuck waiting for a lock,
and your read operations don't acquire any locks that might affect other
processes that are reading or writing data.
Read Committed
SQL Server 2005 supports two varieties of Read Committed isolation,
which is the default isolation level. This isolation level can be either
optimistic or pessimistic, depending on the database setting
READ_COMMITTED_SNAPSHOT. Because the default for the database option is
off, the default for this isolation level is to use pessimistic
concurrency control. Unless indicated otherwise, when I refer to the
Read Committed isolation level, I will be referring to both variations
of this isolation level. I'll refer to the pessimistic implementation as
Read Committed (locking), and I'll refer to the optimistic
implementation as Read Committed (snapshot).
Repeatable Read
Repeatable Read is a pessimistic isolation level. It adds to the
properties of Read Committed by ensuring that if a transaction revisits
data or a query is reissued, the data will not have changed. In other
words, issuing the same query twice within a transaction will not pick
up any changes to data values made by another user's transaction.
However, the Repeatable Read isolation level does allow phantom rows to
appear.
Serializable
Serializable is also a pessimistic isolation level. The Serializable
isolation level adds to the properties of Repeatable Read by ensuring
that if a query is reissued, rows will not have been added in the
interim. In other words, phantoms will not appear if the same query is
issued twice within a transaction. Serializable is therefore the
strongest of the pessimistic isolation levels because it prevents all
the possible undesirable behaviors discussed earlier; that is, it does
not allow uncommitted reads, non-repeatable reads, or phantoms, and it
also guarantees that your transactions can be run serially.
Snapshot
Snapshot isolation is an optimistic isolation level. Like Read Committed
(snapshot), it allows processes to read older versions of committed data
if the current version is locked. The difference between Snapshot and
Read Committed (snapshot) has to do with how old the older versions have
to be. (We'll see the details in the section on row versioning.)
Although the behaviors prevented by Snapshot isolation are the same as
those prevented by Serializable, Snapshot is not truly a serializable
isolation level. With Snapshot isolation, it is possible to have two
transactions executing simultaneously that give us a result that is not
possible in any serial execution. Table 8-1 shows an example of two
simultaneous transactions. If they run in parallel, they will end up
switching the price of two books in the titles table in the pubs
database. However, there is no serial execution that would end up
switching the values, whether we run Transaction 1 and then Transaction
2, or run Transaction 2 and then Transaction 1. Either serial order ends
up with the two books having the same price.
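The levels above are selected per session, and the two optimistic variants are enabled per database. A brief sketch of the relevant statements (the pubs sample database is used here just as a stand-in):

```sql
-- Choose an isolation level for the current session
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- Switch the default Read Committed level to its optimistic,
-- row-versioning variant for the whole database
ALTER DATABASE pubs SET READ_COMMITTED_SNAPSHOT ON;

-- Snapshot isolation must be enabled before sessions may request it
ALTER DATABASE pubs SET ALLOW_SNAPSHOT_ISOLATION ON;
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
```

Note that the ALTER DATABASE options require that no other users are connected to the database at the time (or that you force the change with a termination clause).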
If you are using COM+, what “Isolation” level is set by default?
When a component uses COM+ automatic transactions, those transactions run at the Serializable isolation level by default; COM+ 1.5 and later allow a lower level to be configured on the component.
What are “Lock” hints?
Transact-SQL syntax allows you to specify locking hints for individual tables when
they are referenced in SELECT, INSERT, UPDATE, and DELETE statements.
The hints tell SQL Server the type of locking or row versioning to use
for a particular table in a particular query. Because these hints are
specified in a FROM clause, they are called table-level hints. Books
Online lists other table-level hints besides locking hints, but the vast
majority of them affect locking behavior. They should be used only when
you absolutely need finer control over locking at the object level than
what is provided by your session's isolation level. The SQL Server
locking hints can override the current transaction isolation level for
the session. In this section, I will mention only some of the locking
hints that you might need in order to obtain the desired concurrency
behavior.
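Two common examples of table-level locking hints in a FROM clause, again against the hypothetical Inventory table:

```sql
-- Read without taking shared locks (same behaviors as Uncommitted Read,
-- but scoped to this one table in this one query)
SELECT WidgetCount
FROM Inventory WITH (NOLOCK)
WHERE PartId = 1;

-- Take an update lock on the row and hold it to the end of the
-- transaction (HOLDLOCK gives Serializable-style range locking)
SELECT WidgetCount
FROM Inventory WITH (UPDLOCK, HOLDLOCK)
WHERE PartId = 1;
```

Because hints override the session's isolation level for that table only, they should be reserved for cases where the session-level setting is genuinely too coarse.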
What is a “Deadlock” ?
A deadlock occurs when two processes are waiting for a resource and
neither process can advance because the other process prevents it from
getting the resource. A true deadlock is a catch-22 in which, without
intervention, neither process can ever make progress. When a deadlock
occurs, SQL Server intervenes automatically. SQL Server 2005 produces
more detailed deadlock diagnostics than previous versions. In addition,
deadlocks can be detected for more types of resources. In this section,
I'll refer mainly to deadlocks acquired due to conflicting locks,
although deadlocks can also be detected on worker threads, memory,
parallel query resources, and multiple active result sets (MARS)
resources.
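When SQL Server chooses a session as the deadlock victim, that session's batch fails with error 1205, which an application can catch and retry. A sketch of a 2005-compatible retry loop, assuming the hypothetical Inventory table:

```sql
DECLARE @retries int;
SET @retries = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRAN;
        UPDATE Inventory SET WidgetCount = WidgetCount + 50 WHERE PartId = 1;
        COMMIT TRAN;
        BREAK;                             -- success: stop retrying
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRAN;
        IF ERROR_NUMBER() = 1205
            SET @retries = @retries - 1;   -- deadlock victim: try again
        ELSE
            BREAK;                         -- some other error: give up
    END CATCH
END
```

The retry usually succeeds because, by the time the victim reruns, the surviving process has finished and released its locks.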
What are the steps you can take to avoid “Deadlocks” ?
You should always try to have a standard protocol for the order in which
processes access tables. If you know that the processes might need to
update the row after reading it, they should initially request an update
lock, not a shared lock. If both processes request an update lock
rather than a shared lock, the process that is granted an update lock is
assured that the lock can later be promoted to an exclusive lock. The
other process requesting an update lock has to wait. The use of an
update lock serializes the requests for an exclusive lock. Other
processes needing only to read the data can still get their shared locks
and read. Because the holder of the update lock is guaranteed an
exclusive lock, the deadlock is avoided. In many systems, deadlocks
cannot be completely avoided, but if the application handles the
deadlock appropriately, the impact on any users involved, and on the
rest of the system, should be minimal. (Appropriate handling implies
that when an error 1205 occurs, the application resubmits the batch,
which will most likely succeed on a second try. Once one process is
killed, its transaction is aborted, and its locks are rolled back, the
other process involved in the deadlock can finish its work and release
its locks, so the environment will not be conducive to another
deadlock.) Although you might not be able to completely avoid deadlocks,
you can minimize their occurrence. For example, you should write your
applications so that your processes hold locks for a minimal amount of
time; in that way, other processes won't have to wait too long for locks
to be released. Although you don't usually invoke locking directly, you
can influence locking by keeping transactions as short as possible. For
example, don't ask for user input in the middle of a transaction.
Instead, get the input first and then quickly perform the transaction.
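The "standard access order" advice can be sketched as a pair of procedures that touch shared tables in the same sequence. The procedure and table names here are hypothetical:

```sql
-- Every procedure that touches both tables does so in the same
-- order (Parts first, then Inventory), so two concurrent calls
-- can never each hold a lock the other one needs.
CREATE PROCEDURE ReceiveShipment @PartId int, @Qty int
AS
BEGIN TRAN;
UPDATE Parts
SET LastReceived = GETDATE()
WHERE PartId = @PartId;

UPDATE Inventory
SET WidgetCount = WidgetCount + @Qty
WHERE PartId = @PartId;
COMMIT TRAN;
```

If a second procedure updated Inventory first and Parts second, two concurrent sessions could each grab one lock and wait forever for the other; keeping one canonical order removes that cycle.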