Tuesday 24 September 2013

Basic Interview questions


  • How many types of system privileges are there? Can we add or delete privileges?
       There are over 80 distinct system privileges. Each system privilege allows a user to perform a
        particular database operation or class of database operations.

    Examples include:
    1. ANALYZE ANY
    2. CREATE CLUSTER
    3. CREATE ANY CLUSTER
    4. ALTER ANY CLUSTER
    5. DROP ANY CLUSTER
    6. ALTER DATABASE
    7. CREATE ANY INDEX
    8. ALTER ANY INDEX
    9. DROP ANY INDEX
    10. GRANT ANY PRIVILEGE
    11. CREATE PROCEDURE
    12. CREATE ANY PROCEDURE
    13. ALTER ANY PROCEDURE
    14. DROP ANY PROCEDURE
    15. EXECUTE ANY PROCEDURE
    16. CREATE TABLE
    17. CREATE ANY TABLE
    18. ALTER ANY TABLE
    19. BACKUP ANY TABLE
    20. DROP ANY TABLE
    21. LOCK ANY TABLE
    22. COMMENT ANY TABLE
    23. SELECT ANY TABLE
    24. INSERT ANY TABLE
    25. UPDATE ANY TABLE
    26. DELETE ANY TABLE
    etc.

    Granting and revoking privileges is possible, but adding new privileges or deleting existing privileges is not.
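
    As a sketch, granting and revoking work like this (the user name app_user is illustrative):

    ```sql
    -- Grant existing system privileges to a user; the set of privileges
    -- itself is fixed and cannot be extended or reduced.
    GRANT CREATE TABLE TO app_user;
    GRANT SELECT ANY TABLE TO app_user;

    -- Revoke a previously granted privilege.
    REVOKE SELECT ANY TABLE FROM app_user;
    ```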

  • What are the steps you will take, if you are tasked with securing an SQL Server?
        Again, this is an open-ended question.
        Here are some things you could talk about:
    ---- Preferring NT authentication; using server, database, and application roles to control access to
         the data
    ---- Securing the physical database files using NTFS permissions; using an unguessable SA
         password; restricting physical access to the SQL Server; renaming the Administrator account
         on the SQL Server computer; disabling the Guest account; enabling auditing; using
         multiprotocol encryption; setting up SSL; setting up firewalls
    ---- Isolating SQL Server from the web server, etc.
  • How do you implement one-to-one, one-to-many and many-to-many relationships while designing tables?
    One-to-One relationship can be implemented as a single table and rarely as two tables with primary and foreign key relationships. One-to-Many relationships are implemented by splitting the data into two tables with primary key and foreign key relationships. Many-to-Many relationships are implemented using a junction table with the keys from both the tables forming the composite primary key of the junction table.
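
    A minimal sketch of the many-to-many case with a junction table (table and column names are illustrative):

    ```sql
    CREATE TABLE Student (
        StudentID INT PRIMARY KEY,
        Name      VARCHAR(100)
    );

    CREATE TABLE Course (
        CourseID INT PRIMARY KEY,
        Title    VARCHAR(100)
    );

    -- Junction table: the composite primary key is built from the
    -- foreign keys of both parent tables.
    CREATE TABLE StudentCourse (
        StudentID INT REFERENCES Student(StudentID),
        CourseID  INT REFERENCES Course(CourseID),
        PRIMARY KEY (StudentID, CourseID)
    );
    ```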
  • What is denormalization and when would you go for it?
    As the name indicates, denormalization is the reverse process of normalization. It’s the controlled introduction of redundancy in to the database design. It helps improve the query performance as the number of joins could be reduced.
  • What are different normalization forms?

    1NF: Eliminate Repeating Groups
    Make a separate table for each set of related attributes, and give each table a primary key. Each field contains at most one value from its attribute domain.

    2NF: Eliminate Redundant Data
    If an attribute depends on only part of a multi-valued key, remove it to a separate table.

    3NF: Eliminate Columns Not Dependent On Key
    If attributes do not contribute to a description of the key, remove them to a separate table. All
    attributes must be directly dependent on the primary key

    BCNF: Boyce-Codd Normal Form
    If there are non-trivial dependencies between candidate key attributes, separate them out into distinct
    tables.

    4NF: Isolate Independent Multiple Relationships
    No table may contain two or more 1:n or n:m relationships that are not directly related.

    5NF: Isolate Semantically Related Multiple Relationships
    There may be practical constraints on information that justify separating logically related many-to-many
    relationships.

    ONF: Optimal Normal Form
    A model limited to only simple (elemental) facts, as expressed in Object Role Model notation.

    DKNF: Domain-Key Normal Form
    A model free from all modification anomalies.
    Remember, these normalization guidelines are cumulative. For a database to be in 3NF, it must first
    fulfill all the criteria of a 2NF and 1NF database. 
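
    As a sketch of how these rules drive table design, consider an order-line table keyed on (OrderID, ProductID) that also stores CustomerName. CustomerName depends only on OrderID (a partial dependency), which violates 2NF, so it is split out (names are illustrative):

    ```sql
    -- Order-level attributes move to their own table...
    CREATE TABLE Orders (
        OrderID      INT PRIMARY KEY,
        CustomerName VARCHAR(100)
    );

    -- ...leaving the order-line table fully dependent on its whole key.
    CREATE TABLE OrderLine (
        OrderID   INT REFERENCES Orders(OrderID),
        ProductID INT,
        Qty       INT,
        PRIMARY KEY (OrderID, ProductID)
    );
    ```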
  • What is normalization?
    Database normalization is a data design and organization process applied to data structures based on rules that help build relational databases. In relational database design, the process of organizing data
    to minimize redundancy. Normalization usually involves dividing a database into two or more tables and
    defining relationships between the tables. The objective is to isolate data so that additions, deletions,
    and modifications of a field can be made in just one table and then propagated through the rest of the
    database via the defined relationships.
  • What is a deadlock and what is a live lock? How will you go about resolving deadlocks?
    Deadlock is a situation when two processes, each having a lock on one piece of data, attempt to acquire a lock on the other's piece. Each process would wait indefinitely for the other to release the lock, unless one of the user processes is terminated. SQL Server detects deadlocks and terminates one user's process.

    A livelock is one, where a request for an exclusive lock is repeatedly denied because a series of overlapping shared locks keeps interfering. SQL Server detects the situation after four denials and refuses further shared locks. A livelock also occurs when read transactions monopolize a table or page, forcing a write transaction to wait indefinitely.
  • What is blocking and how would you troubleshoot it?
    Blocking happens when one connection from an application holds a lock and a second connection requires a conflicting lock type. This forces the second connection to wait, blocked on the first.

    Read up the following topics in SQL Server books online: Understanding and avoiding blocking, Coding efficient transactions
  • What is an Index? What is the difference between a clustered and a non-clustered index?
    In its most simple terms, an index is a data structure used to provide quick access to data in a database table or view. In SQL Server, they come in two flavors: clustered and non-clustered. Clustered indexes store the data at the leaf level of the index. This means that whichever field(s) in your table are included in the clustered index will be stored in an orderly fashion in the table. Because of this sorting, you can only have one clustered index per table. Non-clustered indexes contain a row identifier at the leaf level of the index. This row identifier is a pointer to the location of the data on disk. This allows you to have more than one non-clustered index per table.

    A clustered index is a special type of index that reorders the way records in the table are physically
    stored. Therefore table can have only one clustered index. The leaf nodes of a clustered index contain the data pages.
    A nonclustered index is a special type of index in which the logical order of the index does not match the physical stored order of the rows on disk. The leaf node of a nonclustered index does not consist of the data pages. Instead, the leaf nodes contain index rows.

    Or
    The leaf level of a clustered index contains the data pages, not just the index keys. So the answer to the question "What else is in the leaf level of a clustered index besides the key value?" is "Everything else"; that is, all the columns of every row are in the leaf level. Another way to say this is that the data itself is part of the clustered index. A clustered index keeps the data in a table ordered around the key. The data pages in the table are kept in a doubly linked list called a page chain. (Note that pages in a heap are not linked together.) The order of pages in the page chain, and the order of rows on the data pages, is the order of the index key or keys. Deciding which key to cluster on is an important performance consideration. When the index is traversed to the leaf level, the data itself has been retrieved, not simply pointed to.
    In a nonclustered index, the leaf level does not contain all the data. In addition to the key values, each index row in the leaf level (the lowest level of the tree) contains a bookmark that tells SQL Server where to find the data row corresponding to the key in the index. A bookmark can take one of two forms. If the table has a clustered index, the bookmark is the clustered index key for the corresponding data row. If the table is a heap (in other words, it has no clustered index), the bookmark is a row identifier (RID), which is an actual row locator in the form File#:Page#:Slot#.
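
    A sketch of both index types on one table (names are illustrative):

    ```sql
    CREATE TABLE dbo.Orders (
        OrderID   INT         NOT NULL,
        OrderDate DATETIME    NOT NULL,
        Status    VARCHAR(20) NOT NULL
    );

    -- The leaf level of this index IS the table's data pages,
    -- so only one clustered index is allowed per table.
    CREATE CLUSTERED INDEX IX_Orders_OrderID
    ON dbo.Orders (OrderID);

    -- The leaf level here stores the key plus a bookmark back to the row
    -- (the clustered key, since this table has a clustered index).
    CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate);
    ```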
  • How many nonclustered indexes can a table have?
    In SQL Server 2000 and 2005, 249 nonclustered indexes per table; in SQL Server 2008, 999 nonclustered indexes are available.
  • What is a table without a clustered index called?
    o Heap

  • When is the UPDATE STATISTICS command used?
    This command is used after a large amount of data processing has occurred. If a large number of deletions, modifications, or bulk copies into the tables has taken place, the statistics need to be updated to take these changes into account. UPDATE STATISTICS refreshes the statistics on the indexes of these tables accordingly.
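
    For example, after a bulk load (table and index names are illustrative):

    ```sql
    -- Refresh all statistics on the table.
    UPDATE STATISTICS dbo.Sales;

    -- Or target one index, scanning every row for maximum accuracy.
    UPDATE STATISTICS dbo.Sales IX_Sales_Date WITH FULLSCAN;
    ```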

  • What is Collation?
    Collation refers to a set of rules that determine how data is sorted and compared. Character data is
    sorted using rules that define the correct character sequence, with options for specifying case sensitivity, accent marks, kana character types, and character width.

    Different types of collation sensitivity:
    Case sensitivity
    A and a, B and b, etc.
    Accent sensitivity
    a and á, o and ó, etc.
    Kana sensitivity
    When the Japanese kana characters Hiragana and Katakana are treated differently, the collation is called kana
    sensitive.
    Width sensitivity
    When a single-byte character (half-width) and the same character represented as a double-byte
    character (full-width) are treated differently, the collation is width sensitive.
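
    A sketch of how collation is applied per column and overridden per expression (table name is illustrative; the collation names are standard SQL Server collations):

    ```sql
    CREATE TABLE Customer (
        -- CI = case-insensitive, AS = accent-sensitive
        Name VARCHAR(100) COLLATE SQL_Latin1_General_CP1_CI_AS
    );

    -- Force a case-sensitive comparison for a single query.
    SELECT Name
    FROM Customer
    WHERE Name COLLATE SQL_Latin1_General_CP1_CS_AS = 'Smith';
    ```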
  • What is RAID and what are different types of RAID configurations?
    RAID stands for Redundant Array of Inexpensive Disks, used to provide fault tolerance to database servers. There are six RAID levels, 0 through 5, offering different levels of performance and fault tolerance.

     What is the difference between a local and a global temporary table?
    A local temporary table exists only for the duration of a connection or, if defined inside a compound statement, for the duration of the compound statement. A global temporary table remains in the database permanently, but the rows exist only within a given connection. When the connection is closed, the data in the global temporary table disappears. However, the table definition remains with the database, for access when the database is opened next time.
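
    In SQL Server specifically, local temporary tables are prefixed with # and global ones with ## (a global temporary table is dropped once the last session referencing it closes):

    ```sql
    -- Visible only to the creating session; dropped when the session ends.
    CREATE TABLE #LocalTemp (ID INT);

    -- Visible to all sessions while any session still references it.
    CREATE TABLE ##GlobalTemp (ID INT);

    INSERT INTO #LocalTemp VALUES (1);
    SELECT * FROM #LocalTemp;
    ```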

    What is a dirty page and a dirty read?
    Dirty pages: data that has been modified and stored in the buffer cache but has not yet been written to the hard disk. (The transaction log records the change.)
    Dirty reads: Transaction 2 reads data that was modified by Transaction 1 before Transaction 1 has committed. If Transaction 1 is then rolled back, Transaction 2 has read data that never existed in a committed state. This is a dirty read.

    What is lock escalation?
    Lock escalation is the process of converting many low-level locks (like row locks and page locks) into higher-level locks (like table locks). Every lock is a memory structure, so too many locks would mean more memory being occupied by locks. To prevent this from happening, SQL Server escalates the many fine-grain locks to fewer coarse-grain locks. The lock escalation threshold was definable in SQL Server 6.5, but from SQL Server 7.0 onwards it is dynamically managed by SQL Server.
    Or

    Lock Escalation
    SQL Server automatically escalates row, key, or page locks to coarser table locks as appropriate. This escalation protects system resources (it prevents the system from using too much memory for keeping track of locks) and increases efficiency. For example, after a query acquires many row locks, the lock level can be escalated to a table lock because it probably makes more sense to acquire and hold a single table lock than to hold many row locks. By escalating the lock, SQL Server acquires a single table lock, and the many row locks are released. This escalation to a table lock reduces locking overhead and keeps the system from running out of locks. Because a finite amount of memory is available for the lock structures, escalation is sometimes necessary to make sure the memory for locks stays within reasonable limits.
    Lock escalation occurs in the following situations:
    • The number of locks held by a single statement on one object (index or heap) exceeds a threshold. Currently that threshold is 5000 locks, but it might change in future service packs. The lock escalation will not occur if the locks are spread over multiple objects in the same statement; for example, 2500 locks in one index and 2500 in another.
    • Memory taken by lock resources exceeds 40 percent of the non-AWE (32-bit) or regular (64-bit) enabled memory and the locks configuration option is set to 0. (In this case, the lock memory is allocated dynamically as needed, so the 40 percent value is not a constant.) If the locks option is set to a non-zero value, memory reserved for locks is statically allocated when SQL Server starts. Escalation will occur when SQL Server is using more than 40 percent of the reserved lock memory for lock resources.
  • What is “FillFactor” concept in indexes?
    FILLFACTOR is probably the most commonly used of these options. FILLFACTOR lets you reserve some space on each leaf page of an index. In a clustered index, because the leaf level contains the data, you can use FILLFACTOR to control how much space to leave in the table itself. By reserving free space, you can later avoid the need to split pages to make room for a new entry. An important fact about FILLFACTOR is that the value is not maintained; it indicates only how much space is reserved with the existing data at the time the index is built. If you need to, you can use the ALTER INDEX command to rebuild the index and reestablish the original FILLFACTOR specified.
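
    For example, to reserve 20 percent free space on each leaf page (index and table names are illustrative):

    ```sql
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    WITH (FILLFACTOR = 80);

    -- FILLFACTOR is not maintained as rows are inserted;
    -- rebuild the index to re-establish it.
    ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
    REBUILD WITH (FILLFACTOR = 80);
    ```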
  • What are “Dirty reads”?
    This behavior occurs when a process reads uncommitted data. If one process has changed data but not yet committed the change, another process reading the data will read it in an inconsistent state. For example, say that ClerkA has updated the old value of 25 widgets to 75, but before he commits, a salesperson looks at the current value of 75 and commits to sending 60 widgets to a customer the following day. If ClerkA then realizes that the widgets are defective and sends them back to the manufacturer, the salesperson will have done a dirty read and taken action based on uncommitted data.
    What are “Unrepeatable reads”?
    This behavior is also called inconsistent analysis. A read is non-repeatable if a process might get different values when reading the same resource in two separate reads within the same transaction. This can happen when another process changes the data in between the reads that the first process is doing. In the receiving room example, suppose that a manager comes in to do a spot check of the current inventory. She walks up to each clerk, asking the total number of widgets received that day, and adding the numbers on her calculator. When she's done, she wants to double-check the result, so she goes back to the first clerk. However, if ClerkA received more widgets between the manager's first and second inquiries, the total will be different each time and the reads are non-repeatable.

    What are “Phantom rows”?
    This behavior occurs when membership in a set changes. It can happen only when a query with a predicate, such as WHERE count_of_widgets < 10, is involved. A phantom occurs if two SELECT operations using the same predicate in the same transaction return a different number of rows. For example, let's say that our manager is still doing spot checks of inventory. This time, she goes around the receiving room and notes which clerks have fewer than 10 widgets. After she completes the list, she goes back around to offer advice to everyone with a low total. However, if during her first walk-through a clerk with fewer than 10 widgets returned from a break but was not spotted by the manager, that clerk will not be on the manager's list even though he meets the criteria in the predicate. This additional clerk (or row) is considered to be a phantom.
    What are “Lost Updates”?
    This behavior occurs when two processes read the same data and both manipulate the data, changing its value, and then both try to update the original data to the new value. The second process might completely overwrite the first update. For example, suppose that two clerks in a receiving room are receiving parts and adding the new shipments to the inventory database. ClerkA and ClerkB both receive shipments of widgets. They both check the current inventory and see that 25 widgets are currently in stock. ClerkA's shipment has 50 widgets, so he adds 50 to 25 and updates the current value to 75. ClerkB's shipment has 20 widgets, so she adds 20 to the value of 25 that she originally read and updates the current value to 45, completely overriding the 50 new widgets that ClerkA processed. ClerkA's update is lost
    What are different levels of granularity of locking resources?
    SQL Server can lock user data resources (not system resources, which are protected with latches) at the table, page, or row level. It also locks index keys and ranges of index keys. Keep in mind that if the table has a clustered index, the data rows are at the leaf level of the clustered index and they are locked with key locks instead of row locks.
    What are different types of Locks in SQL Server?
    Lock Modes SQL Server uses several locking modes, including shared locks, exclusive locks, update locks, and intent locks, plus variations on these. It is the mode of the lock that determines whether a concurrently requested lock is compatible. We'll see a chart illustrating the lock compatibility matrix at the end of this section. Shared Locks Shared locks are acquired automatically by SQL Server when data is read.
     Shared locks
    can be held on a table, a page, an index key, or an individual row. Many processes can hold shared locks on the same data, but no process can acquire an exclusive lock on data that has a shared lock on it (unless the process requesting the exclusive lock is the same process as the one holding the shared lock). Normally, shared locks are released as soon as the data has been read, but you can change this by using query hints or a different transaction isolation level.
    Exclusive Locks
    SQL Server automatically acquires exclusive locks on data when the data is modified by an insert, update, or delete operation. Only one process at a time can hold an exclusive lock on a particular data resource; in fact, as you'll see when we discuss lock compatibility, no locks of any kind can be acquired by a process if another process has the requested data resource exclusively locked. Exclusive locks are held until the end of the transaction. This means the changed data is normally not available to any other process until the current transaction commits or rolls back. Other processes can decide to read exclusively locked data by using query hints.
    Update Locks
     Update locks are really not a separate kind of lock; they are a hybrid of shared and exclusive locks. They are acquired when SQL Server executes a data modification operation but first needs to search the table to find the resource that will be modified. Using query hints, a process can specifically request update locks, and in that case the update locks prevent the conversion deadlock situation
    Intent Locks
    Intent locks are not really a separate mode of locking; they are a qualifier to the modes previously discussed. In other words, you can have intent shared locks, intent exclusive locks, and even intent update locks. Because SQL Server can acquire locks at different levels of granularity, a mechanism is needed to indicate that a component of a resource is already locked. For example, if one process tries to lock a table, SQL Server needs a way to determine whether a row (or a page) of that table is already locked. Intent locks serve this purpose. We'll discuss them in more detail when we look at lock granularity.
    What are different Isolation levels in SQL Server?
    Uncommitted Read
    In Uncommitted Read isolation, all the behaviors described previously except lost updates are possible. Your queries can read uncommitted data, and both non-repeatable reads and phantoms are possible. Uncommitted Read is implemented by allowing your read operations to not take any locks, and because SQL Server isn't trying to acquire locks, it won't be blocked by conflicting locks acquired by other processes. Your process will be able to read data that another process has modified but not yet committed. Although this scenario isn't usually the ideal option, with Uncommitted Read you can't get stuck waiting for a lock, and your read operations don't acquire any locks that might affect other processes that are reading or writing data.

    Read Committed
    SQL Server 2005 supports two varieties of Read Committed isolation, which is the default isolation level. This isolation level can be either optimistic or pessimistic, depending on the database setting READ_COMMITTED_SNAPSHOT. Because the default for the database option is off, the default for this isolation level is to use pessimistic concurrency control. Unless indicated otherwise, references to the Read Committed isolation level cover both variations: the pessimistic implementation is called Read Committed (locking) and the optimistic implementation Read Committed (snapshot).

    Repeatable Read
    Repeatable Read is a pessimistic isolation level. It adds to the properties of Read Committed by ensuring that if a transaction revisits data or a query is reissued, the data will not have changed. In other words, issuing the same query twice within a transaction will not pick up any changes to data values made by another user's transaction. However, the Repeatable Read isolation level does allow phantom rows to appear.

    Serializable
    Serializable is also a pessimistic isolation level. The Serializable isolation level adds to the properties of Repeatable Read by ensuring that if a query is reissued, rows will not have been added in the interim. In other words, phantoms will not appear if the same query is issued twice within a transaction. Serializable is therefore the strongest of the pessimistic isolation levels because it prevents all the possible undesirable behaviors discussed earlier; that is, it does not allow uncommitted reads, non-repeatable reads, or phantoms, and it also guarantees that your transactions can be run serially.

    Snapshot
    Snapshot isolation is an optimistic isolation level. Like Read Committed (snapshot), it allows processes to read older versions of committed data if the current version is locked. The difference between Snapshot and Read Committed (snapshot) has to do with how old the older versions have to be. (We'll see the details in the section on row versioning.) Although the behaviors prevented by Snapshot isolation are the same as those prevented by Serializable, Snapshot is not truly a serializable isolation level. With Snapshot isolation, it is possible to have two transactions executing simultaneously that give us a result that is not possible in any serial execution. Table 8-1 shows an example of two simultaneous transactions. If they run in parallel, they will end up switching the price of two books in the titles table in the pubs database. However, there is no serial execution that would end up switching the values, whether we run Transaction 1 and then Transaction 2, or run Transaction 2 and then Transaction 1. Either serial order ends up with the two books having the same price.
    If you are using COM+, what isolation level is set by default?
    COM+ transactions run at the Serializable isolation level by default.
    What are “Lock” hints?
    Transact-SQL syntax allows you to specify locking hints for individual tables when they are referenced in SELECT, INSERT, UPDATE, and DELETE statements. The hints tell SQL Server the type of locking or row versioning to use for a particular table in a particular query. Because these hints are specified in a FROM clause, they are called table-level hints. Books Online lists other table-level hints besides locking hints, but the vast majority of them affect locking behavior. They should be used only when you absolutely need finer control over locking at the object level than what is provided by your session's isolation level. The SQL Server locking hints can override the current transaction isolation level for the session. In this section, I will mention only some of the locking hints that you might need in order to obtain the desired concurrency behavior.
     What is a “Deadlock” ?
    A deadlock occurs when two processes are waiting for a resource and neither process can advance because the other process prevents it from getting the resource. A true deadlock is a catch-22 in which, without intervention, neither process can ever make progress. When a deadlock occurs, SQL Server intervenes automatically. SQL Server 2005 produces more detailed deadlock diagnostics than previous versions. In addition, deadlocks can be detected for more types of resources. In this section, I'll refer mainly to deadlocks acquired due to conflicting locks, although deadlocks can also be detected on worker threads, memory, parallel query resources, and multiple active result sets (MARS) resources.
    What are the steps you can take to avoid “Deadlocks” ?
    You should always try to have a standard protocol for the order in which processes access tables. If you know that the processes might need to update the row after reading it, they should initially request an update lock, not a shared lock. If both processes request an update lock rather than a shared lock, the process that is granted an update lock is assured that the lock can later be promoted to an exclusive lock. The other process requesting an update lock has to wait. The use of an update lock serializes the requests for an exclusive lock. Other processes needing only to read the data can still get their shared locks and read. Because the holder of the update lock is guaranteed an exclusive lock, the deadlock is avoided. In many systems, deadlocks cannot be completely avoided, but if the application handles the deadlock appropriately, the impact on any users involved, and on the rest of the system, should be minimal. (Appropriate handling implies that when an error 1205 occurs, the application resubmits the batch, which will most likely succeed on a second try. Once one process is killed, its transaction is aborted, and its locks are rolled back, the other process involved in the deadlock can finish its work and release its locks, so the environment will not be conducive to another deadlock.) Although you might not be able to completely avoid deadlocks, you can minimize their occurrence. For example, you should write your applications so that your processes hold locks for a minimal amount of time; in that way, other processes won't have to wait too long for locks to be released. Although you don't usually invoke locking directly, you can influence locking by keeping transactions as short as possible. For example, don't ask for user input in the middle of a transaction. Instead, get the input first and then quickly perform the transaction
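
    The read-then-update pattern described above can be sketched with an UPDLOCK hint (table and column names are illustrative):

    ```sql
    -- Request an update lock up front so two sessions cannot both hold
    -- shared locks on the row and then deadlock while converting to
    -- exclusive locks.
    BEGIN TRANSACTION;

    SELECT Quantity
    FROM dbo.Inventory WITH (UPDLOCK)
    WHERE ProductID = 42;

    UPDATE dbo.Inventory
    SET Quantity = Quantity + 50
    WHERE ProductID = 42;

    COMMIT TRANSACTION;
    ```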
  • What is the difference between Truncate and Delete ?

    TRUNCATE

    • TRUNCATE generates fewer transaction log entries because TRUNCATE TABLE removes the data by deallocating the data pages used to store the table data and records only the page deallocations in the transaction log; hence TRUNCATE is fast.
    • TRUNCATE applies fewer table locks.
    • TRUNCATE is a DDL statement.
    • TRUNCATE releases the table's space back to the system.
    • TRUNCATE cannot have a WHERE condition.
    • TRUNCATE does not fire triggers.
    • TRUNCATE resets the identity to its seed value (if the table has one).
    • TRUNCATE cannot be used against tables involved in transactional replication or merge replication.
    • TRUNCATE cannot be used against a table used in an indexed view.
    • TRUNCATE cannot be used against tables that are referenced by a FOREIGN KEY constraint.
    • TRUNCATE commands are not tracked by DDL triggers.

    Note: TRUNCATE can be rolled back if it is executed inside a transaction. Many sources incorrectly state that it cannot.

    DELETE

    • DELETE FROM table logs each record in the transaction log; hence DELETE is slow.
    • DELETE applies more locks to the table.
    • DELETE is a DML command.
    • DELETE removes the records but does not release the space back to the system.
    • DELETE can have WHERE conditions.
    • DELETE fires triggers.
    • DELETE does not reset the identity.
    • DELETE can be used against tables involved in transactional replication or merge replication.
    • DELETE can be used on tables referenced by a FOREIGN KEY and on tables involved in an indexed view.
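
    A sketch demonstrating that TRUNCATE is transactional in SQL Server (table name is illustrative):

    ```sql
    CREATE TABLE dbo.Demo (ID INT IDENTITY(1,1), Val VARCHAR(10));
    INSERT INTO dbo.Demo (Val) VALUES ('a'), ('b');

    BEGIN TRANSACTION;
    TRUNCATE TABLE dbo.Demo;
    ROLLBACK TRANSACTION;

    -- The rollback restores the rows removed by TRUNCATE.
    SELECT COUNT(*) FROM dbo.Demo;
    ```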




    What is isolation level in SQL server?

    The five levels of isolation, plus one new database option affecting isolation, are as follows:

    Read uncommitted: Lowest level of isolation. At this level, transactions are isolated just enough to ensure that physically corrupted data is not read. Dirty reads are allowed because no shared locks are held for data reads, and exclusive locks on data are ignored. See the section below for a description of dirty reads. (Corresponds to the NOLOCK and READUNCOMMITTED locking hints.)

    Read committed: Default level for SQL Server. At this level, reads are allowed only on committed data, so a read is blocked while the data is being modified. Shared locks are held for reads, and exclusive locks are honored. Thus, dirty reads are not allowed. There is a new database option that determines the behavior of read committed, called read committed snapshot. By default the read committed snapshot option is off, such that the read committed isolation level behaves exactly as described here. (See next bullet.)

    Read committed snapshot (database option): New for SQL Server 2005, this is actually a database option, not a stand-alone isolation level. It determines the specific behavior of the read committed isolation level. When this option is on, row versioning is used to take a snapshot of data. It provides data access with reduced blocking in a manner similar to read uncommitted isolation, but without allowing dirty reads. See the section “Read Committed Snapshot” later in this chapter.

    Repeatable read: Level at which repeated reads of the same row or rows within a transaction achieve the same results. Until a repeatable read transaction is completed, no other transactions can modify the data because all shared locks are held for the duration of the transaction. (Corresponds to the REPEATABLEREAD locking hint.)

    Snapshot isolation: New for SQL Server 2005. This isolation level uses row versioning to provide read consistency for an entire transaction while avoiding blocking and preventing phantom reads. There is a corresponding database option that must also be set to use this isolation level. See the section below titled “Snapshot Isolation” for more information.

    Serializable: Highest level of isolation; transactions are completely isolated from each other. At this level, the results achieved by running concurrent transactions on a database are the same as if the transactions had been run serially (one at a time in order), because it locks entire ranges of keys.
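
    Isolation levels are set per session before the transaction begins (table name is illustrative):

    ```sql
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

    BEGIN TRANSACTION;
    SELECT Quantity FROM dbo.Inventory WHERE ProductID = 42;
    -- Re-reading the same row inside this transaction returns the same
    -- value; other sessions cannot modify it until we commit.
    COMMIT TRANSACTION;
    ```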




    What lock hints are available?

    The following list describes the available table-level locking hints:
    HOLDLOCK Holds shared locks until the completion of a transaction rather than
    releasing them as soon as they are no longer needed. Equivalent to using the SERIALIZABLE
    locking hint. Cannot be used with a SELECT query that includes the FOR
    BROWSE option.
    NOLOCK Applies only to the SELECT statement. Does not obtain shared locks
    for reading data and does not honor exclusive locks, such that a SELECT statement
    is allowed to read data that is exclusively locked by another transaction, and will
    not block other locks requested on the same data. Allows for reads of uncommitted
    data (known as dirty reads). Equivalent to READUNCOMMITTED.
    PAGLOCK Acquires page locks where either a single table lock or individual row
    or key locks would normally be used.
    READCOMMITTED The default isolation level for SQL Server. Applies to read
    operations, such that shared locks are acquired as data is read and released
    when the read operation is complete. This behavior changes if the option
    READ_COMMITTED_SNAPSHOT is ON. (This option is new for SQL Server
    2005.) In this case, locks are not acquired and row versioning is used. (See more on
    this in the “Isolation Levels” section of this chapter.)
    READCOMMITTEDLOCK New for SQL Server 2005. Equivalent to READCOMMITTED,
    but will apply whether the setting for READ_COMMITTED_SNAPSHOT
    is ON or OFF, allowing you to override that setting.
    READPAST Applies to read operations; skips reading rows that are currently
    locked by other transactions so that blocking does not occur. The results are
    returned without these locked rows as part of the result set. Can be used only with
    transactions running at the READ COMMITTED or REPEATABLE READ isolation
    levels. Applies to SELECT, DELETE, and UPDATE statements but is not allowed in
    the INTO clause of INSERT statements.
    READUNCOMMITTED Equivalent to NOLOCK.
    REPEATABLEREAD Performs a scan with the same locking behavior as that of a
    transaction using the repeatable read isolation level.
    ROWLOCK Acquires row locks when page or table locks are normally taken.
    SERIALIZABLE Equivalent to HOLDLOCK. Performs a scan with the same locking
    behavior as that of a transaction using the SERIALIZABLE isolation level.
    TABLOCK Uses a shared lock on a table, rather than page or row locks, that is
    held until the end of the statement.
    TABLOCKX Uses an exclusive lock on a table. This hint prevents other transactions
    from accessing the table.
    UPDLOCK Uses update locks that are held until the end of the transaction.
    XLOCK Acquires exclusive locks that are held until the end of the transaction
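Table-level hints are specified in a WITH clause on the table reference. A short sketch of three of the hints listed above (dbo.Orders is a hypothetical table):

```sql
-- Dirty read: do not take or honor shared locks (same as READUNCOMMITTED)
SELECT OrderID, Status FROM dbo.Orders WITH (NOLOCK);

-- Skip rows locked by other transactions instead of blocking on them
SELECT OrderID FROM dbo.Orders WITH (READPAST);

-- Take an update lock up front to avoid deadlocks in a read-then-update pattern
BEGIN TRANSACTION;
    SELECT Status FROM dbo.Orders WITH (UPDLOCK) WHERE OrderID = 1;
    UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderID = 1;
COMMIT TRANSACTION;
```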



    What are the concurrency models?

    -----Pessimistic Concurrency
    With pessimistic concurrency, the default behavior is for SQL Server to acquire locks to block access to data that another process is using. Pessimistic concurrency assumes that enough data modification operations are in the system that any given read operation will likely be affected by a data modification made by another user. In other words, the system behaves pessimistically and assumes that a conflict will occur. Pessimistic concurrency avoids conflicts by acquiring locks on data that is being read, so no other processes can modify that data. It also acquires locks on data being modified, so no other processes can access that data for either reading or modifying. In other words, readers block writers and writers block readers in a pessimistic concurrency environment.
    ------Optimistic Concurrency
    Optimistic concurrency assumes that there are sufficiently few conflicting data modification operations in the system that any single transaction is unlikely to modify data that another transaction is modifying. The default behavior of optimistic concurrency is to use row versioning to allow data readers to see the state of the data before the modification occurs. Older versions of data rows are saved, so a process reading data can see the data as it was when the process started reading and not be affected by any changes being made to that data. A process that modifies the data is unaffected by processes reading the data because the reader is accessing a saved version of the data rows. In other words, readers do not block writers and writers do not block readers. Writers can and will block writers, however, and this is what causes conflicts. SQL Server generates an error message when a conflict occurs, but it is up to the application to respond to that error.
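Optimistic (row-versioning) behavior is enabled through database options. A sketch, assuming a hypothetical database named SalesDB:

```sql
-- Let READ COMMITTED readers see a versioned snapshot instead of blocking
ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON;

-- Allow transactions to request full snapshot isolation
ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON;
```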




    What are the types of fragmentation in SQL Server 2005?

    Internal fragmentation means that the index is taking up more space than it needs to. Scanning the entire table or index involves more read operations than if no free space were available on your pages. However, internal fragmentation is sometimes desirable. In fact, you can request internal fragmentation by specifying a low fillfactor value when you create an index. Having room on a page means that there is space to insert more rows without having to split a page. Splitting a page can be an expensive operation if it occurs frequently, and it can lead to external fragmentation because when a page is split, a new page must be linked into the index's page chain, and usually the new page is not contiguous to the page being split.

    External fragmentation is truly bad only when SQL Server is doing an ordered scan of all or part of a table or an index. If you're seeking individual rows through an index, it doesn't matter where those rows are physically located; SQL Server can find them easily. If SQL Server is doing an unordered scan, it can use the IAM pages to determine which extents need to be fetched, and the IAM pages list the extents in disk order, so the fetching can be very efficient. Only if the pages need to be fetched in logical order, according to their index key values, do you need to follow the page chain. If the pages are heavily fragmented, this operation is more expensive than if there were no fragmentation.
    In SQL 2000, DBCC SHOWCONTIG provides information about four types of fragmentation, which include internal fragmentation and three types of external fragmentation. The values reported for two of the types of fragmentation are actually meaningless if a table or index spans multiple files, but the output is returned anyway. For SQL Server 2005, all values reported are valid across multiple files. However, not every type of fragmentation is relevant to every structure. Here are the columns in the output from sys.dm_db_index_physical_stats that report on fragmentation:
    avg_page_space_used_in_percent This float value represents the internal fragmentation, measured as the average across all pages for one particular index level for one type of allocation unit in one partition. This is the only type of fragmentation that is reported for the LOB pages and the row-overflow pages.
    avg_fragmentation_in_percent This float value represents the external fragmentation, which is logical fragmentation for indexes or extent fragmentation for heaps in the in-row allocation unit. A value of 0 is returned for LOB and row-overflow allocation units.
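A minimal query sketch against this DMV (the JOIN to sys.indexes just recovers index names; 'SAMPLED' trades accuracy for speed):

```sql
-- Report fragmentation for all indexes in the current database
SELECT OBJECT_NAME(ips.object_id)          AS table_name,
       i.name                              AS index_name,
       ips.index_type_desc,
       ips.avg_fragmentation_in_percent,   -- external (logical) fragmentation
       ips.avg_page_space_used_in_percent  -- internal fragmentation
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;
```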

    What’s the difference between a primary key and a unique key?

    Both a primary key and a unique key enforce uniqueness of the column on which they are defined. But by default a primary key creates a clustered index on the column, whereas a unique key creates a nonclustered index. Another major difference is that a primary key doesn't allow NULLs, but a unique key allows one NULL.
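Both constraints in one hypothetical table:

```sql
CREATE TABLE dbo.Employees
(
    EmployeeID INT          NOT NULL,
    Email      VARCHAR(100) NULL,
    CONSTRAINT PK_Employees PRIMARY KEY (EmployeeID),  -- clustered by default, no NULLs
    CONSTRAINT UQ_Employees_Email UNIQUE (Email)       -- nonclustered by default, one NULL allowed
);
```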

    What is a primary key? What is a foreign key?

    A primary key is the field(s) in a table that uniquely identifies each row in the table; the values in the primary key are always unique. A foreign key is a constraint that establishes a relationship between two tables. This relationship typically involves the primary key field(s) from one table and a corresponding set of field(s) in another table (although it could be the same table). The corresponding field(s) form the foreign key.
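A sketch of the relationship with hypothetical parent and child tables:

```sql
CREATE TABLE dbo.Customers
(
    CustomerID INT NOT NULL PRIMARY KEY
);

CREATE TABLE dbo.Orders
(
    OrderID    INT NOT NULL PRIMARY KEY,
    CustomerID INT NOT NULL,
    -- Every order must reference an existing customer
    CONSTRAINT FK_Orders_Customers
        FOREIGN KEY (CustomerID) REFERENCES dbo.Customers (CustomerID)
);
```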

    What does NULL mean?
    The value NULL is a very tricky subject in the database world, so don't be surprised if several applicants trip up on this question.

    The value NULL means UNKNOWN; it does not mean '' (empty string). Assuming ANSI_NULLS are on in your SQL Server database, which they are by default, any comparison to the value NULL will yield the value NULL. You cannot compare any value with an UNKNOWN value and logically expect to get an answer. You must use the IS NULL operator instead
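The classic trap, using a hypothetical nullable column:

```sql
-- Returns no rows: NULL = NULL evaluates to UNKNOWN under ANSI_NULLS
SELECT * FROM dbo.Employees WHERE MiddleName = NULL;

-- Correct: use IS NULL to match rows with no middle name
SELECT * FROM dbo.Employees WHERE MiddleName IS NULL;
```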

    What's the difference between a server login and a database user?
    A server login connects an account to the server. A database user is the link from the server login to a defined database.
    You need to create a server login using CREATE LOGIN before you can then link it to a database using a database user with CREATE USER. The server login logs into the server and grants access to any server wide permissions. The server login is then mapped to a database user for access to database objects.
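A sketch of the login-to-user chain (the login name, password, and database are illustrative):

```sql
-- Create a server-level login (SQL authentication)
CREATE LOGIN AppLogin WITH PASSWORD = 'Str0ng!Passw0rd';

-- Map the login to a user inside a specific database
USE SalesDB;
CREATE USER AppUser FOR LOGIN AppLogin;

-- Database-level permissions are granted to the user, not the login
GRANT SELECT ON dbo.Orders TO AppUser;
```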


    What does the term ACID refer to?
    ACID is an acronym describing transaction support in a database. It stands for the following four terms:
     * Atomicity - Every operation within a transaction is treated as a singular operation; either all of its data modifications are performed or none of them is performed.
    * Consistency - Once the transaction is completed, the system must be left in a consistent state.
    * Isolation - It means that the operations within a transaction must be suitably isolated from other transactions. In other words, no other transactions should see data in the intermediate state, within the transaction, until it is finalized. This is done by using locks.
    * Durability - Once a transaction is completed, all changes must be persisted as requested. The modifications should persist in total even in the event of a system failure
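Atomicity in practice, with a hypothetical transfer between two accounts:

```sql
-- All-or-nothing transfer: both updates commit together or neither does
BEGIN TRY
    BEGIN TRANSACTION;
        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
        UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2;
    COMMIT TRANSACTION;   -- durability: changes persist once committed
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION; -- atomicity: undo every modification on failure
END CATCH;
```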
  • What is the size of a single read operation from the buffer pool?
    A: 8 KB

    What is the Resource database and in what version of SQL Server was it introduced?
    a. The Resource database is responsible for physically storing all of the SQL Server 2005 system objects. This database has been created to improve the upgrade and rollback of SQL Server system objects with the ability to overwrite only this database.
    b. The Resource database was introduced in SQL Server 2005. i. Additional information: SQL Server 2005 Resource Database 

    What are the SQL Server system databases and can you outline the general functionality of each database?
    a. Master - Database responsible for SQL Server instance related data. You can also think of this database corresponding to the Windows SQL Server service account.
    b. Resource - Database responsible for SQL Server system objects. This database was introduced in SQL Server 2005 and is intended to ease the upgrade and rollback of SQL Server system objects.
     c. Model - Template database for the creation of new user defined databases.
    d. MSDB - Database responsible for SQL Server Agent related data such as Jobs, Alerts, Operators, etc.
    e. TempDB - Temporary database to store temporary tables (#temptable or ##temptable), table variables, cursors, work tables, row versioning data, indexes created or rebuilt with SORT_IN_TEMPDB, etc. Each time the SQL Server instance is restarted, all objects in this database are destroyed, so permanent objects cannot be created in this database.
    f. Distribution - Database responsible for managing replicated data. This database could reside on the publisher or subscriber.

    Can you create objects in the Master, Model and MSDB databases?
           a. Yes.


    Is it a good idea to create objects in the system databases?
    a. In general, objects should not be created in the system databases. It is a best practice to create a separate database for user-defined objects that would be used instance-wide, or to create the objects in each of the needed databases. From a DBA perspective, it is a common practice to create a dedicated DBA database in order to perform custom administrative tasks.



    Does Reporting Services create system databases during the installation process?
       a. Yes. The databases are: i. ReportServer - Stores the report definitions and related data.
                                                           ii. ReportServerTempDB - Stores temporary data such as cached reports and session information.


    Can you move system databases?
     a. Yes - Here are the key commands:
                         i. Master - Change the startup parameters (-d and -l) to point to the new file locations, then stop and restart the SQL Server instance.
                        ii. Model - ALTER DATABASE ... MODIFY FILE, then restart the instance.
                       iii. MSDB - ALTER DATABASE ... MODIFY FILE, then restart the instance.
                       iv. TempDB - ALTER DATABASE ... MODIFY FILE; the files are created at the new location on the next restart.
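For example, moving tempdb (the file paths are illustrative; the logical names tempdev and templog are the defaults):

```sql
-- Takes effect after the next restart of the instance
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf');
ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = 'E:\SQLData\templog.ldf');
```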


    Do users need explicit rights to the user defined databases?
           a. No - Rights are granted through the Public Role and Guest user.

    What are the typical objects that are created in the TempDB database?
              a. Temporary tables (#temptable or ##temptable)
              b. Table variables
              c. Cursors
              d. Work tables
              e. Row versioning
              f. Create or rebuild indexes sorted in TempDB
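The first two object types above, as a quick sketch:

```sql
-- Local temp table: visible only to this session, dropped when it closes
CREATE TABLE #OrderTotals (CustomerID INT, Total MONEY);

-- Table variable: scoped to the current batch
DECLARE @Ids TABLE (ID INT);
```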

    Are all DTS packages stored in MSDB?
               a. DTS packages do not need to be stored in the MSDB database, but that was the trend in SQL Server 2000.


    Are all SSIS Packages stored in MSDB?
         a. SSIS packages do not need to be stored in the MSDB database. Storage on the file system or in the MSDB database is more a matter of preference than an industry trend thus far.

    Error severity 13 indicates what?
            This severity level indicates a transaction deadlock error.

    Where does Profiler store its temporary data in SQL Server 2005?
             In the directory specified by the TEMP system variable.




    What is the Service Broker identifier?
       A GUID that identifies the database on which Service Broker is running.
         Each database has a Service Broker identifier, a GUID in the service_broker_guid column that identifies the database on which Service Broker is running. It ensures that messages are delivered to the right database.
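The identifier can be inspected in the sys.databases catalog view:

```sql
-- View each database's Service Broker identifier and broker status
SELECT name, service_broker_guid, is_broker_enabled
FROM sys.databases;
```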
     
    What is the cost threshold for parallelism in SQL Server 2005?
           This is the threshold at which SQL Server decides between a serial and a parallel plan: when the estimated cost of the serial plan for a query exceeds this value, SQL Server elects to use a parallel plan instead.
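The setting is a server-level advanced option; a sketch of viewing and changing it (the value 10 is illustrative; the default is 5):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Raise the threshold so only more expensive queries go parallel
EXEC sp_configure 'cost threshold for parallelism', 10;
RECONFIGURE;
```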


     
