Chapter 18 - SQL Server Administration


  • Understanding and performing checkpoints - Checkpoints are used by SQL Server to maintain database integrity. You'll see how they work and how they'll impact your implementation of SQL Server.

  • Using the Database Consistency Checker - The Database Consistency Checker provides valuable information about the state of your database and tables.

  • Backing up SQL Server - Any production system must be backed up in order to remain reliable. You'll see how you can use the different tools provided by SQL Server to back up your system.

  • Transferring information to and from SQL Server - Whether you're bringing up a system based on information from a previously existing system or making changes to other existing data sources, you'll need to know how to transfer information to and from SQL Server.

In this chapter you'll be presented with several issues to keep in mind as you administer your SQL Server system. There are many different concepts, routines, and ideas that you need to know for the day-in and day-out management of SQL Server.

The everyday operation of SQL Server will require some of your time if you want to manage the database engine to its fullest potential. By staying on top of the system, you'll be able to ensure optimum response times for users, and you'll be able to prevent some common problems with the system from happening altogether.

Some of the concepts in this chapter refer to strictly administrative tasks, and some are related more to performance management and system tuning. Taken as a whole, these topics, along with a correct definition of your system, will help ensure a successful SQL Server installation.

Understanding and Performing Checkpoints

Checkpoints are a function incorporated by SQL Server to commit changes to a database or configuration option at a known, good point in time. As you work to configure the server or make modifications to the server that require a restart of the server, you may want to initiate the checkpoint process manually.

When a checkpoint is issued, whether by a manual intervention process or by naturally occurring server-based processes, all dirty pages are saved to disk. A dirty page is one containing updates that have not yet been applied to the disk image of the database. When checkpoints occur without intervention on your part, they normally happen approximately every 60 seconds. The actual time frame in which they are called will depend on server loading, recovery options you've set, and general performance tuning that SQL Server looks after—but it should be very close to 60 seconds.
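
One of the recovery options referred to here is the recovery interval configuration setting, which influences how frequently automatic checkpoints occur. The following is a minimal sketch of adjusting it from ISQL; the value 5 is an arbitrary example and is expressed in minutes:

sp_configure 'recovery interval', 5
go
reconfigure
go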

You may have noticed that if your SQL Server goes down unexpectedly, it can take longer to start up the next time. This is because SQL Server must roll transactions back and forward relative to the last checkpoint. When SQL Server does this, it is restoring the database to the last known good state, which is the one recorded when the last checkpoint was issued and successfully carried out.


You can also manually shut down the server by issuing the SHUTDOWN command. By issuing CHECKPOINT followed by a SHUTDOWN, you can ensure that all transaction information is saved appropriately.

If you know that you're shutting down the server, you can avoid the longer startup times by manually issuing the CHECKPOINT command. This accomplishes the same thing as allowing the server to issue the command automatically. All information will be saved to disk, and the system will be able to simply start up and "turn on" the databases for access by your client applications. This is helpful if you're shutting down a server quickly, perhaps after a power failure when the UPS sustaining the server is nearing the end of its battery capacity.


The CHECKPOINT command is issued at a database level and is applied against the current database. If you have more than one database in your system, you need to issue the command against each database. In order to issue the CHECKPOINT command, you must be the database owner.
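
As a minimal sketch, assuming databases named pubs and sales (substitute your own database names), you could checkpoint each database and then shut down the server from ISQL as follows:

USE pubs
CHECKPOINT
go
USE sales
CHECKPOINT
go
SHUTDOWN
go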

Another option that will prove quite helpful is the Truncate Log on Checkpoint option. This option will automatically truncate the transaction log whenever a checkpoint is reached. To set this option, select the database in the Enterprise Manager and right-click it. Select Edit from the menu, then select the Options tab. Figure 18.1 shows the options that are available.

Fig. 18.1 - You can control the interval in which the transaction log is truncated from the Enterprise Manager.


If you set Truncate Log on Checkpoint as an option, you won't be able to back up the transaction log during the course of standard backups. This may not present a problem in your installation, as you can still do database backups, but you should consider your overall backup plan prior to setting this option. See the sections later in this chapter for more information on backup and restore operations.

If you have Truncate Log on Checkpoint enabled for a database, the transaction log will be truncated up to the point of the last successfully committed transaction, provided replication is not in use. If replication is in use, the log is truncated only up to the last transaction that has been both successfully replicated and successfully committed. Because replication is transaction-log based, this prevents the log from being truncated in cases where transactions have not yet been propagated to the subscribers for a given publication. More information on replication is presented in Chapter 20, "Setting Up and Managing Replication."
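
If you'd rather set the option from ISQL than from the Enterprise Manager, sp_dboption can do the same job. The following is a minimal sketch for the pubs database; the option name string shown here is an assumption, so verify it against the option list sp_dboption reports on your server:

use master
go
sp_dboption 'pubs', 'trunc. log on chkpt.', True
go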

Using the Database Consistency Checker

The Database Consistency Checker (DBCC) is a tool you use to obtain detailed information about the database objects that SQL Server manages. Because there are so many different facets to the SQL Server system and its handling of tables, objects, rules, triggers, stored procedures, and so on, it is helpful to be able to go into the server and run a "sanity check" to make sure all is well. The DBCC statement provides this functionality.

Setting Up to Ensure the Best Results: Single-User Mode

Before we get into the use and utility of DBCC, it's important to understand two conditions that should generally be in effect any time you want to use this statement. First, as a rule, you should try to make sure that as little activity as possible is hitting the SQL Server. If people are accessing the server and making updates or changes, you may receive errors when DBCC runs. This is due to the nature of the calls that DBCC performs. They are very low-level and, in many cases, require near-exclusive use of the database.

Second, you'll often have to ensure exclusive access to the database. In those cases, use the sp_dboption system stored procedure to set the database to single-user mode. The following is its syntax:

sp_dboption <database name>, 'single user', True
For example, if you want to run some checks on the PUBS database, you'll use the following command:

sp_dboption 'pubs', 'single user', True
This will prevent others from using the system while you perform your checks.


You'll need to be in the Master database prior to updating the options for the system. Be sure you issue a USE master prior to attempting to set options using sp_dboption.

Once you've turned on single user mode, you can perform the checks you need to make sure the database is running in top shape. When you've completed your work with DBCC, you can set the database back to multiuser mode by changing the True noted earlier in the sp_dboption command to False. For example:

sp_dboption 'pubs', 'single user', False
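
Putting these pieces together, the following is a minimal sketch of a complete consistency-checking session for the pubs database, using the NEWALLOC and CHECKDB options described in the sections that follow. Because no database name is given to the DBCC statements, they check the current database:

use master
go
sp_dboption 'pubs', 'single user', True
go
use pubs
go
DBCC NEWALLOC
go
DBCC CHECKDB
go
use master
go
sp_dboption 'pubs', 'single user', False
go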

Using the DBCC Options

As mentioned earlier, DBCC supports many different options. In the next few sections, I'll describe the most often used options and what they can do to help in your administration of your SQL Server.

Using DBCC NEWALLOC

NEWALLOC replaces the older CHECKALLOC option. With CHECKALLOC, processing would stop if an error was found, sometimes obscuring other problems with the database. NEWALLOC does not stop when an error is found; it continues on and reports all errors that it finds in the database structures. It has the following syntax:

DBCC NEWALLOC <database name>
If you leave out the <database name> parameter, SQL Server will check the current database.

When NEWALLOC runs, it will return detailed information about your system and its database objects. This information can be used to point you in the direction of any problems that may be occurring on the system. The next listing shows a portion of a report, run against the standard pubs database. This sample indicates the type of information you can expect to receive from the NEWALLOC option.

Listing 18.1—Sample Output from the NewAlloc Statement

Checking pubs
***************************************************************
TABLE: sysobjects          OBJID = 1
INDID=1      FIRST=1      ROOT=8      DPAGES=4     SORT=0
     Data level: 1.  4 Data  Pages in 1 extents.
     Indid       : 1.  1 Index Pages in 1 extents.
INDID=2      FIRST=40      ROOT=41      DPAGES=1     SORT=1
     Indid       : 2.  3 Index Pages in 1 extents.
TOTAL # of extents = 3
***************************************************************
TABLE: sysindexes          OBJID = 2
INDID=1      FIRST=24      ROOT=32      DPAGES=4     SORT=0
     Data level: 1.  4 Data  Pages in 1 extents.
     Indid       : 1.  1 Index Pages in 1 extents.
TOTAL # of extents = 2
...

...
***************************************************************
TABLE: pub_info          OBJID = 864006109
INDID=1      FIRST=568      ROOT=584      DPAGES=1     SORT=0
     Data level: 1.  1 Data  Pages in 1 extents.
     Indid       : 1.  2 Index Pages in 1 extents.
INDID=255      FIRST=560      ROOT=608      DPAGES=0     SORT=0
TOTAL # of extents = 2
***************************************************************
Processed 49 entries in the Sysindexes for dbid 4.
Alloc page 0 (# of extent=32 used pages=57 ref pages=57)
Alloc page 256 (# of extent=25 used pages=34 ref pages=34)
Alloc page 512 (# of extent=15 used pages=39 ref pages=39)
Alloc page 768 (# of extent=1 used pages=1 ref pages=1)
Alloc page 1024 (# of extent=1 used pages=1 ref pages=1)
Alloc page 1280 (# of extent=1 used pages=1 ref pages=1)
...
...
Alloc page 31744 (# of extent=1 used pages=1 ref pages=1)
Alloc page 32000 (# of extent=1 used pages=1 ref pages=1)
Total (# of extent=196 used pages=261 ref pages=254) in this database
DBCC execution completed. If DBCC printed error messages, see your 
     System Administrator.
You can see that a huge amount of information is returned. The good news is that you need only search the report for exceptions. Key in on any problems that are reported and work with those. The balance of the information is provided to confirm database table structures, page allocations, and so on. If you do receive an error message, you should also receive specific instructions on what needs to be done to correct the problem.


If there are objects on removable devices, NEWALLOC might return warning message 2558. This warning can be ignored. It is caused by a necessary setting for objects residing on removable devices.

Using DBCC CHECKDB

When you run CHECKDB, each table and its associated data pages, indexes, and pointers are all validated. Each is tested to ensure that it properly links to its related information. CHECKDB has the following syntax:

DBCC CHECKDB <database name>
If you leave out the <database name> parameter, SQL Server will check the current database.

The following excerpts from the resulting listing show what you can expect to see from the CHECKDB option:

Listing 18.2—Sample Output from the CheckDB Statement

Checking pubs
Checking 1
The total number of data pages in this table is 4.
Table has 70 data rows.
Checking 2
The total number of data pages in this table is 4.
Table has 49 data rows.
Checking 3
The total number of data pages in this table is 10.
Table has 283 data rows.
Checking 4
The total number of data pages in this table is 1.
Table has 29 data rows.
Checking 5
The total number of data pages in this table is 23.
Table has 165 data rows.
...

...
Checking 592005140
The total number of data pages in this table is 1.
Table has 14 data rows.
Checking 688005482
The total number of data pages in this table is 2.
Table has 43 data rows.
Checking 864006109
The total number of data pages in this table is 1.
The total number of TEXT/IMAGE pages in this table is 73.
Table has 8 data rows.
DBCC execution completed. If DBCC printed error messages, see your 
     System Administrator.
You'll notice that the option also checks system tables. When you run the command, you'll be checking all tables, in all respects, in the database you specify in the command.

Using DBCC SHRINKDB

When you create a database, you often overestimate the disk space that will be needed to support it. Once you've put the database into production, you can tell how big the database really needs to be. If you've determined that you can make the database smaller—freeing up space for other databases on your system—you can use the SHRINKDB option. Keep in mind that the SHRINKDB option works in pages. This means that you'll have to determine the size you want to make the database and divide that size, in bytes, by 2,048 (the size of a database page) to get the value to specify for this command. The following is the syntax for SHRINKDB:

DBCC SHRINKDB <database name>, <new size>
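
As a worked example, suppose you want to shrink a hypothetical database named sales to roughly 40M. Dividing 40M (41,943,040 bytes) by 2,048 bytes per page gives 20,480 pages, so the command would be:

DBCC SHRINKDB sales, 20480
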
It's a good idea not to shrink the Master database, as this database contains all information about your system. If you find that shrinking this database is necessary for some reason, be sure to back it up first. If you consider for a moment that you'll be modifying the database that manages the different recovery mechanisms for the system, you'll quickly understand how, if a system failure occurs while shrinking the Master database, you may not be able to recover it without a backup.


If you issue the SHRINKDB statement without a <new size> parameter, SQL Server will report the smallest size to which the database can be shrunk. You can use this value to determine the new size of your database.

Current Size of Database   Size Database Can be Shrunk To
------------------------   ------------------------------
32256                      17152 

(1 row(s) affected)

Objects pvnt further shrink      Index 
---------------------------      -----
syslogs                          data 

(1 row(s) affected)

DBCC execution completed. If DBCC printed error messages, see your 
     System Administrator.

A database cannot be shrunk to a size that is less than the model database.

You'll notice that, if you request the smallest possible size, SQL Server will show how far you can shrink the database. In addition, there will be an indicator about whether something additional can be done to shrink the database further. In the listing above in the current configuration, the database can be reduced to 17152 pages, or about 35M. If you are able to manipulate the other items listed as preventing further shrinking, you can make the database even smaller.


You should rarely use the absolute minimum value when shrinking a database because any active database is bound to grow over time and use. If you've decreased the size of the database so severely as to limit growth for a given application, you'll be working at cross-purposes with the design goals of your client/server SQL Server-based application.

Understanding and Using update statistics and recompile

SQL Server gains much of its performance from intelligent processing of data stored in the tables. This analysis comes in different forms, but the most significant is the examination of real data in the system to determine the optimal path for retrieving the information.

As an example of this, consider the route to your house. You've likely figured out the best, fastest way to get to your house from your office. Once you've found the best route, it's a safe bet that you're more likely to take that route—even if someone were to suggest that you select a different route—because you've taken the time to consider the different roads available and selected the best one possible.

In order for you to seriously consider a new route home, there would need to be a new road offering a better route to your house, or there would have to be construction on the existing road that makes it an inefficient road home.

With SQL Server, this analogy holds true. When you implement a stored procedure, SQL Server reviews the SELECT logic of the stored procedure, along with any other data-impacting events. It figures out the best route to take in fulfilling the stored procedure's function. Once this is selected, it remembers this information so that the next time you run the stored procedure, it's optimized for the data it impacts. SQL Server's "roads" consist of the indexes on the data in the system. These are the paths that will be considered when analyzing the best way to retrieve a given value set.

If you've added a number of rows to the table (for example, more than 20 percent of the original table size), you should consider updating the statistics associated with the table. The syntax for updating a table's statistics is as follows:

update statistics <table name>[.index name]
For example:

update statistics authors

There are no quotes around the table name you want to update.

You can specify that only a given index be updated or that the information concerning the entire table be updated. If you specify the index, you must specify it as shown, indicating the table in which the index can be found.

The final step to implement these updated indexes and make the stored procedures aware of them is to indicate to the SQL Server that it should reconsider the route to take when retrieving the information from the database. To do this, you'll need to recompile the different stored procedures that are impacted by the update statistics you just ran.

In general, stored procedures are compiled the first time they're called after SQL Server is started. There are other instances in which stored procedures are automatically recompiled (for example, when an index on which a stored procedure is based is dropped), but simply issuing the update statistics statement will not cause an automatic recompile. You've probably noticed that the first time you call a particular stored procedure, the execution is a bit slower. Subsequent calls to the stored procedure are often noticeably faster. This is because the optimizer is compiling the stored procedure based on the current statistics of the database tables and any relevant indexes.

You recompile a stored procedure by setting a flag on a table; this flag invalidates any copies of stored procedures in the procedure cache that reference the table. SQL Server then reloads and recompiles the affected stored procedures the next time they are accessed. To set the flag, the following command is used:

sp_recompile <table name>
where <table name> is the table that will be marked so that all referencing stored procedures are recompiled. If successful, the command returns a simple acknowledgment message indicating that the stored procedures will be reloaded and recompiled.

sp_recompile "authors"
go
Each stored procedure and trigger that uses table authors
will be recompiled the next time it is executed.

If you don't recompile the stored procedures impacted by update statistics, the performance gain you seek will not be realized until the server is stopped and restarted again, as this is the next time that the procedures will be reloaded and compiled.
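
The following is a minimal sketch of this two-step maintenance sequence, assuming the pubs database and its authors and titles tables; substitute the high-use tables in your own system:

use pubs
go
update statistics authors
update statistics titles
go
sp_recompile authors
go
sp_recompile titles
go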

You can't hurt anything by running the update statistics and recompile operations, but you should run them when the fewest users are on the system. This is especially true if your database sizes are substantial; updating these statistics can be quite a hit on system performance for the users of your system. It's a good idea to make updating statistics a part of your regular ongoing maintenance, perhaps running the process on each of your high-use tables approximately once a month during periods of normal, high-volume use. When you're first bringing up a system, or when you're adding information at a high rate, you may want to consider frequent calls to this procedure—perhaps as often as once per day of heavy, new data input.

Backing Up and Restoring Databases and Transaction Logs

One of the most dreaded things to hear when you call for help with your application is, "Well, when was your last backup?" It usually happens only once, but when it does and you're caught without a backup, it can be very painful.

With SQL Server, this is often many times more important than on a stand-alone system. You'll often be supporting a database that is used by many, many people, and keeping the information on the system in good standing is very important. Data loss may incur costs across departments and may involve significant data re-entry.

SQL Server provides a couple of different ways to protect from data loss—or at least minimize it—should the worst happen. These techniques range from physical duplication of the information to backing up the databases and transaction logs at specific intervals during the course of business.

In the upcoming sections, you'll see how you back up the system and what tradeoffs to consider in determining your optimum backup scenario.

How Often to Back Up?

One of the first things that a new SQL Server customer will ask is, "How often do I need to back up?" The answer is simple, but seems sarcastic when it's first posed. Quite simply, how much data can you afford to lose?

The first answer to this is usually "none." This is fine, but then it's time to sit down and determine what impact a plan to incur no data loss in the event of a catastrophic system failure will have on the system. Typically, this involves mirroring databases, which incurs hard-dollar costs to implement. For more information on mirroring a database on your system, see "Understanding and Using Mirroring" later in this chapter.

Because the cost to mirror a system can be quite steep, the next step is to determine what amount of data can truly be at risk at any given time. To determine this, it's important to first understand that you'll be backing up two different components of the system. As you've seen throughout this book, the database contains the different tables and all other objects associated with the database. The transaction log contains an incremental log of the things that happen to a database that change it. In essence, the transaction log contains a before-and-after picture of the data that is changed with each block of work performed by the server.

The key thing to remember is that the database backup is a snapshot in time, and the transaction logs contain all of the changes since that snapshot was taken. At a minimum, you'll want to back up the database once a week with daily backups of the transaction log.

The best plan is to have a system of rotating backups. This is the safest and most secure way to ensure that you have the correct fault tolerance for any system failures you may experience. If you consult any computer professional, you'll find that they all agree on one thing: your system will fail; it's only a question of when. You should have complete, concise, and accurate backups. They are the only lifeline you have in times of hardware failure.

Table 18.1 shows a suggested backup schedule. Over the course of a year, assuming you're backing up to tape, this approach will require 29 tapes. Fourteen tapes are used in the weekly backup of your system and 13 are used to maintain monthly archives. The remaining two tapes are used as (1) working transaction log backups and (2) an initial, baseline backup of the database. Remember, this is to maintain backups, not to retain historical data on the system. You should not count on backups for retention and research on historical information. You'll need to rely on alternative backups for that purpose.

Number the tapes sequentially on a permanent label on each tape cassette.


If your database is large, you may need more than one tape per day, week, or month. In this case, consider the tape numbering scheme to be a tape set numbering approach.

Table 18.1—Suggested Backup Approach
Tape #   Used For            Comments
------   ----------------    ---------------------------------------------
1        Monday backup       Backup for the first Monday of two-week cycle
2        Tuesday backup
3        Wednesday backup
4        Thursday backup
5        Friday backup
6        Saturday backup
7        Sunday backup*
8        Monday backup       Backup for the second Monday of cycle
9        Tuesday backup
10       Wednesday backup
11       Thursday backup
12       Friday backup
13       Saturday backup
14       Sunday backup*

* These tapes should be removed from the physical site and placed in a separate location.

With this backup schedule, you'll have a full two weeks of backups to fall back on should a problem arise. I recommend a two-week backup cycle because you may discover a problem that wasn't recognized immediately. Perhaps a program change or other system event caused the problem, and it went undetected for a few days. By having two weeks of backups, the odds are good that you have a clean copy of the information in those recent archives.

Backing Up and Restoring Databases

When you back up your databases and transaction logs, you do so by dumping the information in your system to a dump device, which is recognized by SQL Server as a repository for information and can be either a disk file or a tape device. Once SQL Server has dumped information to the device, the file it creates (in the case of a disk-based device) can be backed up to tape, another server, or some other location where you manage the backup files.

The best way to manage the backup of information is with the SQL Enterprise Manager. From the Tools menu, select Database Backup/Restore to start working with the backup subsystem. When you do, you'll be presented with the main Database Backup/Restore dialog box shown in Figure 18.2.

Fig. 18.2 - The main control panel for backup and restore operations enables you to select and create devices and set scheduling options.

Setting Up Backup Devices

The first thing you'll need to do is to establish the backup devices to be used as the destination for information dumped from the working databases and tables. Devices that are already known to the system are listed in the dialog box in the Backup Devices window. You can accept one of these devices, or you can create your own device.

To create a device, click the New button below the Backup Devices window. This will call up the New Backup Device dialog box, shown in figure 18.3, and will prompt you to name the device.

Fig. 18.3 - You can create new devices that will identify a tape or disk destination for backups.

Notice that, as you type in the name for the device, the location will be updated to include the name you use.


You can specify a path that points to another physical system when you indicate the backup devices that you want to use. By doing so, you can provide good backup coverage without the need for additional storage media.

Consider creating a new device for each day in the 14-day cycle mentioned above. Then, when you create a scheduled backup, you won't necessarily have to use a tape to store the information if you point to the device on the remote system and use it as a backup destination. If you set up the system to back up to the devices without the append flag, each of the devices you create will be only as big as needed for a single copy of the database you're backing up.

Also, because the likelihood that both your core SQL Server system and the remote system will experience downtimes simultaneously is quite small, you are assured of solid backups for your system.
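
If you'd rather create dump devices from ISQL than from the New Backup Device dialog box, sp_addumpdevice performs the same step. The following is a minimal sketch; the server name, share, and device names are hypothetical, so substitute your own:

sp_addumpdevice 'disk', 'pubs_day1', '\\BACKUPSRV\SQLDUMPS\pubs_day1.dat'
go
sp_addumpdevice 'disk', 'pubs_day2', '\\BACKUPSRV\SQLDUMPS\pubs_day2.dat'
go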

The downside points to remember for this approach include the following:

  • The remote system must always be accessible to the backup and restore process.

  • The remote system must have enough disk space to support the dump devices.

  • If your building burns down and both systems are in the same building, you've lost your backup system. This is a major reason for using off-site tape or other media backups.


Keeping these rules in mind, you can see that a solution that offers the best of both worlds would be a combination of remote system backups and off-site storage for tape backups made slightly less frequently.


Be sure to take note of the file name if you're backing up to a file on disk instead of tape. This file name is the name you'll need to back up to tape or elsewhere on the system when you need to copy the backup file off the system for storage.

After you've indicated the name of the device you want to create, simply click Create. SQL Server will create the new device, and you'll see it listed in the Backup Devices window. You're all set and ready to start creating the backups you'll need to support your SQL Server. Figure 18.4 shows a database with 14 backup devices, one for each day of a two-week backup cycle.

Fig. 18.4 - The Master database now has a backup device for each day of a two-week backup cycle.

The next sections will detail what's involved in setting up the backups on the system.

Running the Backup

After you've created the devices you need to support the backups, you need to do the actual backup. To that end, you have two options available. The first option is to run backups on demand. Although this will work fine, it will require manual intervention each time you want to back up the system. This approach is more prone to forgetfulness, unexpected meetings, and the like than an automated solution.

The following are some general points you need to keep in mind as you determine your backup strategy:

  • Be sure you back up the transaction logs between backups of the databases. Backing up the transaction log is no different than backing up a database. It's simply an option you select during the definition of the backup.

  • Be sure you back up each database that you need to protect. There is no command that backs up the entire system, so you'll need to create backup processes for all databases you are concerned with.

  • Be sure you back up the Master database. Because this database includes all information about the other database objects that are in the system, it's very important to have this information in case you need to restore the entire system and rebuild from scratch.

You'll need to select the type of backup, either manual or scheduled/automatic, that will best fit the systems you are supporting.

Completing Manual Backups
Manual backups are quite straightforward and are a good way to get a feel for how the system works, the time it takes to back up, and how much impact a database backup will have on your system if other users are on the system at the time that a backup is started.


When a backup is first run, the device must be initialized. This is done by selecting the Initialize Device checkbox in the Options section of the Backup page. If a backup is attempted without initializing the device, you will be prompted with a warning stating that the device is either offline or uninitialized.

To complete the backup, select the device you want to back up to and click Backup Now. The Backup Volume Labels dialog box appears (see fig. 18.5).

Fig. 18.5 - If you select a device that has already had a backup completed, you'll be prompted to pick the label to which information should be appended.

Once you select the device and click OK, the backup will be completed. You'll be presented with a status indicator showing the progress of the backup. Finally, you'll be presented with a dialog box confirming that the backup was successful.


I've followed the steps for creating the backup, but when I run the backup, it fails. What should I be looking for to solve this problem?

The first thing you'll need to examine is the free space on the dump device you've created. Make sure you have enough free space for the device to grow to accommodate the information you're saving. Remember, too, that if you have the Append option selected, the information you save will be added to the prior information each time you run a backup. This will eventually lead to some very sizable dump devices. By using the rotating tapes as noted above, you can safely back up to separate dump devices each time you back up because you'll have the prior backup's information stored safely away.


When I try to back up the Transaction Log, I get a message that indicates that I can't dump the transaction log while Truncate Log on Checkpoint is set. What do I do?

You may recall from the earlier section "Understanding and Performing Checkpoints" that one way of helping to manage transaction log size is to turn on the Truncate Log on Checkpoint option. Another way to manage the size of the log is to dump the transaction log to a backup device; by default, SQL Server also truncates the log after a successful backup. Because both operations take control over truncating the log, they are mutually exclusive. You'll need to turn the Truncate Log on Checkpoint option back off before you can successfully back up the transaction log.

Scheduling Automated Backups
The safest bet, especially for incremental backups, is a regularly scheduled, automated backup. As shown in figure 18.6, you set up automated backups by clicking the Schedule button (refer to fig. 18.4) after you've indicated the database and the device to which you want to back up.

Fig. 18.6 - Scheduled backups offer a way to avoid relying on human intervention for the completion of this critical job.

If you're doing append-type backups, you'll be prompted to indicate the volume you want to append to. Once you have indicated the volume, you'll see the Task Schedule dialog box, as shown in figure 18.7, enabling you to indicate the details for the scheduled backup.

Fig. 18.7 - You have complete flexibility in setting the times for your backups, the recurring intervals, and more. In this example, we've set up Monday, Wednesday, and Friday backups.


For future reference, take note of the SQL Command text box (refer to fig. 18.6). It shows you what commands you can run from ISQL that will complete the backup manually. It's good practice to keep up with the commands you can enter manually, as you may want to implement them in your own application at a later date.
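
As a rough guide to what you'll see there, the manual equivalents are the DUMP DATABASE and DUMP TRANSACTION statements. The following sketch assumes hypothetical dump devices named pubs_day1 and pubs_log; WITH INIT overwrites the contents of the device rather than appending to it:

DUMP DATABASE pubs TO pubs_day1 WITH INIT
go
DUMP TRANSACTION pubs TO pubs_log
go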

The first two options in the dialog box enable you to do a single backup at some predetermined time. These are good first passes, and we highly recommend that your first backups be scheduled, attended backups so you can make sure all is working well, the files are created correctly, and your backup process works in general.

Once the backup process has gained your confidence, you can change the allotted times, create recurring backups, and more. To change the times at which the backup occurs, first select the Recurring option, then click Change (refer to fig. 18.6). You'll be presented with the dialog box shown in figure 18.7, indicating the different options you have in determining the times for your backups to run.

The final step is to set up any options you want to enable. This is done by clicking the Options button in the Schedule Backup dialog box. The most likely of these is logging to the Windows NT Event Log. It is highly recommended that you turn on the option to log successes, as well as failures, to the log. This gives you a definitive way to check whether each backup succeeded. Figure 18.8 shows this option set. The remaining options are used for e-mail notification if you're using the mail services of SQL Server, and they enable you to establish retry parameters. If you are having failures because of network traffic to a remote server where your device is maintained, you may want to increase the number of retries.

Fig. 18.8 - Be sure to check the option to log to the event log for successes as well as failures. This is a good mechanism for following up on your automated processes.


If your system needs the retry parameters modified, you should probably investigate the cause of the failures before you implement retries. If you have a less reliable connection to the server containing the device you use for backup, consider moving the device. Backups are too important to leave exposed to problems caused by network traffic or other factors that will cause you to need retries.

When you click OK to close the dialog boxes that enable you to define the backup schedule, you'll be returned to the Database Backup/Restore dialog box. You'll notice right away that there is no indication that the backup you've just created was created successfully. This is a bit disconcerting, but take heart. The information about the backup is now part of a scheduled background task. As such, you need to review it from the Task Scheduling portion of the SQL Enterprise Manager rather than the backup and restore utility dialog boxes. From the Server menu, select Scheduled Tasks. You'll find the Task Scheduling dialog box (see fig. 18.9), and you should see your backup job listed as one of the first items in the list of things to do.

Fig. 18.9 - Scheduled backups are created as background tasks that can be managed from the Tasks utility in the Enterprise Manager.


If you are at all uncertain about which of the listed tasks reflect the backup jobs you created, be sure you identify them before you start working with tasks. Because all background activity is managed through the Tasks listing, you can cause damage to processes that control replication, general cleanup, and many other system jobs if you remove or modify an incorrect item.

You can also use the Task listing to check the status of backups as another way of checking to see if there are any problems. You should use a combination of the Task listing and the NT Event log to determine the cause of any problems you may encounter during the backup process.

Don't forget that you need to be absolutely certain you've backed up all of the following:

  • The Master database

  • All databases that you would be sorry to lose

  • All transaction logs for each database you have in production or that is undergoing significant testing

Using and Understanding Information Restoration from Backups

Once you've created the backups and have faithfully been backing up your database, how do you recover the database if something does go wrong? This is where you'll be glad you put into place a formal backup plan, as it will guide you to restoring your system to full functionality in the least amount of time possible.

The following are the steps for restoring your system:

  1. Install SQL Server if needed.

  2. Restore the Master database if needed.

  3. Re-create the devices if needed.

  4. Restore the last full database backups you completed.

  5. Restore the transaction logs that were backed up since the database was backed up.

At the completion of this, you'll have a fully functional system that will be up-to-date as of the last transaction log backup.

You've already learned about installing SQL Server and creating devices, so I won't cover that information here, but if you'd like more information on this, see Appendix D, "What's on the CD?" and Chapter 4, "Creating Devices, Databases, and Transaction Logs"


When you create a new database, you should create it with the Create for Load option selected. This will cause SQL Server to create the database without the initial sets of pointers in the database. Because you'll be loading this information from the backup anyway, there's no reason to have the work done twice. The time saved in creating the new database can be nearly 50 percent of the time required to create an entirely new database with the option deselected.
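
A minimal sketch of creating such a database from ISQL follows; the device names DATADEV1 and LOGDEV1 and the sizes shown are hypothetical placeholders for your own devices and sizing:

CREATE DATABASE RestoreTest
ON DATADEV1 = 33
LOG ON LOGDEV1 = 30
FOR LOAD
go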

When you select the Tools, Database Backup/Restore option, and then select the Restore tab, you'll first be presented with a dialog box containing all of the valid backups available. This listing will show the date and time of the backup, the type of backup (database or transaction log), and so on. Figure 18.10 shows the options available on the Restore tab.

Fig. 18.10 - The restore options enable you to designate the source and destination for restoration efforts.

SQL Server will automatically present the list of databases for which a database backup has been completed. This enables you to select the different backups that you want to apply to the database you designate in the drop-down list box.

If you know that the information on a given backup is good until a certain time, you can indicate that in the Until Time text box. Note that any pending transactions at the time you specify will be rolled back and it will be as if they never occurred. In most cases, you'll need to restore the database and transaction logs in their entirety, so default options will be fine.

Click Restore Now and the database and transaction log(s) will be restored.
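
Behind the scenes, the restore corresponds to the LOAD DATABASE and LOAD TRANSACTION statements. The following is a hedged sketch assuming hypothetical dump devices named pubs_day1 and pubs_log; the STOPAT clause corresponds to the Until Time box, and the date shown is only an example:

LOAD DATABASE pubs FROM pubs_day1
go
LOAD TRANSACTION pubs FROM pubs_log
go
/* or, to recover only up to a point in time: */
LOAD TRANSACTION pubs FROM pubs_log WITH STOPAT = 'Jul 1 1996 5:00PM'
go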

If you created the new database with the Create for Load option, you'll need to update the database options before anyone else can use the database. You'll notice that, until you do this, the database you're working with will be flagged in the Enterprise Manager's display as "(loading)." During this time, no user except the DBO can use the database. To update the options and allow other users access to the database, select Manage, Databases from the menus. You'll see a dialog box similar to the one shown in figure 18.11.

Fig. 18.11 - You must remember to update the options for any database created explicitly for loading. Otherwise users will not be able to access it.

Double-click the bar representing the database you need to update. You'll be presented with the now-familiar Edit Database dialog box. Select Options, then deselect the DBO Use Only option (See fig. 18.12).

Fig. 18.12 - The DBO Use Only option indicates that the database is being loaded and is not ready for production use. Deselect the option to "turn on" the database for other users.
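
The same change can be made from ISQL with sp_dboption. The following is a minimal sketch; the option name string is an assumption, so verify it against sp_dboption's list on your server, and substitute your own database name:

use master
go
sp_dboption 'RestoreTest', 'dbo use only', False
go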

Once you've completed this step, you're restored to the point of the most recent transaction log backup and should be ready to begin some quick testing to see that everything restored correctly and boost your confidence in the process. At this point, the data is restored, all objects in the database are restored and, to the users, there should be no change in operation whatsoever.

In the case of recovering from a system failure, practice will certainly play a key role. As a test of this process, consider using the PUBS database as a guinea pig. Follow these steps:

  1. Perform a database backup of the PUBS database.

  2. Make a change to one of the authors in the Authors table. The specifics of the change don't matter: you can add a row, change a name, or make any change that you'll easily recognize later.

  3. Perform a transaction log backup of the database. You've now backed up the basic database, and you've backed up the change you made as a sort of "delta" (or difference) backup.

  4. Create a new database (for example, RestoreTest). The name does not matter, only that it has sufficient size to support the PUBS database. Make the database size 33M and the transaction log size 30M. Restore only the database to your system; you'll have to deselect the transaction log restoration option.

  5. Go into ISQL/W and select the rows from the Authors table. They should all be represented correctly, as they existed prior to your change. Your changes won't yet appear because they occurred after the database was backed up. In a traditional backup scenario, this was as good as it got for recovery.

  6. Now restore only the transaction log. Re-query the Authors table, and your changes will be there. This proves that the transaction log changes were correctly logged against the database table.

You can and should experiment with different scenarios using the test database approach. This way, when it comes time to restore your database in a real crisis, you can rest assured it will go smoothly. Think of these tests as the fire drill for database recovery!

Understanding and Using Mirroring

In addition to the backup and restore techniques covered in the previous sections, SQL Server also supports an additional fault-tolerant option: Mirroring. Because hard drives are the most likely candidate for a system failure that impacts data, you may want to consider this option to protect your databases.


Windows NT also supports several fault-tolerant options that are enforced at the operating system level. For more information, please refer to Que's Special Edition Using Microsoft Windows NT Server and see the information topics regarding striped disk storage.

Mirroring takes place at the device level and is transparent to your applications. When you create a device and indicate that it should be mirrored, you simply tell SQL Server where you want to locate the mirror. Take care in selecting the location of the mirror. You'll be doing yourself little or no good if you place the mirror device on the same physical drive as the source device. If you do, and the drive fails, you will lose both the source device and the mirror.

Overview of Mirroring

Mirroring means that SQL Server writes changes to a device to two locations at once. The primary location is your standard device; this is the device holding the databases that you normally use and have incorporated in your system's applications. Because both devices are identical, if a switch is required from the main device to the secondary device, there is no data loss and the users of your system will not be interrupted in their use of the system.

As SQL Server uses a device, if it can no longer access the main device, it automatically switches to the mirror device, making all future changes to the mirror. When you've recovered the drives containing the problem device, you can move the information back to the original device and restart SQL Server using the corrected devices as the default once again.

Setting Up Mirroring

When you create a device—or after the fact, if you edit a device—you'll notice an option to establish mirroring. In Figure 18.13, you can see the options that enable you to work with a device.

Fig. 18.13 - Right-click a device and select Edit to modify mirroring options.

Click on Mirroring to indicate where the mirror file will be located. Remember, the paths you designate are relative to the server. If you're working from a workstation, be careful to keep this in mind. You can select a location that is either on the current server or on a fully qualified network path (see fig. 18.14).

Fig. 18.14 - When you select the location of the mirror file, SQL Server will mirror the requested database.

After the process has completed, the device will be mirrored and SQL Server will take care of the rest. From that point on, any informational changes to the primary device will also be reflected in the mirror device.
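
The Transact-SQL counterpart of this dialog box is the DISK MIRROR statement. The following is a minimal sketch, assuming a hypothetical device named DATADEV1 with its mirror file placed on a second physical drive:

DISK MIRROR
   NAME = 'DATADEV1',
   MIRROR = 'D:\MSSQL\DATA\DATADEV1.MIR'
go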


If you implement mirroring for a device, be sure you also mirror the Master database. This database is responsible for maintaining the system information necessary to continue processing should mirroring become necessary for any device. When you mirror the Master database, you're ensuring that you'll be able to enable SQL Server to recover to the mirrored devices that have failed.

If you do not mirror the Master database, you may find that you cannot utilize the mirrors for other devices, as this information (the information about the mirrored devices managed by SQL Server) is maintained in the Master database and could be lost.

The final step applies only to the Master database: you need to tell SQL Server where the Master device's mirror can be found, because the Master database is used to start the server. Other devices are managed automatically, so this step is required only for the Master database.

There are two parameters required at SQL Server startup that control the Master database's mirrored state: -r and -d. Figure 18.15 shows the SQL Server Configuration/Options dialog box. Choose Server, SQL Server, Configure to open this dialog box. Click the Parameters button on the Server Options page and then add the startup parameters.

Fig. 18.15 - Startup options that enable mirroring the Master database can be set in the SQL Server configuration subsystem.

Figure 18.16 shows the dialog box that enables you to indicate the startup options.

Fig. 18.16 - Be sure to specify both the -r and -d options for the Master device. If both are not found, the mirroring will not work correctly.

Add a new parameter, -r<mirror location>, that indicates the location of the mirror for the Master device. Note that, even after SQL Server has switched to the mirror device during normal use, when you restart the system it must still first try the primary Master device. This means that you must leave in place the -d specification that shows the location of the "normal" Master device. When you add the -r option, you're specifying the fallback position that SQL Server should use when it has determined that the Master device has become unusable.

Click the Add button to create a new parameter, and be sure to specify the entire path and file name to the mirror device file when you create the -r parameter.
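
For example, with the Master device in its default location and its mirror on a second drive, the two entries might look like the following; both paths are hypothetical, so use the actual paths on your own server:

-dC:\MSSQL\DATA\MASTER.DAT
-rD:\MSSQL\DATA\MASTER.MIR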

What to Do When Mirroring is in Force

When SQL Server has switched to a mirror, it automatically suspends mirroring with the original device. This frees up the device to be replaced or otherwise corrected. When the mirroring is active, the users of your system will not notice any difference in the functioning of their applications. They'll be able to continue working just as they did before the problem was found with the original device.

Once you've replaced the original device or otherwise corrected the problem that was detected, you can re-mirror the active device back to the primary device. Re-mirroring is identical to the original job of mirroring a device. You simply specify the file name that will contain the mirror, and SQL Server will take care of the rest.

If you want to switch from a current device to the mirror, as may be the case when you've just brought online a replacement for a failed device, the following are the steps you'll follow, from the creation of the original mirror, through the failure and replacement of the device, to re-mirroring to return to the original state:

  1. Mirror the original device.

  2. The device fails; SQL Server switches to the mirror device and continues processing.

  3. Replace or correct the original device.

  4. Re-mirror the active device back to the original.

  5. Unmirror the device, selecting the Switch to Mirror Device—Replace option.

  6. Mirror the device—now the replacement for the original—back to the backup device.

At this point, you're back where you started with the original device being the active device and the mirror standing by as needed to back it up.
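
If you carry out these steps from ISQL rather than from the Enterprise Manager, re-mirroring and unmirroring are handled by the DISK REMIRROR and DISK UNMIRROR statements. The following is a hedged sketch for a hypothetical device named DATADEV1; check your documentation for the unmirror arguments that correspond to the Retain and Replace choices in the dialog box:

DISK REMIRROR
   NAME = 'DATADEV1'
go
DISK UNMIRROR
   NAME = 'DATADEV1'
go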

As you can see, mirroring your devices can provide powerful backup capabilities for your key production systems. For more information about backups, be sure to review the backup and restore options presented in the first half of this chapter.

Transferring Information to and from SQL Server

The SQL Transfer Object utility provides an excellent way to move information between databases on your system. To start the Transfer utility, select the source database. Next, select the Transfer option from the Object menu (or right-click the database). Figure 18.17 shows the initial Transfer Objects dialog box.

Fig. 18.17 - Select the options carefully when you use the Transfer utility. They control the objects that will be copied to the new database.

By default, copying an entire database to a new location is as simple as selecting the source and destination database and clicking the Start Transfer button (refer to fig. 18.17). SQL Server will copy all of the different objects to the new database and set it up to match the source database.

You can also initiate the copy process at a scheduled time, such as a time when server traffic is lower and the copy process will not impact the users of the system as adversely. By selecting Schedule, you can indicate the time at which you'd like the copy to occur. As shown in figure 18.18, the scheduling capabilities are nearly identical to those you used in setting up backups for your server.

Fig. 18.18 - The Transfer utility can be used to make periodic copies of key information in your system.

If needed, you can even use the Transfer utility to make rudimentary backups of your system by copying databases to backup devices and databases possibly located on a different server altogether. You should, however, avoid using the Transfer process as a production means of backing up your system. It's not meant to be an end-all solution to the backup needs for your system and will not support the same level of logging or critical processing checks that the backup process affords. There is, for example, no way to transfer only a transaction log for a given database.

If you deselect the Transfer All Objects option in the Transfer Objects dialog box (see fig. 18.19), you'll be able to specify exactly what objects should be included in the copy by clicking the Choose Objects button.

Fig. 18.19 - You can select several options that control how objects are transferred during this process.

Figure 18.20 shows the different objects that you have available with a typical copy.

Fig. 18.20 - You can selectively copy specific objects or attributes for your database. You can use this feature to update another system to keep it in sync.

You can also indicate the different options that are transferred as part of the scripting process. As the Transfer utility puts the information from your source database into the destination system, it will script the creation of tables, logon IDs, and other such items that are not information-content based, but are structure or configuration related. You can control these items, as shown in figure 18.21, by deselecting the Use Default Scripting Options check box in the Transfer Objects dialog box (refer to fig. 18.19) and clicking the Scripting Options button.

Fig. 18.21 - You have complete control over the objects and configurations that will be included in the transfer process. You can deselect logons if you've already set up security on the destination device, for example.

The Transfer Objects utility is a good companion to the backup and mirroring features you've seen earlier in this chapter. By putting these to use in your installation, you'll be able to ensure a maximum amount of up time for your user base. Use the Transfer Objects utility to make copies of critical system components, then use mirroring and backups for ongoing protection against catastrophic system failures.

From Here...

In this chapter, we've explored some very important administrative utilities that provide the support for good, solid, fault-tolerant system design for your users. The time to implement these items is now, not later; if you wait until you need them, it will be too late. Keep in mind the old computer-support adage we mentioned earlier: components will fail, it's only a matter of time. Plan for it and be ready for it, and recovery will be far less painful and difficult.

The following are some additional sources for some of the topics covered in this chapter that will help you in your administration of SQL Server:

  • Chapter 4, "Creating Devices, Databases, and Transaction Logs" covers the specifics of setting up the different components for your system.

  • Appendix A, "Installation and Setup of the Client and Server Software" explains the details behind the initial setup and configuration of your system. Remember to add the -r option to the startup command line if you'll be using mirroring.

  • Que's Special Edition Using Microsoft Windows NT Server covers more specifics about disk-based recovery and fault-tolerant systems. Be sure to check into striped disk drive configurations and how you can work with backup devices.

