Tracing Data Access

 

Note: An updated version of this article for SQL Server 2008 is available at https://msdn.microsoft.com/en-us/library/cc765421.aspx.

Note: An updated version of this article for SQL Server 2005 is available at https://msdn.microsoft.com/en-us/library/aa964124(SQL.90).aspx.

Bob Beauchemin
DevelopMentor

November 2004

Applies to:
   Microsoft ADO.NET 2.0
   Microsoft .NET Framework 2.0
   Microsoft SQL Server

Summary: Learn how to apply the tracing functionality in ADO.NET 2.0 to track down bugs and bottlenecks in your applications. (13 printed pages)

Download the associated 2005TracingExamples.exe code sample.

Contents

Introduction
Setting Up Data Tracing
Running the Trace
Reading the Trace Output
Configuring Which Applications Are Traced
Using Tracing to Debug a Parameter Binding Problem
Two Key Diagnostic Scenarios
Inside Data Tracing
Conclusion

Introduction

I've been missing a good generalized built-in trace facility for data access since the days of ODBC Trace. OLE DB had many different types of tracing; two that come to mind are the Visual Studio Analyzer-compatible instrumentation and ATLTRACE (a trace macro for the ATL OLE DB templates). The issue with OLE DB and MDAC was not that there was no trace, but that there were too many separate kinds of tracing, each tied to a different evaluation mechanism. It was difficult, if not impossible, to trace down into the various layers of the data access stack and get a single trace output.

ADO.NET 2.0 and SQL Native Client (a new OLE DB/ODBC/network library feature) contain a flexible, rich, built-in data trace facility. Microsoft has instrumented all four of its .NET data providers (SqlClient, OracleClient, the OleDb bridge, and the Odbc bridge), the ADO.NET DataSet and friends, the SQL Native Client OLE DB provider and ODBC driver, and, to top it off, the SQL Server 2005 network libraries. The information in this article is meant to get you up and tracing quickly, show you how to do some cursory trace file analysis, and look at simple trace use cases. At the end of the article, I'll talk about how the tracing layer is architected to be extensible to different trace technologies, and suggest more ways in which to use it.

Trace points are already programmed into the .NET and SQL Native Client libraries; using an IL disassembler like Reflector, you may already have noticed them. The first step is configuring things so that you can get basic output.

The basic steps for using tracing are:

  1. Set up the data tracing DLL registry entry, ETW providers, and WMI schemas.
  2. Configure and run the trace itself.
  3. Harvest the trace results as a comma-separated value file.

Don't worry for now if some of the acronyms, like ETW and WMI, are unfamiliar to you. Let's go over the steps one by one and then come back to a discussion of how they work.

Note If you want to understand how data tracing works before you begin setting it up, read Inside Data Tracing that appears near the end of this article.

Setting Up Data Tracing

Set up the trace DLL registry entry: This step consists of running a registry script or manually editing the registry to hook up data tracing to its Event Tracing for Windows (ETW) provider. Currently, editing the registry is the only way to accomplish this; in the future there will likely be a Control Panel application or a configuration file mechanism. We'll have more to say about using this registry key later; for now let's just enable tracing for every data-related process on the machine.

  1. Locate the registry key HKLM\Software\Microsoft\BidInterface\Loader.
  2. Add a new string value (under the Loader key):

Name=":Path" Value="C:\WINDOWS\Microsoft.NET\Framework\v2.0.40607\AdoNetDiag.dll"

This assumes that you have the Beta 1 version of the .NET Framework 2.0 installed and that your system drive is C:. Note that the colon before Path is significant. You can also run the setup_trace.reg file included with this article.
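If you'd rather make the change from code than with regedit or the .reg script, a minimal C# sketch like the following does the same thing. It assumes the Beta 1 framework path shown above (adjust it for your installation) and must run with administrative rights because of the ACL on the key; the program name is hypothetical.

// Minimal sketch: adds the machine-wide ":Path" value that points data tracing
// at the AdoNetDiag.dll ETW adapter. Adjust the framework path for your install.
using Microsoft.Win32;

class EnableDataTracing
{
    static void Main()
    {
        using (RegistryKey loader = Registry.LocalMachine.CreateSubKey(
            @"Software\Microsoft\BidInterface\Loader"))
        {
            loader.SetValue(":Path",
                @"C:\WINDOWS\Microsoft.NET\Framework\v2.0.40607\AdoNetDiag.dll");
        }
    }
}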

Register data tracing schema: The AdoNetDiag.dll you have just registered is an adapter component that makes any library that is instrumented for data tracing look like an Event Tracing for Windows provider. You can use the providers now, but for them to show up in public provider lists and in WMI tools, you need to register the ETW providers and their WMI schemas for the events that AdoNetDiag.dll exposes. You do this with a special-format schema file called a MOF file and a utility named mofcomp. ADO.NET 2.0 provides a MOF file in the C:\WINDOWS\Microsoft.NET\Framework\v2.0.40607 directory, but, because data tracing is a work in progress, for best results you should use the MOF file that comes with this article. Run the Windows register_mof.cmd script that comes with this article; it runs the registration code. If you were to issue the command manually from the command line, it would look like this:

mofcomp adonetdiag_beta1.mof

You can check that the providers are correctly registered by listing the ETW providers using the command

logman query providers

You should see the providers that you have registered, as well as other providers that come with the OS or other products. Note that each provider is identified by a GUID. Your provider list should look something like this:

Provider                       GUID
--------------------------------------------------------------------------
*System.Data.1                 {914ABDE2-171E-C600-3348-C514171DE148}
ACPI Driver Trace Provider     {dab01d4d-2d48-477d-b1c3-daad0ce6f06b}
Active Directory: Kerberos     {bba3add2-c229-4cdb-ae2b-57eb6966b0c4}
IIS: SSL Filter                {1fbecc45-c060-4e7c-8a0e-0dbd6116181b}
IIS: WWW Server                {3a2a4e84-4c21-4981-ae10-3fda0d9b0f83}
IIS: Active Server Pages (ASP) {06b94d9a-b15e-456e-a4ef-37c984a2cb4b}
MSSQLSERVER Trace              {2373A92B-1C1C-4E71-B494-5CA97F96AA19}
Local Security Authority (LSA) {cc85922f-db41-11d2-9244-006008269001}
*System.Data.SNI.B1            {C22CFA1B-A312-354C-33F7-ACACA46CE990}
*SQLNCLI.1                     {BA798F36-2325-EC5B-ECF8-76958A2AF9B5}
Windows Kernel Trace           {9e814aad-3204-11d2-9a82-006008a86939}
*System.Data.OracleClient.1    {DCD90923-4953-20C2-8708-01976FB15287}
*ADONETDIAG.ETW                {8B98D3F2-3CC6-0B9C-6651-9649CCE5C752}
ASP.NET Events                 {AFF081FE-0247-4275-9C4E-021F3DC1DA35}
NTLM Security Protocol         {C92CF544-91B3-4dc0-8E11-C580339A0BF8}
IIS: WWW Isapi Extension       {a1c2040e-8840-4c31-ba11-9871031a19ea}
HTTP Service Trace             {dd5ef90a-6398-47a4-ad34-4dcecdef795f}
Spooler Trace Control          {94a984ef-f525-4bf1-be3c-ef374056a592}

I've changed the output a bit, adding an asterisk next to the providers that we care about. You can also check out the WMI schemas that were registered by using the WMI CIM Studio tool, which you can download as part of the WMI Administrative Tools. The WMI schemas for data tracing are pretty simple; we'll talk about them later.

Running the Trace

Running the trace consists of defining named traces and issuing ETW commands to control them, using a utility named logman (log manager). Start the trace by running the script traceon.cmd included with this article. Assuming that you are running Windows Server 2003, the command in the file looks like this:

@Logman start MyTrace -pf ctrl.guid.adonet.beta1 -ct perf -o Out.etl -ets

This command file defines a single named trace instance (MyTrace) specifying all of the providers that we just registered. The list of providers and options for the providers are specified in a separate file, ctrl.guid.adonet.beta1. The contents of this file are:

{7ACDCAC8-8947-F88A-E51A-24018F5129EF} 0x00000000  0   ADONETDIAG.ETW
{914ABDE2-171E-C600-3348-C514171DE148} 0x00000000  0   System.Data.1
{C22CFA1B-A312-354C-33F7-ACACA46CE990} 0x00000000  0   System.Data.SNI.B1
{DCD90923-4953-20C2-8708-01976FB15287} 0x00000000  0   System.Data.OracleClient.1
{BA798F36-2325-EC5B-ECF8-76958A2AF9B5} 0x00000000  0   SQLNCLI.1

The lines include a GUID for each provider, provider options, and the provider's name (watch for line wrap). We'll say more about the provider options later.

Invoking logman in this manner writes all of the events in a concise binary format to an event trace log file. These files have the suffix .etl by convention. Once you've turned the trace on, run the sample program, GetDataSet.exe, included with the download for this article. Turn off the trace by running the script traceoff.cmd. This just issues the command:

@Logman stop MyTrace -ets

You should see an approximately 150KB file named "out.etl" in the directory from which you issued the command. Because we've turned the trace providers on at each level of detail (ADO.NET providers, network calls, and responses from SQL Server if it's running on the same machine), there will be a large amount of output. We'll look at ways to filter the output later. Note that bringing up Visual Studio 2005 to run the test program may generate extra trace events when, for example, Server Explorer runs data access code.

Harvest the results as a CSV file:

The out.etl file is not in a human-readable format, unless you're one of those rare humans who likes to read binary. The ETW utilities include a basic formatter named tracerpt.exe to convert it to a comma-separated value file. To get the CSV file, run report.cmd, which issues the following command:

@TraceRPT /y Out.etl

This utility produces two files: summary.txt, a summary of the trace events captured in the session, and dumpfile.csv. These are the default file names output by tracerpt, and you can change them through command-line options. Dumpfile.csv is the file containing the information you want. This file can be browsed with Excel for now; later, you can do some further post-processing, such as loading the data into SQL Server and querying it with SQL. That's all there is to it! You now have your first trace. Let's have a look at the output.

Reading the Trace Output

The data tracing providers expose three major types of information: trace point information and provider identity, event type, and thread and timing information. Specifically, the columns of information consist of:

Event Name—the name of the data tracing event provider

Event Type—TextW or TextA

TID—thread ID

Clock-Time—Timestamp of the event

Kernel (ms)—number of milliseconds in kernel mode (CPU time)

User (ms)—number of milliseconds in user mode (CPU time)

User Data—detailed information about the trace point

Although WMI providers are permitted to expose complex schemas, the data tracing providers expose only two simple event types: TextW and TextA. TextW is for Unicode character messages and TextA is for ANSI character messages. Note that many of the data tracing events come in bracketing "begin" and "end" pairs, which makes it easy to follow the nested API calls. For example, in this trace, the User Data field looks somewhat like this:

"enter_01 <comm.DbDataAdapter.Fill|API> 1#  dataSet"
"enter_02 <comm.DbDataAdapter.Fill|API> 1#  dataSet  startRecord  
maxRecords  srcTable  command  behavior=16{ds.CommandBehavior}"
"<sc.SqlCommand.get_Connection|API> 1#"
"<sc.SqlCommand.get_Connection|API> 1#"
"enter_03 <sc.SqlConnection.Open|API> 1#"
"enter_04 <prov.DbConnectionBase.Open|API> 1#"
"enter_05 <SNIInitialize|API|SNI> pmo: 00000000{void}"
... many events deleted
"leave_05"
 "<sc.TdsParser.CreateSession|ADV> 1# created session 2"
 "<prov.DbConnectionFactory.CreatePooledConnection|RES|CPOOL> 1#  Pooled 
database connection created."
 "leave_04"
 "leave_03"

It's fairly straightforward to see that the user program called DataAdapter.Fill(DataSet), which called into a different overload of DataAdapter.Fill, which in turn called into SqlConnection.Open, and so on. Each enter_nn event has a corresponding leave_nn event. In this example, you're tracing right down into the underlying SNI (SQL Server networking interface) low-level protocol events. But what is that funny bracket format of the user data generated by the "mainstream" tracer, System.Data.1?

User Data and ADO.NET tracing

As the name implies, the content and format of the user data field are entirely at the discretion of the user (in this case, the trace provider). Calls into System.Data.dll and System.Data.OracleClient.dll follow a special format that can be easily decoded. Take, for example, this entry from the previous trace record sequence:

"enter_04 <prov.DbConnectionBase.Open|API> 1#"

This can be decoded to:

<namespace abbreviation.classname.methodname|keyword> parms

so the example above means that there is an API call to the Open method of System.Data.ProviderBase.DbConnectionBase. The reason for using abbreviated namespaces is to keep the size of the output smaller. The number followed by a pound sign (1#) in the parameter area identifies the specific instance of the DbConnectionBase object; this is helpful when you are working with a complex trace and watching many instances. Here are a couple of charts that will help you decode the user data:

Table 1. Namespace abbreviations used in .NET Trace Points

Description                           Abbreviation  Namespace
SqlClient managed provider            sc            System.Data.SqlClient
OleDb managed provider                oledb         System.Data.OleDb
Odbc managed provider                 odbc          System.Data.Odbc
Oracle managed provider               ora           System.Data.OracleClient
DataSet/DataTable/Data*               ds            System.Data
Common code                           comn          System.Data.Common
Provider base implementation classes  prov          System.Data.ProviderBase

Table 2. Categories used in .NET Trace Points

Keyword  Category
API      Public API (method, property) is called
OLEDB    Code calls an OLE DB component
ODBC     Code calls an ODBC API
SNI      Code calls SNI
ERR      Error
WARN     Warning
INFO     Information
RET      Return value, usually in the form of API|RET
THROW    A new exception is being thrown (not applicable to exceptions being re-thrown)
CATCH    Code catches an exception
CPOOL    Connection pool activities
ADV      Advanced trace points

You can use this information to post-process the trace output and look at only DataSet-related events, or choose between OleDb managed events and OLE DB native events. Or you can keep the information together for event correlation. A simple query using the LIKE operator is all you need. Data tracing is in its infancy, however, and the exact details of the format and content may change in the future. Be careful to retest any post-processing code as the feature evolves.
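As an illustration of the kind of post-processing you can do, here's a minimal C# sketch that pulls the namespace abbreviation, class, method, and keywords out of a User Data string. The regular expression and class name are assumptions derived from the format described above, not a documented contract, so treat it as illustrative only.

// Sketch only: the bracket format is informally described above and may change.
using System;
using System.Text.RegularExpressions;

class UserDataDecoder
{
    // Matches managed trace points such as "<prov.DbConnectionBase.Open|API> 1#".
    // SNI entries like "<SNIInitialize|API|SNI>" have a different shape and are skipped.
    static readonly Regex tracePoint = new Regex(
        @"<(?<ns>\w+)\.(?<cls>\w+)\.(?<method>\w+)\|(?<keywords>[^>]+)>");

    static void Main()
    {
        string userData = "enter_04 <prov.DbConnectionBase.Open|API> 1#";
        Match m = tracePoint.Match(userData);
        if (m.Success)
        {
            Console.WriteLine("namespace: {0}", m.Groups["ns"].Value);       // prov
            Console.WriteLine("class:     {0}", m.Groups["cls"].Value);      // DbConnectionBase
            Console.WriteLine("method:    {0}", m.Groups["method"].Value);   // Open
            Console.WriteLine("keywords:  {0}", m.Groups["keywords"].Value); // API
        }
    }
}

A filter built this way can drive the same kind of selection you would otherwise do with a LIKE query after loading the CSV into SQL Server.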

Configuring Which Applications Are Traced

At the beginning of the article, I mentioned configuring the :Path string value of the Loader registry key. Manual configuration is needed only because data access tracing is in its infancy. Specifically, programmers who use data tracing should not depend on this key being manually configurable in future releases. It is already protected by an ACL (access control list) that makes it updatable only by an administrator; in future releases it may be read-only.

With that warning out of the way: configuring only the :Path value makes tracing available to all applications running on a given machine. For example, if a programmer is running a SQL Server 2005 instance and a data access client on the same machine, turning on SNI tracing will trace SNI calls from both sides. That can generate some large output, although in certain use cases this is exactly the type of output you want. You can also configure specific applications to be traced when there is no :Path value, or to be excluded from tracing when there is a :Path value.

If there is no :Path value, only applications which are specifically configured will be traced. You can configure applications to be "traceable" by specifying a REG_SZ or REG_EXPAND_SZ entry with the full path to the program's main executable file as the value name and the full path to AdoNetDiag.dll as the value. You can also configure an entire directory to be traceable by using the path name and the * wildcard.

If there is a :Path value, you can restrict applications or applications in a specific directory from being traced by adding a REG_SZ or REG_EXPAND_SZ entry and the value of : (single colon). Wildcards in the directory name are allowed. For example, the name field containing the directory name (C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\*) along with a : value would keep all programs in the SQL Server Binn directory (such as SQL Server, SQL Trace, etc.) from showing up in a trace. If I'm running SQL Server on the same machine as my application, this is what I want most often. Remember that configuring the registry entries just names the data tracing provider DLL (at this point, there is only AdoNetDiag.dll to choose from); it does not turn on the trace. This is a particularly useful feature, as you can configure and set up data tracing before a long-running program (like a Web server or Windows service) executes without turning it on. You can then turn the trace on and off using logman, while the program is running, without disturbing it. There's no performance hit from the trace when you're not using it. If you're tracing a network problem, you don't have to run SQL Server with any special trace flags to get detailed information.
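For example, with the machine-wide :Path value in place, excluding the SQL Server Binn directory might look something like the following entries under the Loader key. The paths are illustrative; substitute the paths from your own installation.

Name=":Path" Value="C:\WINDOWS\Microsoft.NET\Framework\v2.0.40607\AdoNetDiag.dll"
Name="C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\*" Value=":"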

You can also configure which provider's information is traced to which file and, to a certain extent, control exactly what is traced. You can trace the output from all four providers to a single file, as we did with the traceon.cmd file, or separate them into one file per provider. You do this by creating named traces with the logman utility. Here's an example that creates four traces:

logman create trace test1 -p System.Data.1
logman create trace test2 -p System.Data.OracleClient.1 
logman create trace test3 -p SQLNCLI.1
logman create trace test4 -p System.Data.SNI.B1

In addition to using logman from the command line, you can also use the Performance Logs and Alerts MMC snap-in to configure, run, and stop the trace. Describing how to use this snap-in is beyond the scope of this article.

You have some control over what is traced by manipulating the bits in the logman control GUID file. You do not need a control GUID file if you use a single provider per trace and take the defaults, as the previous examples show, but traceon.cmd used one. To refresh your memory, here's a single line from ctrl.guid.adonet.beta1:

{914ABDE2-171E-C600-3348-C514171DE148} 0x00000000  0   System.Data.1

The information in this line consists of:

Provider GUID—which ETW provider

Control Bits—0x00000000, in this case

Control Value—0, in this case

Provider Name—Required by ETW, but ignored by the logman utility

By setting bits in the Control Bits and Control Value fields, you have a macro-level mechanism for pre-filtering the events. The valid values are:

0x0002  Regular tracepoints
0x0004  Execution flow (function enter/leave)
0x0080  Advanced output

There is also a bit that has special meaning to System.Data.1 only:

0x1000 Connection Pooling specific trace

The bits can be "OR'd" together, of course. If 0x00000000 is specified, then 0x00000006 is assumed.

There are two possible non-default values that can be set in the control value:

128 - Convert Unicode text to ASCII text (reduces etl file size)

64  - Disable tracing in this component

Although the Control Bits are a bitmask, the Control Value must be set as a number. Note that setting these control bits does not provide a granular mechanism for configuring individual components, but is meant for making it easy to filter types of events without post-processing the CSV file yourself.
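For example, a line that asks System.Data.1 for regular tracepoints, execution flow, and advanced output (0x0002 | 0x0004 | 0x0080 = 0x0086), with Unicode text converted to ASCII, might look like this. The line is illustrative only; the GUID is the System.Data.1 GUID registered earlier.

{914ABDE2-171E-C600-3348-C514171DE148} 0x00000086  128   System.Data.1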

Using Tracing to Debug a Parameter Binding Problem

Now that we've gone through the quick overview of tracing, I'd like to present a simple use case. I'd often use ODBC trace to do problem determination when an application would "eat" a rich error message and produce a polite but fairly "information-free" message. Such application code would look like this:

string s = GetConnectionStringFromConfigFile();
using (SqlConnection conn = new SqlConnection(s))
using (SqlCommand cmd = new SqlCommand(
  "select * from authors where au_id = @auid", conn))
{
  // the error is hardcoded here but could have come from suboptimal
  // editing in a graphical user interface
  cmd.Parameters.Add("@auid", SqlDbType.Int);
  cmd.Parameters[0].Value = 123456789;
  SqlDataReader rdr = null;
  try {
    // some code that could fail goes here
    conn.Open();
    rdr = cmd.ExecuteReader();
    while (rdr.Read())
      Console.WriteLine(rdr[0]);
    rdr.Close();
  }
  catch (Exception e) {
    MessageBox.Show("polite error message");
  }
}

In this case, the error was caused by a parameter type mismatch, and the person diagnosing the error might not have access to the source code of the program. Turning on the trace, we'll see output like this:

"enter_01 <sc.SqlCommand.ExecuteReader|API> 1#"
 "<sc.SqlCommand.get_Connection|API> 1#"
 "<sc.SqlCommand.get_Connection|API> 1#"
 "<sc.TdsParser.CreateSession|ADV> 1# created session 3"
 "<sc.TdsParserSessionPool.CreateSession|ADV> 1# adding session 3 to pool"
 "<sc.TdsParserSessionPool.GetSession|ADV> 1# using session 3"
 "<sc.TdsParser.GetSession|ADV> 1# getting session 3 from pool"
 "<sc.SqlCommand.ExecuteReader|INFO> 1#  Command executed as RPC."
 "<sc.SqlCommand.get_Connection|API> 1#"
 "leave_01"
 "enter_01 <sc.SqlDataReader.Read|API> 1#"
 "<sc.SqlError.SqlError|ERR> infoNumber=245  errorState=1  errorClass=16  
errorMessage='Syntax error converting the varchar value '172-32-1176' to a 
column of data type int.'  procedure=''  lineNumber=1"
 "leave_01"

This shows us directly that there is a parameter type mismatch. The sample and the trace file are provided in the article code. Note that the trace file is much more compact in this case because we're tracing with only the System.Data.1 provider.
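Armed with that information, the fix is to bind the parameter with the type of the underlying column. As a sketch (the au_id column in the pubs authors table is a varchar(11), as the error message shows; the value is just the one that appears in the trace):

// Bind @auid with the column's actual type instead of int.
cmd.Parameters.Add("@auid", SqlDbType.VarChar, 11);
cmd.Parameters[0].Value = "172-32-1176";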

Two Key Diagnostic Scenarios

Sometimes, data tracing can be used to illuminate a problem that might have taken days or weeks to find otherwise. One example is diagnosing a problem with reclamation of resources. On occasion, a program logic error might result in the program eventually running out of connections or overloading the connection pool. For example, beginning a transaction and having a logic path where COMMIT or ROLLBACK is never called would result in the connection remaining in the transacted connection pool until the transaction times out. This can produce some strange connection pool behavior that baffles the application programmer (and the diagnostician). A related scenario could be simply failing to close connections, leaving the pooler and timeouts to do that work—this is a "sub-optimal" way to close connections, to put it mildly. Watching the connection pool activity and correlating it with transaction activity by running SELECT statements against a post-processed trace can be a great help in troubleshooting these knotty problems. The "enter-leave" pairs are particularly enlightening in this case, as are the transaction timeout trace entries.
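To make the contrast concrete, here's a minimal sketch of the pattern that avoids both problems. The connection string and command text are placeholders; the point is simply that the transaction always ends and the connection is always closed, on every code path.

// Minimal sketch: connectionString and commandText are hypothetical placeholders.
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlTransaction tx = conn.BeginTransaction())
    using (SqlCommand cmd = new SqlCommand(commandText, conn, tx))
    {
        try
        {
            cmd.ExecuteNonQuery();
            tx.Commit();
        }
        catch
        {
            tx.Rollback(); // without this, the connection sits in the transacted
            throw;         // connection pool until the transaction times out
        }
    }
} // Dispose closes the connection and returns it to the pool promptly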

Another scenario is when an application seems to have abnormally long connection times. Because of the way the network libraries can be configured, it's possible for a connection to SQL Server to be attempted through TCP/IP, for example, and then through named pipes or other protocols on the client protocol list when TCP/IP fails. If the first protocol in the list fails because of a timeout, this can look like a slow connection or slow response time. You can watch this in graphic detail using data tracing, because the trace of SqlClient and the other managed .NET data providers is integrated with the trace from the unmanaged stack and the network. The whole progression of events will even be bracketed in "enter-leave" blocks for you, with timestamps that let you see where the Open() call is spending its time.

We've barely scratched the surface of uses for this complex and powerful feature. Some thoughts for your future experimentation are:

  1. Integrating tracing into unit testing
  2. Comparative analysis of DataSet and DataReader calls to determine where DataSet is spending its time
  3. Using data tracing in conjunction with the ASP.NET and other ETW providers
  4. Doing a combined data trace and SQL Server trace using the SQL Server ETW provider
  5. Diagnosing other network protocol problems

Inside Data Tracing

You now have a cookbook way to set up, run, and interpret traces in the Microsoft data access stacks. But, aside from issuing command line scripts, what exactly is going on?

Data tracing is itself based on a provider model. ADO.NET data providers and other data access code use standard APIs (which themselves use standard trace hooks) to feed trace information into the model, and, in the future, multiple data trace providers may be built. Currently only AdoNetDiag is available, but you could imagine components that pre-filter events at a granular level, or that use a different trace output system, such as OutputDebugString, or that write output directly into SQL Server for ease of searching and querying. You could even hook data tracing up to the .NET System.Diagnostics.Trace facility.

ADO.NET 2.0 and SQL Native Client come with an adapter to the Event Tracing for Windows (ETW) system. You can trace from the managed stack into the unmanaged stack and back without a hitch. ETW is a high-performance tracing system that was originally introduced to implement kernel-level tracing for device driver writers. Here's a high-level explanation of ETW using data tracing as an example.

What is ETW?

Event Tracing for Windows is meant to provide low-overhead tracing, as compared with Windows Performance Monitor. ETW usually takes up no more than 5 percent of the CPU and can log up to 20,000 events per second. It's fast enough to enable tracing in real time. ETW uses a provider-based model; in this case, a provider is a system or application component that sends events to the event system. Some examples of event providers are Active Directory, IIS, and ASP.NET. ADO.NET and SQL Native Client data tracing registers five ETW event providers:

  1. System.Data.1—ADO.NET providers and classes in System.Data.dll
  2. System.Data.OracleClient.1—OracleClient provider in System.Data.OracleClient.dll
  3. System.Data.SNI.B1—SNI from System.Data (this is changed to System.Data.SNI.1 in .NET 2.0 Beta 2)
  4. SQLNCLI.1—SQL Native Client providers and SNI from SQL Native Client
  5. ADONETDIAG.ETW—provides events from the ETW adapter itself

The ETW provider logs include a timestamp with each event. When you start the trace, you can specify a high-resolution or low-resolution timestamp. In our traceon.cmd file we've chosen a high-resolution timestamp using the -ct perf option. In the "single-file" scripts above, we've chosen the default (low-resolution) timestamp. ETW chooses high performance over ease of use. Formatting ETW traces with tracerpt.exe produces a cursory decoding. The format of the ETL file is documented and the schemas for the individual trace providers are recorded in WMI, so programmers are welcome to build their own specialized formatters. A nice feature of using ETW is that the trace that you produce can be used in conjunction with an ASP.NET trace or, for that matter, a low-level OS kernel trace. All of the events can be logged on a per-provider basis to a single file for correlation, or to separate files. Information on Event Tracing for Windows is available on the Performance Best Practices At A Glance page of the patterns & practices site.

ETW output can be consumed by a variety of tools, and if none of the tools suits your specific needs, you can build your own. One example of such a consumer tool is Logparser. Logparser can consume not only output from ETW, but also other inputs like IIS log files and Windows event logs. Logparser then lets you query your events using a SQL-like syntax. Logparser is available as part of the IIS 6.0 Resource Kit Tools.

Conclusion

In the ADO.NET 2.0 release, tracing has been added at the provider level, at the network protocol level, and to some of the components used in database access, such as the ADO.NET DataSet. The model's flexibility will accommodate different trace engines in the future, and the API is straightforward, so provider writers and user application writers can add tracing to their products. The current implementation uses ETW and provides a low-overhead, minimally disruptive means of gathering the trace point events that are exposed. Different ways of post-processing the trace data, or filtering the trace at execution time, will allow tracing to be used by such diverse groups as in-house software support diagnosticians and Microsoft Product Support Services.

The integration of data tracing with current trace-gathering facilities, as well as the extensibility to future trace-gathering facilities, makes for a useful and long-lived tool in the troubleshooter's and administrator's toolbox. Happy tracing!

Bob Beauchemin is an instructor, course author, and database curriculum course liaison for DevelopMentor. He has over twenty-five years of experience as an architect, programmer, and administrator for data-centric distributed systems. He's written articles on ADO.NET, OLE DB, and SQL Server for Microsoft Systems Journal, SQL Server Magazine, and others, and is the author of A First Look at SQL Server 2005 for Developers and Essential ADO.NET.