The Quest for ASP.NET Scalability

 

Michèle Leroux Bustamante
http://www.idesign.net

May 2004

Applies to:
Microsoft® ASP.NET
Microsoft® Windows® 2000 Server
Microsoft® Internet Information Services (IIS) 5.0
Microsoft® SQL Server™ 2000

Summary: In this article, Michèle looks at some of the architectural and design decisions that may affect ASP.NET application scalability. In addition, she looks at how you can use Enterprise Services and MSMQ to reduce the effect of those scalability problems. (15 printed pages)

Download the sample code.

Contents

Introduction
How Many Threads Does It Take to Slow an Application?
How Do I Scale Thee...Let Me Count The Ways
A Little Case Study in Scalability Architecture
Conclusion
Related Books

Introduction

At the height of the dot-com boom in the mid-1990s, many companies burst onto the scene as Application Service Providers (ASPs) hoping to capitalize on the wave of Internet success stories, and rake in some loot. Aside from the unfortunate market collapse that followed, ASPs had other issues to deal with, such as training staff to effectively build and manage a secure, reliable and highly available operation. Today, Web-enabling architecture is prevalent, particularly since Web services have become a staple for most applications, and the growing pains of ASPs past are feeding businesses everywhere with a hunger for successful implementations of 24x7 applications. As Service-Oriented Architecture (SOA) themes spread across organizations, applications are now reaching wider audiences than ever before. Gone are the days when you could sling some code, ship product, and cross your fingers.

Consequently, most developers are likely to touch some part of an enterprise application that must service an unpredictable number of connections, sessions and page requests. It has never been more important to gain well-rounded programming skills that include component design, architecture planning, and clear perspective on what makes an application scalable, available and secure. Today it is what will often make or break the success of an operation, or even a business. But, you know what they say, "If you're not part of the solution, you're part of the problem." So allow me to provide you an overview of some scalability best practices, while describing ideas for scaling your Microsoft® ASP.NET applications using Enterprise Services, COM+ and MSMQ.

How Many Threads Does It Take to Slow an Application?

Before you begin writing a line of code for an enterprise Web application there are a few things you should understand about the ASP.NET processing model, how it interacts with IIS, and how many threads are running as each unique page request is processed. This information will help you determine how to architect your application, when to spawn threads that don't draw from the thread pool, when to use asynchronous messaging, and when to move processes to another physical tier in the system.

Let's take the simple case of an ASP.NET application that is entirely hosted on a single physical tier. Consider these two configurations:

Web Server Configuration                              Server Applications
Microsoft® Internet Information Services (IIS) 5.0    Microsoft® Windows® 2000 Server, IIS 5.0, ASP.NET, Microsoft® SQL Server™ 2000
Microsoft® Internet Information Services (IIS) 6.0    Microsoft® Windows Server™ 2003, IIS 6.0, ASP.NET, SQL Server 2000
Figure 1 shows how IIS 5.0 deals out requests, and how processes, application domains and threads manage the round trip by default. Figure 2 shows the same for IIS 6.0.

Figure 1. Requests with IIS 5.0

Figure 2. Requests with IIS 6.0

Under the IIS 5.0 configuration, ASP.NET resource requests are received by inetinfo.exe and passed to the ASP.NET worker process (aspnet_wp.exe) to handle. IIS 6.0 receives requests through the kernel-mode http.sys driver and forwards them directly to the w3wp.exe worker process. In both cases, the result is a worker process that hosts the ASP.NET runtime. Applications run within an application domain that owns a pool of HttpApplication objects to service requests. The HttpApplication object is responsible for loading any configured HTTP modules, and for loading the HTTP handler that processes an individual request. The worker process owns a thread pool from which threads are drawn to process concurrent requests, each handled by one of the pooled HttpApplication objects. When requests use up all available threads (the default ceiling is typically 25 threads per CPU), the pool grows up to its maximum configured size, after which requests are queued. Each IIS application gets its own application domain with its own HttpApplication pool; however, all application domains run within the ASP.NET worker process and share its thread pool. IIS 5.0 hosts at most a single worker process per CPU, but greater scalability is achievable with IIS 6.0, since more worker processes can be allocated even on a single CPU. In either configuration, enough concurrent requests will eventually exhaust the thread pool and cause requests to queue. You can squeeze additional throughput out of IIS, Windows Server, and ASP.NET with configuration tweaks, but I'll focus here on just the processes and threads that service requests.
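The thread ceilings and queue limits described above are governed by machine.config. As a rough illustration only (the exact defaults vary between ASP.NET versions, so verify against your own machine.config before tuning), the relevant settings look like this:

```xml
<!-- machine.config (ASP.NET 1.x). The processModel element controls the
     ASP.NET worker process thread pool; maxWorkerThreads and maxIoThreads
     are per-CPU ceilings. requestQueueLimit caps how many requests may
     queue before ASP.NET returns "Server Too Busy". Values shown are
     illustrative, not authoritative defaults. -->
<system.web>
  <processModel
      enable="true"
      maxWorkerThreads="25"
      maxIoThreads="25"
      requestQueueLimit="5000" />
  <httpRuntime appRequestQueueLimit="100" />
</system.web>
```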

Assuming that the application relies on a database, the thread that handles an HTTP request may also communicate with the database engine, which usually supports its own connection pooling and queuing mechanisms to handle load. The database server demands its own attention to scalability, which means monitoring inserts per second and other statistics such as insert footprint, optimizing indexes, tweaking connection pooling configuration, and other important activities best performed by an experienced DBA.

Across each of these process boundaries (Web server, application instance and database server), and with the many threads each of them spawns to handle requests, you can probably imagine that at some point performance will degrade unless we find ways to distribute the workload.

How Do I Scale Thee...Let Me Count The Ways

Scalability is as much about network topology and hardware as it is about software architecture. What great software architecture brings to the table is the ability to squeeze the most power out of a single application stack. Now that you understand more about the number of processes and threads that comprise a single round trip, I'll talk a bit about how we can gain better performance and improved reliability from that processing model.

Tiers and Physical Redundancy

An enterprise solution should have at least two tiers. In a two-tier scenario, the Web server tier could host...well...the Web server (usually IIS), the Web application, and possibly business and database access components. The database server tier allows you to offload the heavy lifting of the database engine to another, usually more powerful, piece of equipment. In reality, two tiers are rarely enough for an enterprise solution that takes a beating. Chances are there will be some business processing that puts greater demand on system resources, such as file I/O operations, heavy number crunching, and integrated system calls. So, there will traditionally be at least three tiers, allowing the Web server to focus on simpler requests and delegating business component interactions to a distributed application tier.

Regardless of the application architecture, every tier should have at least one redundant equivalent. That means, as shown in Figure 3, everything from firewall, routers and network load balancers to the machines deployed at each tier.

Figure 3. Fully redundant system

Redundancy combined with hardware load balancing in active-active configuration (meaning, all redundant machines are available to process sessions) makes it possible to delegate requests between several machines. With active-passive configuration, a single machine handles all requests and a passive machine waits to be activated upon the primary machine's failure. This can still satisfy availability goals, however it obviously doesn't help to reduce load on the primary machine.

Any physical tier of the network architecture should also be configured so that additional machines can be added to support horizontal scaling at that tier. For example, if the application tier (the heart of the application) begins to reach maximum thresholds for memory, CPU or other expensive resources, adding machines at this tier can increase overall throughput. Individual tiers may have different horizontal scaling thresholds because of the type of resources they consume, and because of the type of equipment at that tier. For example, the database tier typically uses servers with more horsepower (additional memory, more processors, and so on); they are thus more costly, and less likely to be candidates for widespread horizontal scaling. An application server that performs file I/O or heavy number crunching may offload this heavy work to one or more additional tiers that can be easily scaled to meet peak load demands. Examples would be offloading components that generate large reports, create and persist documents, or send SMTP mail messages.

Addressing Slower Page Loads

Network architecture aside, you want your application to serve up page requests as quickly as possible. By default, each page request is handled synchronously by a thread drawn from the application domain's thread pool, which means requests may queue up once that pool is exhausted. Slower page requests that hog a pool thread inevitably drag down average page load statistics for the Web server, but if a particular page is known to require additional server-side processing during the round trip, you can offload that page's processing to a new thread, separate from the thread pool. Each Page object implements IHttpHandler and is by default processed synchronously, but you can implement the IHttpAsyncHandler interface on Page objects that are draining your page load statistics. You would write code to invoke page processing on a custom thread, rather than one from the thread pool, which lets any queued requests within the application domain grab a pool thread and go! See this MSDN article for more information on asynchronous Page handlers.
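To make the pattern concrete, here is a minimal sketch of such a handler. The class and member names are my own invention, and a production implementation would also need a real AsyncWaitHandle; treat this as the shape of the solution, not finished code:

```csharp
using System;
using System.Threading;
using System.Web;

// Illustrative handler that does its slow work on a custom thread so
// the ASP.NET pool thread is released back to service other requests.
public class SlowReportHandler : IHttpAsyncHandler
{
    public bool IsReusable { get { return false; } }

    // The synchronous entry point is not used for asynchronous handlers.
    public void ProcessRequest(HttpContext context)
    {
        throw new InvalidOperationException();
    }

    public IAsyncResult BeginProcessRequest(
        HttpContext context, AsyncCallback cb, object state)
    {
        SimpleAsyncResult result = new SimpleAsyncResult(cb, state);
        ReportJob job = new ReportJob(context, result);

        // Custom thread: the pool thread that called us returns at once.
        Thread worker = new Thread(new ThreadStart(job.Run));
        worker.Start();
        return result;
    }

    public void EndProcessRequest(IAsyncResult result)
    {
        // Completion work (if any) happens here, back on a pool thread.
    }
}

// Carries the request state to the worker thread (C# 1.x style).
class ReportJob
{
    private HttpContext context;
    private SimpleAsyncResult result;

    public ReportJob(HttpContext ctx, SimpleAsyncResult res)
    {
        context = ctx;
        result = res;
    }

    public void Run()
    {
        // ...the long-running, server-side work goes here...
        context.Response.Write("report generated");
        result.Complete();
    }
}

// Minimal IAsyncResult for illustration only.
class SimpleAsyncResult : IAsyncResult
{
    private AsyncCallback callback;
    private object state;
    private bool completed;

    public SimpleAsyncResult(AsyncCallback cb, object st)
    {
        callback = cb;
        state = st;
    }

    public void Complete()
    {
        completed = true;
        if (callback != null)
            callback(this);   // tells ASP.NET to call EndProcessRequest
    }

    public object AsyncState { get { return state; } }
    public bool CompletedSynchronously { get { return false; } }
    public bool IsCompleted { get { return completed; } }
    public WaitHandle AsyncWaitHandle
    {
        get { throw new NotSupportedException(); }
    }
}
```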

In a similar fashion, Web methods can be invoked asynchronously to release the thread pool during Web service round trips. This is done by implementing Web methods with an asynchronous design pattern (see this article on MSDN). But the ability to release the thread pool, only to spawn other custom threads, is not guaranteed to measurably improve performance. Pages and Web method calls that rely on file I/O, database access, or other potentially expensive activities shouldn't automatically be made asynchronous. You should address the application architecture first and monitor page load statistics before going this route. The reason is that these asynchronous patterns still spin up another thread within the worker process, and the resulting context switching impacts performance as well. The limited gains from merely releasing the thread pool can instead be realized with asynchronous messaging, or by offloading component activities synchronously or asynchronously to another tier in the system stack.
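For Web services, the asynchronous pattern is a Begin/End method pair that ASP.NET combines into a single Web method by naming convention. This sketch uses hypothetical service and method names; it delegates the slow work to a delegate's BeginInvoke, which notably draws from the same CLR thread pool, illustrating why the gains can be limited:

```csharp
using System;
using System.Runtime.Remoting.Messaging;
using System.Web.Services;

// Hypothetical service showing the Begin/End asynchronous Web method
// pattern; ASP.NET exposes the pair as one Web method named GetQuote.
public class QuoteService : WebService
{
    private delegate string QuoteLookup(string symbol);

    // Stand-in for a slow database or remote call.
    private string LookupQuote(string symbol)
    {
        return symbol + ": 42.00";
    }

    [WebMethod]
    public IAsyncResult BeginGetQuote(
        string symbol, AsyncCallback cb, object state)
    {
        // BeginInvoke runs LookupQuote on a CLR thread-pool thread --
        // the same pool ASP.NET draws from, hence the limited gains
        // described above.
        QuoteLookup lookup = new QuoteLookup(LookupQuote);
        return lookup.BeginInvoke(symbol, cb, state);
    }

    public string EndGetQuote(IAsyncResult ar)
    {
        // Recover the delegate from the runtime's AsyncResult and
        // harvest the return value.
        QuoteLookup lookup = (QuoteLookup)((AsyncResult)ar).AsyncDelegate;
        return lookup.EndInvoke(ar);
    }
}
```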

Reducing Bottlenecks

Behind every page load lurk potential processing bottlenecks. While you're thinking through application workflows, you have an opportunity to get the application architecture right, to avoid performance penalties and to simplify component distribution changes and maintenance. Here is a list of some application processing scenarios, with high-level architectural approaches that can increase performance and reliability therein:

  • Heavy Database Load. A qualified DBA is very much needed to tailor performance tuning activities to meet the needs of the application (and believe me, it is not easy to find a fabulous DBA!). However, it is also possible to mitigate performance concerns by deploying the database engine on a separate tier, with the right hardware configuration including mirrored drives (shown in Figure 3).
  • Long Running Operations. Operations such as database queries or inserts involving large result sets, heavy number crunching, and remote invocations can delay responses and cause requests to back up. These activities are candidates for asynchronous messaging. Memory is volatile and servers can fail; these are harsh realities. To mitigate the risk of losing request data during a round trip, to ensure reliable processing of that data, and to offload the work from the ASP.NET worker process, you can employ Microsoft Message Queuing (MSMQ) easily from the .NET Framework with System.EnterpriseServices.
  • Resource Intensive Features. Sometimes we have to hit the file system; for example, generating reports or PDF documents may ultimately require persisting file output. Number crunching can also be resource intensive, consuming large amounts of memory and CPU cycles. Both are examples of resource-intensive features that may need to be offloaded to another physical tier. By once again employing MSMQ and COM+, with the help of components available in the System.EnterpriseServices namespace, you can offload work to other tiers in a reliable architecture.
  • Server Down Conditions. Yes, it happens: servers go down, and MSMQ can help you recover in several ways. First, messages can be recorded (serialized) so that if a server goes down, upon restart those messages are ready and waiting to be replayed. Second, if a queue is trying to invoke a component on another tier that is currently unavailable, or an exception occurs, messages are passed through a series of retry queues before finally coming to rest in the dead-letter queue. Of course, there are a number of ways to configure this, but the thrust is that no message is lost.
  • Distributed Transactions. With all of this talk about application tiers and component architecture, I would be remiss if I left out the need to manage distributed transactions. Luckily, COM+ components have built-in capabilities that leverage the Microsoft Distributed Transaction Coordinator (DTC).
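Beneath the queued-component plumbing discussed later, MSMQ itself is exposed in .NET through the System.Messaging namespace. As a minimal sketch of sending and receiving a message on a private queue (the queue path and payload here are invented for illustration):

```csharp
using System;
using System.Messaging;

// Minimal System.Messaging sketch. Send returns as soon as the message
// is durably queued, which is what decouples the caller from the
// long-running work.
class QueueDemo
{
    const string QueuePath = @".\private$\fileuploads"; // hypothetical

    static void Main()
    {
        // Create the private queue on first use.
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        using (MessageQueue queue = new MessageQueue(QueuePath))
        {
            queue.Formatter =
                new XmlMessageFormatter(new Type[] { typeof(string) });

            queue.Send("upload-request: report.pdf", "UploadRequest");

            // A listener (here, the same process) drains the queue;
            // Receive blocks until a message arrives or the timeout hits.
            Message msg = queue.Receive(TimeSpan.FromSeconds(5));
            string body = (string)msg.Body;
        }
    }
}
```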

By employing the right network architecture and equipment, combined with some combination of multithreading, message queuing, distributed application processing and loosely coupled events, your application has the potential to scale better and provide the kind of reliability customers expect.

In the remainder of this article, I will give you an overview of a sample application I developed that employs some of these concepts in applied scenarios. Consider this a starting point to tickle your interest in solving some of the scalability and reliability problems I have discussed so far with sound architecture and component design.

A Little Case Study in Scalability Architecture

Theoretical best practices in application scalability for ASP.NET are easy to recommend, but when it comes down to it, you have to evaluate the needs of each application to determine where best to employ the concepts I've been discussing. With that in mind, I bring you the sample file upload ASP.NET application. This application is simple enough not to confuse you with reams of detailed application code, yet has a set of easily identifiable needs for current and future scalability, so that my use of Enterprise Services is not altogether contrived.

The sample is an ASP.NET application that supplies a file upload page allowing users to upload a file with some descriptive information. The application allows users to upload files through a Web browser, where the file is persisted, and a record capturing the details of the upload activity is inserted into a FileTransactions table. Additional details supplied with the file upload, such as title and description, are recorded in a corresponding record in a FileUploads table. The architecture is shown in Figure 4.

Figure 4. Architecture of FileUpload application

Requests are processed synchronously, and business components live on the same tier to handle file system and database interaction. In the following sections, I'll discuss how several features of Enterprise Services are employed to increase scalability and reliability of this application. If you are new to MSMQ and COM+, consider this a high-level introduction of their value and application to solve scenarios discussed in this article. These scenarios are:

  • Letting long-running activities run asynchronously with MSMQ.
  • Distributing activities across tiers using COM+.
  • Ensuring system consistency across tiers with COM+ transactions.

Asynchronous Messaging, Built-In

MSMQ is a more scalable and reliable vehicle than multithreading for asynchronous activities. In part, this is because queued messages can be serialized to disk, and are thus fault tolerant; but let's not forget that queues can also invoke remote components, and hold messages offline while those components or systems are unavailable.

In the case of this code sample, let's say the calling client (the Web page) doesn't care how long it takes to upload the file, and need not be notified when the job is completed. After a file is uploaded it must be saved to disk, and a database record inserted with additional details. This is where I implemented asynchronous messaging with MSMQ.

The FileUploadMgr component is registered as a COM+ application, configured to support message queuing. When the COM+ application is registered, a default private queue (along with retry and dead-letter queues) is created for the application, which can be viewed from the Computer Management MMC snap-in. The UploadMgr class inherits System.EnterpriseServices.ServicedComponent, and is automatically configured as a registered component for the application. You can supply metadata that describes the COM+ application and component attributes, so that registration can use it to configure them. I'll explain some of the .NET attributes that I applied to each registered component and its assembly.

In the assemblyinfo.cs file of each project in the sample, the following attributes are applied to the assembly:

[assembly: ApplicationName("FileUploadApp")]
[assembly: Description("Application demonstrates distributed application to receive uploaded files and record them as upload transactions.")]
[assembly: ApplicationActivation(ActivationOption.Server)]
[assembly: ApplicationQueuing(Enabled = true, QueueListenerEnabled = true)]

This metadata configures the COM+ application name and description, while also setting the application to run in a server process, and enabling a message queue listener and queue processing. Because these attributes are set in all three assemblies, components are registered under the same COM+ application.

Note Common Enterprise Services questions including those about registration are answered on GotDotNet.com in this FAQ: https://www.gotdotnet.com/team/xmlentsvcs/esfaq.aspx.

You can also avoid manually editing COM+ configuration settings by including metadata on other types. Registered components inherit ServicedComponent, and also implement a queuing interface if message queuing is supported for the component. Queued components implement an interface decorated with the InterfaceQueuingAttribute. This interface must not violate the requirements of a queued component (for example, return values and out/ref parameters are not supported).

The following are the applicable definitions that make UploadMgr a queued component:

[System.EnterpriseServices.InterfaceQueuing]
public interface IUploadMgr
{…}

public class UploadMgr : ServicedComponent, IUploadMgr
{…}

You can use the Component Services configuration snap-in, but the following batch commands properly install FileUploadMgr in the GAC and then register it for COM+:

gacutil.exe /i fileuploadmgr.dll
regsvcs.exe /c fileuploadmgr.dll

FileUploadMgr is the primary business component that the Web form interacts with to upload the file. When the user posts back to the Web form, the following code queues a request to UploadMgr.UploadFileInfo():

         FileUploadMgr.IUploadMgr obj;
         obj=(FileUploadMgr.IUploadMgr)
            Marshal.BindToMoniker("queue:/new:FileUploadMgr.UploadMgr");
         obj.UploadFileInfo(this.txtTitle.Text, this.txtDescrip.Text, 
            filename, data);
         Marshal.ReleaseComObject(obj);

BindToMoniker is a utility function that returns the object referenced by the supplied moniker. In this case, the moniker invokes the COM+ component registered with the progid FileUploadMgr.UploadMgr. The object is cast to its IUploadMgr interface, which supports queuing, and with that the call to UploadFileInfo() is queued and the round trip can return.

The result is faster response time for the page, but the queue also provides reliable storage of all upload transactions, to ensure no messages are lost while awaiting processing. The COM+ application has a listener configured to play messages from the queue, which will instantiate the UploadMgr component and invoke UploadFileInfo(). If the system becomes unstable, the queue can be processed when the machine is restarted, picking up where it left off. Furthermore, if an exception occurs during the method call, retry queues are available to attempt the call repeatedly until the message finally comes to rest in the dead-letter queue, where administrators can take appropriate action.

In short, the benefits to this application architecture include:

  • Response time is not affected by long running operations.
  • Messages are never lost and can be replayed for success.
  • COM+ components can be moved to another physical system tier and still function through a local message queue, without impacting application code.

If you preferred to have this functionality invoked synchronously, you could still gain performance benefits by invoking the COM+ component directly, configured on another tier to offload work from the Web server and reduce file I/O contention.

Distributed, Not Distributed, It's All The Same?

One of the advantages of registering COM+ components to invoke layers of the application architecture is the ability to distribute those components in different topologies to meet performance requirements. Figures 5 and 6 demonstrate the transition from single-tier invocation of the sample's components to moving the business processing components to a middle application tier.

Figure 5. Business processes in one tier

Figure 6. Business processes in separate tier

If scalability demands require distributing file I/O, database, and other business logic to a middle application tier to balance workload, the same COM+ components can be deployed remotely and invoked through a local message queue. In fact, it would also be possible to distribute file I/O to yet another physical tier if it needs to scale horizontally at a different rate than other layers in the system stack. Designing applications to support this type of flexibility up front means that moving from the architecture in Figure 5 to that in Figure 6 does not require a lengthy development and QA cycle before it can go live on production servers and begin reaping benefits.

Transactions Made Easy, Sort Of

Aside from the scalability benefits of implementing message queuing and registered components with Enterprise Services, transaction support also makes it possible to coordinate certain activities across components as part of an atomic transaction, even if those components are distributed. The sample leverages transactions to coordinate updating the database with successful archival of the file. UploadFileInfo() performs the following actions:

  1. Save a "Pending" transaction to the FileTransactions table with a unique transaction identifier for the upload.
  2. Archive the uploaded file to the file system through the FileUploadMgr component.
  3. If #2 is successful, record the file upload details to the FileUploads table, referencing the same unique transaction identifier from the initially created FileTransactions record. All database activities are handled by the FileUploadDALC component.
  4. If #2 and #3 are successful, update the FileTransactions record with status "Completed".
  5. If #2 or #3 fails, update the FileTransactions record with status "Failed".

Step #1 should always execute. Steps #2, 3 and 4 must be treated as one atomic transaction, and if that transaction fails, step #5 (update the FileTransaction to Failed) should be executed. So, this is where we get to take a peek at how we can enlist these COM+ components in a transaction, without breaking a sweat.

Machines that run COM+ also have Microsoft's Distributed Transaction Coordinator (DTC) running as a service. The DTC manages transaction-related activities, including interacting with resource managers such as SQL Server, and collecting votes for transaction commit or rollback as part of the two-phase commit (2PC) process. During the first phase of a transaction, resources are enlisted to prepare to commit. The second phase is when commit or rollback activities are actually executed, as resource managers are notified by the DTC after all votes have been collected. You can enlist your managed components in transactions using automatic transactions (declarative attributes) or manual transactions (lots of work!). Using automatic transactions means applying .NET attributes to components that should participate in transactions when invoked. Attributes can also be applied to methods to automate voting on the current transaction context to commit or roll back. Resource managers such as SQL Server interact with the DTC to receive notification of 2PC events, and have built-in mechanisms to handle each phase.
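Voting need not be automatic, incidentally. A serviced component can cast its vote explicitly through ContextUtil, which is essentially what the AutoCompleteAttribute does for you behind the scenes. A brief sketch (the component and method names here are mine, not part of the sample):

```csharp
using System;
using System.EnterpriseServices;

// Explicit voting with ContextUtil instead of [AutoComplete]. The
// ContextUtil calls are the real System.EnterpriseServices API; the
// component itself is hypothetical.
[Transaction(TransactionOption.Required)]
public class TransferMgr : ServicedComponent
{
    public void Transfer(int fromAccount, int toAccount, decimal amount)
    {
        try
        {
            // ...debit and credit through enlisted resource managers...

            ContextUtil.SetComplete();   // vote to commit
        }
        catch
        {
            ContextUtil.SetAbort();      // vote to roll back
            throw;
        }
    }
}
```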

Applying the TransactionAttribute to UploadMgr with default settings sets TransactionOption.Required on the Value property of the attribute, so that the component becomes the root of any new transaction, or participates in the calling transaction context if one exists. Applied to UploadMgr, this means that UploadFileInfo() becomes the root of a new transaction. It also means that all calls made from UploadFileInfo() to other components (to save the file and write database records) will be treated as a single transaction and rolled back if one fails, so long as those components also support transactions. More specifically, all database insertions will be rolled back if one of them fails, or if the file saving operation fails.

The problem with this approach is that steps #1-#5 will be treated as atomic, when I really want only steps #2-#4 to be atomic. The result is that on failure there will be no FileTransactions record persisted indicating that the upload took place, for reporting purposes. How can we report on Pending, Completed or Failed transactions in real time? If an exception occurs and all database insertions are rolled back, how can we update the FileTransactions record to have "Failed" status? We can't drive this from the Web form, since the entire process is delegated to a COM+ component asynchronously. We can't accomplish it from within the UploadFileInfo() method either, because in the context of a doomed transaction, all resource-managed activities (the database, in this case) will be rolled back. So, no database insertions can be performed even if we catch the exception thrown by the failed transaction.

The issue here is that automatic transactions (that is, the TransactionAttribute and AutoCompleteAttribute) don't give us granular control over how we commit and roll back. With automatic transactions we have a simple programming model for controlling an otherwise complicated process; however, we can't nest transactions, apply the initiation of a transaction to a particular method on a class, or dynamically suppress parts of our code from participating in a transaction at runtime. Writing manual transaction code we could control all of this, but the benefits are hardly worth the risk of introducing complexity and errors, not to mention the lost productivity.

To leverage automatic transactions yet still achieve the goals of steps #1-#5, I made some changes to the code. The transaction root was originally FileUploadMgr, but I created a utility class within the same assembly called UploadUtil and registered it as a serviced component to participate in COM+ transactions. UploadMgr drives the process of invoking UploadUtil methods in the expected order, as shown here:

   public class UploadMgr : ServicedComponent, IUploadMgr
   {
      public void UploadFileInfo(string title, string descrip, 
         string filename, byte[] data)
      {
         UploadUtil util = null;
         int transId = -1;

         try
         {
            // record Pending transaction record
            util = new UploadUtil();
            transId = util.CreateFileTransaction();

            // process file upload
            filename += ".trans" + transId.ToString();
            util.RecordFileTransaction(transId, title, 
              descrip, filename, data);
         }
         catch (Exception ex)
         {
            // A queued server component has no user interface, so log
            // the failure rather than displaying a message box
            System.Diagnostics.EventLog.WriteEntry("FileUploadApp", 
               ex.ToString(), System.Diagnostics.EventLogEntryType.Error);

            // record Failed transaction record
            if (util != null)
               util.RecordFailedTransaction(transId, 1);
            throw;
         }
      }
   }

Calls to UploadUtil methods initiate new transactions since the component supports automatic transactions and each method handles voting automatically thanks to the AutoCompleteAttribute on applicable methods:

   [Transaction]
   public class UploadUtil : ServicedComponent
   {
      [AutoComplete]
      public int CreateFileTransaction()
      {
         FileUploadDALC.IFileUpload obj = new FileUploadDALC.FileUploadDb();
         int transId = obj.CreateFileTransaction();
         return transId;
      }

      [AutoComplete]
      public void RecordFileTransaction(int transId, string title, 
        string descrip, string filename, byte[] data)
      {
         FileArchiveMgr.IArchiveMgr filesaver = 
            new FileArchiveMgr.ArchiveMgr();
         filesaver.SaveFile(title, descrip, filename, data);

         FileUploadDALC.IFileUpload obj = 
           new FileUploadDALC.FileUploadDb();
         obj.SaveFileInfo(transId, title, descrip, filename);

         obj.UpdateFileTransaction(transId, 2);
      }

      [AutoComplete]
      public void RecordFailedTransaction(int transId, int status)
      {
         FileUploadDALC.IFileUpload obj = 
           new FileUploadDALC.FileUploadDb();
         obj.UpdateFileTransaction(transId, status);
      }
   }

When exceptions are thrown up to the UploadMgr component, exception handling code issues a new transaction to update the database with a Failed status for the current transaction identifier.

RecordFileTransaction() specifically performs those steps that should be atomic. Calls to ArchiveMgr and FileUploadDb methods enlist those components to participate in the calling transaction context. However, since the file system is not a resource manager, no action is taken on rollback.
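Since a rollback cannot un-write the file, one common mitigation (not part of the sample) is a manual compensating action: delete the archived file if the transactional work that follows it fails. A sketch, with invented names:

```csharp
using System;
using System.IO;

// Hypothetical helper illustrating a compensating action for the
// non-transactional file system. If the database work throws after the
// file was written, the orphaned file is deleted by hand.
public class CompensatingArchive
{
    // Stands in for the transactional FileUploadDALC work.
    public delegate void RecordUpload(string path);

    public void SaveWithCompensation(string path, byte[] data, 
       RecordUpload record)
    {
        // Step 1: write the file; the file system will NOT roll this back
        using (FileStream fs = new FileStream(path, FileMode.Create))
        {
            fs.Write(data, 0, data.Length);
        }

        try
        {
            // Step 2: the transactional database work
            record(path);
        }
        catch
        {
            // Compensate: remove the file so disk and database agree
            if (File.Exists(path))
                File.Delete(path);
            throw;
        }
    }
}
```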

   [Transaction]
   public class ArchiveMgr : ServicedComponent, IArchiveMgr
   {
      [AutoComplete]
      public void SaveFile(string title, 
         string descrip, string filename, byte[] data)
      {…}
   }

   [Transaction]
   public class FileUploadDb : ServicedComponent, IFileUpload
   {
      [AutoComplete]
      public int CreateFileTransaction()
      {…}

      [AutoComplete]
      public int SaveFileInfo(int transId, 
        string title, string descrip, string filename)
      {…}

      [AutoComplete]
      public void UpdateFileTransaction(int transId, int status)
      {…}
   }

One of the benefits of COM+ transactions is that they support distributed scenarios, such as those shown in Figures 5 and 6.

Conclusion

If I've done my job in this article, you will now have a high-level understanding of some of the design challenges developers face when building scalable and reliable applications. Hopefully those of you new to Enterprise Services, MSMQ and COM+ will be inspired to learn more, so that you can apply your newfound knowledge during the design phase of your next project. Some of the resources I mention below should help you on your way, and with dedicated time spent playing with each of these concepts, you'll be better prepared to work through your application's scalability issues.

Other resources

 

About the Author

Michèle Leroux Bustamante is Principal Software Architect of IDesign Inc., Microsoft Regional Director for San Diego, Microsoft MVP for XML Web Services and BEA Technical Director. She has over a decade of experience developing applications with Microsoft® Visual Basic®, C++, Java, C# and Visual Basic .NET, and working with related technologies such as ATL, MFC and COM. At IDesign Michèle provides training, mentoring and high-end architecture consulting services, focusing on ASP.NET, Web services and interoperability, and scalable and secure architecture design for .NET applications. She is a member of the International .NET Speakers Association (INETA), a frequent conference presenter, conference chair of SD's Web Services track, and is frequently published in several major technology journals. Michèle is also Web Services Program Advisor to UCSD Extension, and is the .NET Expert for SearchWebServices.com. Reach her at mlb@idesign.net, or visit www.idesign.net and www.dotnetdashboard.net.

© Microsoft Corporation. All rights reserved.