.NET Enterprise Services and COM+ 1.5 Architecture

 

By Michael McKeown
Microsoft Corporation

June 2003

Applies to
   Microsoft® .NET Development
   XML Web Services

Summary: Discusses Microsoft .NET Enterprise Services and its use of COM+ middle tier services. Introduces new COM+ 1.5 features as well as design guidelines to take full advantage of these new capabilities in .NET applications. (45 printed pages)

Contents

Introduction
.NET Enterprise Services and COM+
Why a New Version of COM+?
Configurable Isolation Levels
Legacy Component Support
Activation Context
Application Recycling
Disabling or Pausing an Application
Applications as NT Services
Process Dump
Component Aliasing
Public and Private Components
Low-Memory Activation Gates
Process Initialization
Services Without Components
Web Services
Conclusion

Introduction

COM+ 1.5 can be viewed as the chrome on COM+ 1.0. While COM+ 1.0 brought with it internal improvements like contexts, a new thread-pool architecture, and object pooling, COM+ 1.5 carries more visible and functional features for use in a distributed application. If used correctly, these improvements can result in increased performance, better reliability, increased scalability, and simpler manageability of your application.

Some of these now standard features were previously implemented under COM+ 1.0 only by tricky code or administration techniques. For instance, a component can be designated as "private" to your application to prevent it from being created by callers outside that process. Previously this was smartly done by enabling security for a component but not assigning roles to it. Another example is component aliasing, allowing you to have the same binary image in multiple applications on the same machine. Before COM+ 1.5, it took source-level magic with additional ProgIDs and CLSIDs to make this possible.

A majority of these concepts are truly innovative and allow you to take new liberties within your application architecture—like being able to create pools of applications, exposing your COM+ application as a Web service or as a Microsoft® Windows® NT service, and even using COM+ services without your application being configured in COM+. While COM+ is powerful and feature-rich, it's critical to understand the correct application of these features, as it can make or break your application with just the check of a box. This article gives you the knowledge to make the correct decisions in your application design.

To limit the size of this document, I've made the supposition that the reader has a solid understanding of MTS and COM+ 1.0 concepts. Should you seek a better understanding of these, please refer to my MSDN article, COM Threading and Application Architecture in COM+ Applications.

.NET Enterprise Services and COM+

When Microsoft® Transaction Server (MTS) was initially released, some folks looked at it and scoffed, "I'm not doing anything with transactions so don't bother me." Despite the use of "transaction" in its name, MTS was about a whole lot more than just transactional support for components. The fact that it shipped as a separate release from the Windows NT operating system only increased its mystique.

Unlike its predecessor MTS, COM+ ships with the operating system—the 1.0 version with Windows 2000, and the 1.5 version with Windows XP and Windows Server 2003. (Throughout this article I will use the term "Windows XP/Windows Server 2003" to uniformly discuss the COM+ 1.5 platform.) For those who did not previously use MTS as part of their architecture, this bundling has occasionally led to confusion: Should I use COM+? Do I have to use it? How do I use it most effectively?

With the release of the Microsoft® .NET Framework and its auto-everything, attribute-based programming model, this confusion has been further fueled. Developers and architects are asking questions similar to those asked when Microsoft® ActiveX® suddenly appeared a few years back and clashed with OLE and COM. There is a common initial misunderstanding about the relationship between COM+ and Microsoft .NET: How does .NET fit into COM+? Does it replace COM+? Do I even need COM+ anymore?

Well, you can relax. All the features and services that COM+ provides are available to you in the .NET Framework. The .NET Framework does not replace COM+, for it is dependent upon COM+ for all its middle-tier component services. The .NET Framework provides an environment for managed code to make use of COM+—as well as other enterprise service technologies like Microsoft® Internet Information Services (IIS) and Microsoft® Message Queue server (MSMQ)—more easily than could be done in the past.

To add to the confusion, the .NET Framework provides a managed gateway into COM+ called Enterprise Services (ES), exposed through the System.EnterpriseServices namespace. These classes offer ways to make use of these services programmatically in a simpler fashion than in the past. Which version of COM+ they map to depends entirely upon whether the .NET Framework is running on Windows 2000 or Windows XP/Windows Server 2003. Figure 1 below shows the relationship of the technologies.

Figure 1. .NET Framework uses COM+/IIS/MSMQ for its middle tier services.

Attributes within managed code map directly to settings in the COM+ catalog and correspondingly in the Component Services Explorer (CSE). Attributes are used to properly configure a component in COM+ once it's installed. Attribute-based programming is encouraged, as it takes the burden of proper configuration off the administrator. Attributes are compiled into an assembly's meta-data, allowing a component to carry around its behavior wherever its assembly is deployed. This results in a consistent configuration across multiple servers.

You can override some of these attribute values administratively through the CSE and the COM+ catalog. However, there are four attributes whose values are always read from the metadata when a component is loaded and will supersede any settings in the CSE. These are JIT (Just-In-Time) activation, object pooling (although the pool size values set in the CSE are honored), AutoDone ("Automatically deactivate this object when this method returns"), and security at the method level. Use caution when overriding any attribute at the application or component level. The developer of a component defines a particular set of attributes to ensure the component has the correct context and services available when it is instantiated, so that it functions correctly.
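For illustration, here is a minimal sketch (the class name, pool sizes, and method are hypothetical) of how those four metadata-supreme settings appear as attributes in managed code:

using System.EnterpriseServices;

[JustInTimeActivation]                              // JIT activation
[ObjectPooling(MinPoolSize = 2, MaxPoolSize = 10)]  // pool sizes can still be tuned in the CSE
[SecureMethod]                                      // enforce security checks at the method level
public class AccountManager : ServicedComponent
{
    [AutoComplete]   // AutoDone: deactivate (and vote to commit) when the method returns
    public void Credit(decimal amount)
    {
        // business logic elided
    }
}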

Below is a snippet of C# code using the transaction attribute from the System.EnterpriseServices namespace. Since transactions require JIT and JIT requires concurrency, the corresponding transaction Required, Enable Just In Time Activation, and synchronization Required check boxes/radio buttons will be checked in the CSE. This configuration will occur the very first time this component is instantiated—if the assembly has not already been registered manually through REGSVCS.EXE. Manual registration is a great idea, as it can alert you ahead of time to registration errors that would otherwise occur the first time the application is run. Note that any attribute changes to the source code during development will not be reflected in the CSE until the assembly is reinstalled.

using System.EnterpriseServices;
using System.Windows.Forms;

[Transaction(TransactionOption.Required)]
public class MyComponent : ServicedComponent
{
    // …
    public void SillyTransactionManualMethod(bool txCode)
    {
        if (txCode == false)
        {
            MessageBox.Show("Method aborted via ContextUtil.SetAbort");
            ContextUtil.SetAbort();
        }
        else
        {
            MessageBox.Show("Method committed via ContextUtil.SetComplete");
            ContextUtil.SetComplete();
        }
    }
}
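If you do register the assembly manually, a typical command line (assuming the assembly is strong-named and built as MyComponent.dll, a hypothetical name) is:

regsvcs MyComponent.dll

REGSVCS.EXE will create the COM+ application if necessary, register the component, and write the attribute values into the COM+ catalog.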

For more information on applying Enterprise Services attributes to your classes, see Applying Attributes to Configure COM+ Services. You can also define custom attributes (such as department name or author of the code) and access them from consumer code through the System.Attribute class.
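As a simple sketch (the attribute and class names are hypothetical), a custom attribute and its consumer might look like this:

using System;
using System.EnterpriseServices;

// Custom attribute recording the author of a class.
[AttributeUsage(AttributeTargets.Class)]
public class AuthorAttribute : Attribute
{
    private string _name;
    public AuthorAttribute(string name) { _name = name; }
    public string Name { get { return _name; } }
}

[Author("Jane Developer")]
public class PayrollComponent : ServicedComponent
{
}

// Consumer code reads it back through System.Attribute:
// AuthorAttribute author = (AuthorAttribute)Attribute.GetCustomAttribute(
//     typeof(PayrollComponent), typeof(AuthorAttribute));
// Console.WriteLine(author.Name);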

The functionality of COM+ 1.5 is not limited solely to managed .NET applications. Unmanaged Microsoft® Visual Studio® 6.0 code can use both versions of COM+, depending upon the platform it runs on. The bad news is that since you are not using the .NET Framework classes with Visual Studio 6.0, any COM+ 1.5 feature that does require code will take more manual programming, and you will not be able to use attributes within your code that map to COM+ catalog and CSE configuration settings. The good news is that most COM+ 1.5 features are made available to components through CSE configuration settings rather than programmatically.

Why a New Version of COM+?

In my job I spend a lot of time talking to folks about the differences between MTS and COM+ 1.x. I like to use an analogy to help them better understand this relationship. My pre-teen son Kyle is begging me to let him buy a motorcycle. (Where he'll get the $ from is an unsolved mystery.) In his mind he honestly believes that he is ready for a cycle, but from watching him drive in Microsoft® MotoCross Madness® I feel otherwise. From my experience I believe that if I were to agree to his request, he could hurt himself in the process. Well, being the "cool" dad I am, after much begging on his part, let's suppose I agree to let him buy a mini-bike but with stipulations. He can park it in the driveway and even start the engine, but cannot put it into gear.

This parallels MTS, Microsoft, and its customers. When MTS was released, Microsoft basically took its faithful single-user, single-computer developers (of which client-server can be categorized as well) and began to move them into the enterprise. To be fair, most of Microsoft itself was probably learning about the correct way to develop distributed applications just slightly ahead of its customers. With MTS, we gave the average developer a chance at developing a solid distributed application by handling all the infrastructure requirements. This allowed the customer to focus on what they did best—their application business specifics.

After a few years, Kyle has shown he can handle the mini-bike with the stipulations. He wants to take it to the next level—driving to the end of the driveway. I agree and allow him to do so. The one remaining stipulation is he cannot drive on the street, since he does not have enough experience to deal with all that will come his way. For my part, I have listened to him and know better how to help him through this transition.

Enter COM+ 1.0. Our single-system folks have transitioned from the client-server mindset to obtaining resources late and releasing them early with MTS. Through experience they now understand the importance of resource pooling and of not tying up server resources, making short running (transactional) method calls, and of using one object per client. Microsoft, too, has learned a lot from its customers and understands more of what they need to be successful. With COM+ 1.0, we provided our customers the ability to drive to the end of the driveway, as they were ready for it.

Kyle is now of legal age and has matured to a level that he can drive on the street without putting himself in jeopardy. He trades in his mini-bike for a motorcycle and takes off. He understands the risks, benefits, and problems associated with having all this freedom. He knows how a bike handles from his mini-bike experience.

COM+ 1.5 is your motorcycle to get out on the open road and open the throttle. It gives you all the features of COM+ 1.0 along with numerous enhancements to improve the management, availability, and scalability of your application. To get the maximum miles per gallon and safe driving for your application, your responsibility is to understand and implement these technologies in a correct fashion. This document helps you do just that, and to uphold that responsibility to your customers by creating a better, faster, and more reliable application. Let's take a look at these exciting new features.

Configurable Isolation Levels

The first COM+ 1.5 enhancement we'll visit has to do with optimizing the performance of transactional database calls from COM+ applications. Before we discuss the COM+ implementation, let's briefly discuss transactional isolation.

The "I" in transactional ACID properties refers to isolation. Simply stated, regardless of how many simultaneous clients are accessing the same data, any modifications made to its values are isolated from the view of the others until the changes are committed. Your isolation level defines the extent to which changes made to records in a database by other transactions are visible to your transactional operation. For instance, if one transaction modifies a record, but before it commits, another transaction tries to read this data, what will happen to the state of the applications running those transactional operations? This is an application-dependent answer. You could write logic in your code to control this, or more straightforwardly use locking and serialized access to the database to ensure that the "I"solation transactional property is enforced.

Adjusting the isolation level typically affects performance due to its relationship to blocking and serialization. At the same time, it affects the accuracy of the returned result set data. With a lower isolation level, you sacrifice data consistency in exchange for better throughput. There are various shades of isolation, ranging from totally serialized sequential access (highest data consistency protection) to allowing concurrent reading of modified yet uncommitted data (the best performance, but the least data consistency protection).

Transaction duration is also affected by the isolation level. The higher the isolation, the longer it takes to process a transactional operation due to less concurrency between transactions. When a configured transactional component is activated, COM+ tells the Distributed Transaction Coordinator (DTC) to start a transaction and bind it to the component before any class code is executed. When making a transactional method call against a database from a COM+ 1.0 (or MTS) component, DTC sets the default isolation level to SERIALIZABLE, which provides the highest data protection but the least concurrency. Under heavy load with many transactions hitting the same database resources concurrently, a higher isolation level means serialized data access and requests that take longer to process. This can result in DTC timeouts on transactional operations, as the requests may simply not be completed before the DTC timeout period expires. Transactions that would normally commit with a lower isolation level may abort if using a higher isolation setting.

The isolation level is a property of the transaction, not the connection. This means that the duration of the isolation level setting is for the duration of the transaction, not the connection. When a transactional connection is returned to the connection pool, the next consumer of that connection that is part of the same transaction will inherit the isolation level. If a connection is part of a transaction, the connection pool will only hand that connection out for a request running under the same transactional context. When the transaction commits, the connection can then be enlisted in a new transaction, if applicable. It will inherit whatever isolation level that transactional component is configured to use, based upon the transaction property of the requestor's context and not on what was set during the previously committed transaction.

The default isolation level can be overridden programmatically in Transact-SQL (SET TRANSACTION ISOLATION LEVEL) from within your data access code. COM+ 1.5 allows you to more conveniently select the isolation level at the binary component level. This is easier and more flexible than setting it within a TSQL statement in each method individually. More importantly, it applies to all methods and TSQL statements within that component. You must be using transactions (Required, Requires New, or Supported) to set the isolation level. The isolation level can be set through the COM+ Admin SDK as well as within the CSE UI.
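In managed code, the Transaction attribute in System.EnterpriseServices (as of the .NET Framework 1.1) also exposes Isolation and Timeout properties, so the same setting can be carried in the assembly metadata. A minimal sketch, assuming a hypothetical OrderReader component:

using System.EnterpriseServices;

// Runs every method in this component at READ COMMITTED with a 30-second DTC timeout,
// unless overridden in the TSQL itself (SET TRANSACTION ISOLATION LEVEL ...).
[Transaction(TransactionOption.Required,
    Isolation = TransactionIsolationLevel.ReadCommitted,
    Timeout = 30)]
public class OrderReader : ServicedComponent
{
    // data access methods elided
}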

Figure 2 shows the isolation levels from which you can select. Note that additionally you can override the default machine-level DTC transaction timeout setting on a per component basis.

Figure 2. COM+ isolation levels

  • SERIALIZED. This is the default for MTS and COM+ 1.0 (although it can be changed through TSQL). This maximum isolation level allows only one transaction at a time to access rows in a table for read or write operations. There is no possibility of the data changing once the transaction begins reading it. You get a 100% accurate snapshot of data, but at the same time you are effectively blocking others from changing the data, which can hurt throughput. If your queries do a lot of read operations, this level is most likely not for you.
  • REPEATABLE READ. Until the current transaction A ends, other transactions cannot make any changes to existing record data that transaction A is reading. All the data transaction A reads must be committed data. However, other transactions can insert new records into the database while transaction A is reading. This isolation level protects transaction A from seeing those added records by providing it only the original rows available when first accessed.
  • READ COMMITTED. Like REPEATABLE READ, but other transactions can modify existing record data that transaction A is reading. Transaction A can still only read data that has been committed.
  • READ UNCOMMITTED. Like READ COMMITTED, but transaction A can read data that has been modified by other transactions yet not committed. You would use this for a query where you are less concerned about getting an accurate snapshot of data from the table and want to increase throughput by avoiding blocking anything else.
  • ANY. The component is indifferent to the isolation level within which it is being called. If this instance is instantiated as the transactional root, then it will use the default of SERIALIZED. If a transactional leaf, it can accept any isolation level.

The root component node of the transaction determines the isolation level. All subordinate component nodes must have an isolation level less than or equal to that of the root, or COM+ will not allow them to be instantiated. If transactional component A creates transactional component B, component B's isolation level must not be higher than its root's level, or the creation will fail with E_ISOLATIONLEVELMISMATCH. If your component can run with any isolation level, but may not always be a transactional root (marked as "Required" rather than "Requires New" in its transaction property), you may want to mark it as ANY. This allows it to run in any isolation level and be instantiated by components with any isolation level. Be careful if calling from a COM+ 1.5 system into a COM+ 1.0 system. A 1.0 component is treated as SERIALIZABLE by default, so if the calling 1.5 root uses a lower isolation level, the 1.0 component will not be instantiated. Of course, you can explicitly override to a lower isolation level in the 1.0 code if that does not compromise the integrity of your data.

COM+ 1.5 allows you to choose the optimal isolation level for the type of operation you are executing and for your code logic. For instance, suppose you have an operation that may attempt to read a row that could have been deleted by another transaction. If your code is written to properly handle this situation, you could configure a more relaxed isolation level, such as READ_COMMITTED, to possibly increase performance.
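As one illustration of that defensive pattern (the component, table, and connection string are hypothetical), the method below tolerates the row having disappeared rather than relying on SERIALIZABLE to lock it in place:

using System.Data;
using System.Data.SqlClient;
using System.EnterpriseServices;

[Transaction(TransactionOption.Required, Isolation = TransactionIsolationLevel.ReadCommitted)]
public class CustomerLookup : ServicedComponent
{
    [AutoComplete]   // commit on normal return, abort on unhandled exception
    public string GetCustomerName(int customerId)
    {
        using (SqlConnection conn = new SqlConnection(
            "server=(local);database=Sales;Integrated Security=SSPI"))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT Name FROM Customers WHERE Id = @id", conn))
        {
            cmd.Parameters.Add("@id", SqlDbType.Int).Value = customerId;
            conn.Open();
            object name = cmd.ExecuteScalar();

            // At READ COMMITTED another transaction may have deleted the row since we
            // last saw it; handle "not found" here instead of paying for SERIALIZABLE.
            return (name == null) ? string.Empty : (string)name;
        }
    }
}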

If your application dictates, you may be able to compensate for relaxed isolation levels within your middle-tier code. By writing custom compensation or error-handling code to cover conflict scenarios for returned inconsistent data, the database does less work with respect to locking and isolation of your transactional operation. This allows you to use a lower isolation level, possibly yielding scalability or concurrency benefits.

Unfortunately, application code support of a lower isolation level is usually not practical, since the middle tier is typically unaware of other database users or transactional operations working on the data it is using. This code can become very complex and in some cases cannot even be managed at the application level. If you can't do this work programmatically, choose the highest isolation level your application requires and let the database do all the isolation management for you. But ensure your performance does not suffer, especially if your transactional operations span multiple rows and/or tables.

As you can see, when deciding upon the isolation level, it's a tradeoff. A higher isolation level means less concurrency and decreased throughput, but yields higher database consistency. Look carefully at the types of queries being executed. If those queries cannot accurately work with changed but uncommitted data, the best choice is to select a higher isolation level to support this. If certain pages or rows of data in a query do not change very often, or you are only changing a row or two of data, performance will typically increase by selecting a more optimistic isolation level without locking. Don't use transactions for read operations—only for write operations. For components that have both transactional and non-transactional methods, consider using component aliasing (discussed later) and setting not only the transactional, but also the isolation, properties accordingly. Alternatively, you may want to consider Services Without Components (SWC) discussed at the end of this article.

A non-configured COM component making a call into Microsoft® SQL Server™ will use SQL Server's default isolation level of READ COMMITTED. If configured in COM+, it will use the default level of SERIALIZABLE. Although SERIALIZABLE is a more conservative locking mechanism, at times this has led to strange behavior or a decrease in performance that has been improperly blamed on COM+. Database isolation levels are defined by ANSI SQL-92 and thus supported by most major database vendors. That does not necessarily mean, however, that every resource manager supports all levels of isolation.

Legacy Component Support

Application Pooling

Legacy components are those components whose threading model key in the registry is missing or is marked as "Single." These COM servers are typically leftovers from the single-user days of OLE (Object Linking and Embedding). They were not written for a multi-user, multi-threaded environment like COM+, IIS, or .NET, but they still appear occasionally in enterprise applications because of a functional dependency. To handle their aversion to multiple threads and reentrancy, COM binds all class instances of a component marked "Single" to the Main STA of a process. They are all serviced by the single message loop of the Main STA, resulting in increased serialization and decreased throughput.

If you have one of these dinosaurs in your application, you should try to rewrite it using Visual Studio .NET. If you can't, performance and robustness may yet be improved by running it in multiple processes through COM+ 1.5 application pooling. Each activation request on a server-activated component using application pooling will create a new DLLHOST process, up to the maximum configured process value. After the maximum number of processes has been created, incoming activation requests are dispersed across the multiple DLLHOST processes using a round-robin algorithm. (If a user manually starts a process through the CSE, it will be counted towards the pooled limit.)

It may help to view application pooling more generally as "process-level" load balancing. This is a more granular, yet rudimentary, level of load balancing. If using Application Center component load balancing, application pooling takes place one level below it, at the component level.

Application pooling improves fault isolation and availability. When one process crashes, any existing pooled processes for that same application remain up and their clients are unaffected. For single-threaded legacy components, this technology allows better scaling and availability in a fashion similar to that of object pooling for free (or neutral) configured components.

Here are some issues to be aware of when using this feature.

  • Be careful when using application pooling within a legacy component. Its internal architecture and logic may contain code that was most likely written under the assumption that it is running in only one process on a machine. When using the application-pooling model, multiple instances in different processes can concurrently coexist. You are more prone to see this when different instances are trying to access a resource once assumed to be exclusive, but which is now shared. A file handle or a critical section may cause resource-sharing issues. Access serialization due to locking of the registry may occur. This may also be the situation for a COM+ 1.0 component running under COM+ 1.5 and using application pooling.
  • An application configured to run as a service cannot take advantage of application pooling.
  • Memory used by the Shared Property Manager (SPM) is process specific. Application pooling may impact any application that assumes it is using the only instance of the SPM on that machine. There is no longer any common highest-level data store (since components can span processes) for all instances of a COM+ component using application pooling. Alternatively, you can use a cached middle-tier database to store common state that spans not only instances within a process but processes as well. When doing this, you may want to consider using a pooled component that keeps a persistent connection to a database specifically for middle-tier serialization operations (a sketch of such a component follows this list). In reality, this is a much better choice even without application pooling, due to the issues surrounding locking and performance of the SPM.
  • Use care in how you execute a shutdown from within the CSE. If you right-click on the application and click Shut down, you will shut down all running process instances for this application. If shut down from the "Running Processes" folder, you will shut down only that individual process.
  • Conversely, if you manually start a pooled application by right-clicking it in the CSE, COM+ will immediately create all the processes specified by the pool size setting. This is different from instantiating components from code, where the first activation creates the first process, the second activation creates the second process, and so on, up to the maximum pool size setting.
  • With application pooling, you can no longer assume that two instances of the same class instantiated by the same client will be in the same process.
  • Library applications inherit the application-pooling context of their activator.
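Here is a minimal sketch of the pooled, database-backed state store mentioned in the Shared Property Manager bullet above (the connection string, table, and pool sizes are all assumptions):

using System.Data;
using System.Data.SqlClient;
using System.EnterpriseServices;

[ObjectPooling(MinPoolSize = 1, MaxPoolSize = 4)]
public class SharedStateStore : ServicedComponent
{
    // Opened once when the pooled object is created and reused across activations,
    // so callers do not pay connection setup costs on every call.
    private SqlConnection _connection;

    public SharedStateStore()
    {
        _connection = new SqlConnection("server=(local);database=MidTierState;Integrated Security=SSPI");
        _connection.Open();
    }

    public void SaveValue(string key, string value)
    {
        // The state lives in the database, so it is visible to every process in the
        // application pool (unlike the per-process Shared Property Manager).
        SqlCommand cmd = new SqlCommand(
            "UPDATE StateTable SET Value = @value WHERE [Key] = @key", _connection);
        cmd.Parameters.Add("@key", SqlDbType.NVarChar, 64).Value = key;
        cmd.Parameters.Add("@value", SqlDbType.NVarChar, 256).Value = value;
        cmd.ExecuteNonQuery();
    }

    protected override bool CanBePooled()
    {
        return true;   // return the instance (and its open connection) to the pool
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing && _connection != null)
        {
            _connection.Close();
        }
        base.Dispose(disposing);
    }
}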

Legacy Components Folder

In addition to offering support for legacy components in the form of application pooling, COM+ 1.5 has a new Legacy Components folder under each application.

One of the complaints about COM+ 1.0 was that applications (and packages in MTS) using non-configured (legacy) components could not be easily deployed with related configured components. Usually this resulted in a separate, custom installation program, which meant more work, more installation problems, and more user confusion. COM+ 1.5 makes this deployment process easier by allowing you to add a non-configured (including a legacy) component to a COM+ application primarily for the purpose of deployment. By right-clicking on the Legacy Components folder, you can effortlessly add an unconfigured (including a legacy) component to that application through the Legacy Component Import Wizard. You can add a legacy component to only one COM+ application on a machine.

Once added, this component is not considered a fully configured component and has extremely limited access to COM+ services. For instance, it can't use transactions or JIT. Its property pages are completely different from those of a configured COM+ component. They are basically a UI over its registry entries, very similar to a COM server viewed with DCOMCNFG. The functionality of the DCOMCNFG utility has been moved into the COM+ 1.5 CSE. If you type "DCOMCNFG" in the Windows XP/Windows Server 2003 Start/Run box, you will no longer get the DCOMCNFG UI, but rather the CSE. It displays all the non-configured COM components—both local servers and in-process servers. Before Windows XP, versions of Windows displayed only local COM servers and not in-process servers.

From its right-click menu, you can promote a legacy component to a fully configured component, as well as disable it at the component level (see the section on Disabling or Pausing an Application). Disabling is really the only COM+ capability that a legacy component can take advantage of. Because of this limitation, it probably makes sense for you to view a legacy component in this context as a non-configured DCOM component.

When a component configured in the Legacy Components folder is instantiated from an application on the same machine, instantiation occurs as if it were non-configured. (They really are non-configured, despite the UI that may lead you to think otherwise!) An in-process Legacy Components server runs in the process of its caller, again not spawning a new DLLHOST. If you add a local (EXE) server to this folder, it will run in its own process space; it will activate neither in a DLLHOST-hosted process nor within the process of its activating code.

Activation Context

COM+ 1.0 introduced the oft-misunderstood concept of context, replacing the apartment as the innermost entity of execution for a component. When porting MTS applications to COM+, this occasionally caused performance issues due to cross-context interface marshaling not present under MTS/NT4. COM+ 1.0 offered a minimal solution to this in the form of a check box to force activation in the caller's context to eliminate marshaling across context boundaries. Using this option severely limits the use of COM+ services for a component. No interception-based services can be used that require their own context, such as transactions, JIT, and concurrency. Unfortunately, the COM+ 1.0 CSE user interface did not enforce this restriction, and this activation option could be selected improperly along with JIT activation. For more information on this option and other MTS to COM+ porting issues, refer to my MSDN article, Preserving Application Performance when Porting from MTS to COM+.

Within the COM+ 1.5 UI shown in Figure 3, the proper relationships among context-related selections are now enforced. If you choose Must be activated in the caller's context or Must be activated in the default context, any CSE settings you have checked that would prohibit this, such as transactions, will be disabled. The first of these activation options works just like it did in COM+ 1.0, but the latter is new for 1.5.

Choosing to activate your component in the default context means it cannot use any COM+ interception-based services; it always runs in the default context of its apartment, regardless of the context of its activator. This subtly differs from the first option, which always puts you in the context of your caller, which may or may not be the default context. This is a decision you'll have to make by analyzing your application call tree: compatible threading models, who is calling you, whom you are calling, and how frequently these calls occur compared to other call patterns in your application. Visual Studio Analyzer can assist you in this analysis.

Figure 3. Activation options

There is an additional activation option from which to choose: Don't force activation context. This is not really a new activation option, as the COM+ 1.0 runtime always used this implicitly if Must be activated in the caller's context was not selected. It tells COM+ to not force the activated context into either the caller's context or the default context on your behalf. Rather, it makes that decision dynamically depending on what services are configured for the subordinate component and its activator. If using transactions, JIT, or concurrency, this is the option that will be chosen by default and cannot be changed. In the UI, the Enable Just In Time Activation check box is subordinate to this radio button, which further enforces the correct dependency among the services and context activation selected.

Application Recycling

Just like our old, creaky human bodies, the performance of most applications will degrade for various reasons over time. Memory leaks, poor coding practices, bugs, and improper resource management can cause a process to perform poorly, hang, or even crash as time progresses. A common site maintenance action is to regularly recycle (stop and restart) the Web server or a DLLHOST process as a quick fix for known issues and to prevent unknown ones from occurring.

From within the CSE, you can manually recycle an application process on-demand, or use triggers to automate this process. These standard, non-extensible triggers from the UI include time, memory, calls, and activations as shown in Figure 4.

Figure 4. Recycling options

  • Lifetime Limit. Maximum minutes the process can live. This will trigger if none of the other triggers fire beforehand.
  • Memory Limit. If virtual memory exceeds this amount for greater than 1 minute, the trigger fires.
  • Expiration Timeout. Grace period once any trigger fires before shutdown occurs.
  • Call Limit. Maximum number of method calls received before trigger fires.
  • Activation Limit. Maximum number of activations before trigger fires.

Once a trigger event is raised, the Expiration Timeout grace period commences. This is the number of minutes COM+ will wait for clients to complete existing work before recycling occurs. When either the expiration timeout expires or the last client reference is released, the process will be recycled. COM+ will always recycle the process when this time period expires, regardless of whether any outstanding references still exist. This covers the situation where a client has stopped using an object reference but has never (properly) released it. Without this fall-through timeout, recycling could be blocked indefinitely if releasing all outstanding references were the only criterion used.

To avoid complications with existing references during this recycling process, COM+ transparently creates a duplicate of the DLLHOST process associated with an application to service all future object activation requests. The original DLLHOST exists to finish servicing any outstanding reference requirements on the older instances. Once all of the external references to instances in that old process space are released, or the grace timeout period expires, the original DLLHOST is shut down. Through this behavior, application recycling ensures that a client application does not experience a service interruption.

Immediately after a recycling trigger criterion is met, yet before the process is actually shut down, the IComApp2Events::OnAppRecycle2 event is fired. It is raised by the COM+ runtime through the system events publisher (CLSID_ComServiceEvents). Using the standard COM+ event mechanism, you can implement the IComApp2Events interface to subscribe to this and various other system events relating to process termination and loading. In the event-handler routine, a monitoring or logging application can use the data passed in to log or make dynamic lifetime management decisions. For instance, a subscriber client or a monitoring application can release any existing object references it holds in the process about to be shut down. It can then immediately create a new reference to that same class of object in a valid process by calling the appropriate object instantiation API (CoCreateInstance, CreateObject, New, and so on). This prevents the client from calling a proxy that no longer points to a valid instance after the process is recycled.
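The client-side half of that pattern might look like the following sketch (reusing MyComponent from the earlier snippet; the helper class is hypothetical, and the notification plumbing itself is omitted):

using System.EnterpriseServices;

public class RecycleAwareClient
{
    private MyComponent _proxy = new MyComponent();

    // Call this from your recycle-notification handler (for example, a subscriber to
    // IComApp2Events::OnAppRecycle2) before the old DLLHOST process shuts down.
    public void RefreshReference()
    {
        if (_proxy != null)
        {
            ServicedComponent.DisposeObject(_proxy);   // release the old proxy/instance
        }
        _proxy = new MyComponent();                    // new activation lands in a valid process
    }
}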

When using application recycling in conjunction with application pooling, proper lifetime management of synchronization objects and shared resources is paramount. Suppose component instance X (XA) in process A is holding a Win32 mutex synchronization object for a resource (or a file handle) that another instance of X (XB) is waiting on in process B. If process A is recycled and XA does not release that object properly, XB in process B will hang if it is waiting with an infinite timeout. However, if XA subscribes to ::OnAppRecycle2, it can ensure that its cleanup code releases any shared resources it holds when shutting down, minimizing the chance of this happening.
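A defensive sketch of that scenario (the mutex name, timeout, and class are assumptions): waiting with a finite timeout and releasing in a finally block limits the damage if a pooled process disappears while holding the lock.

using System;
using System.Threading;

public class SharedResourceGuard
{
    public void UpdateSharedFile()
    {
        using (Mutex mutex = new Mutex(false, @"Global\MyAppSharedFileMutex"))
        {
            // A finite wait keeps this caller from hanging forever if another
            // process in the pool is recycled while it owns the mutex.
            if (!mutex.WaitOne(TimeSpan.FromSeconds(30), false))
            {
                throw new ApplicationException("Timed out waiting for the shared-file mutex.");
            }
            try
            {
                // ... touch the shared file or other exclusive resource here ...
            }
            finally
            {
                mutex.ReleaseMutex();   // always release, even if the work throws
            }
        }
    }
}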

Recycling occurrences will be spread over a period of time when combining application recycling and application pooling to minimize the effect on clients. Suppose we have X pooled application processes, and the Lifetime Limit is set to ET minutes. The lifetime of each pooled application process will be (Z * (ET / X)) + ET, where Z is the zero-based order of creation. Assuming no other triggers fire, if we have four processes and ET is 60 minutes, their respective lifetimes would be 60, 75, 90, and 105 minutes. COM+ 1.5 will space out the recycling process so that if a process is shut down for any reason, another process will not be shut down until at least ET/X minutes have expired. Once this time has passed, the normal shutdown schedule is used again.

Here are some issues to be aware of when using application recycling:

  • An application configured to run as a service cannot be recycled. This makes perfect sense, as services typically run throughout the lifetime of a system boot.
  • If used in conjunction with application pooling, application recycling will occur on a per-process basis. Only the particular process whose recycling trigger has been activated will be recycled. This preserves availability by always having an application to service new clients while one is being recycled.
  • Applications that activate as "Library" will inherit the application recycling properties of their server-activated activator application. Just like other characteristics of library packages, such as identity, this is a natural expectation as it shares the process of its activator.
  • Application recycling can sometimes yield unexpected results for library-activated components, as the Call Limit is the total of all calls into all components in that process. Suppose server-activated component S instantiates library-activated component L, and the application for component S has its call limit set to 1000. If component L gets 999 calls and S gets only 1 call, the process will recycle. Even if almost all of the calls go to component L, the Call Limit setting will be triggered for that process. L may or may not be written to handle recycling, and this could cause problems.
  • Even if you have selected the Leave running when idle check box in the Activation tab, application recycling will ignore this setting and shut down once the recycling trigger is met.
  • Once application recycling is configured and an application is started, recycling cannot be disabled for any running instances of the application. Additionally, an application cannot refuse to be recycled once a recycle event has been triggered.
  • You cannot prevent an administrator from recycling the application manually or changing the criteria to do so in the CSE. As with any configuration settings, locking down access to the CSE through setting proper permissions on the roles for the system application can assist in making this more secure.

Disabling or Pausing an Application

There are times when an application may need to be temporarily inaccessible to its users. For instance, a database server may need to be updated or rebooted and data components should not make calls to it during that time. Another CPU or additional memory may be added to the application server, which could affect configuration properties in the COM+ catalog. A new version of a DLL or an updated runtime file that a configured component uses may need to be installed. Perhaps an open logging file related to a specific configured application needs to be closed and archived. Or a constructor string used to connect with a database server at activation time needs to be changed, due to a sudden relocation of the database server. Maybe a bank does not want its application run during off-hours so it can disable it nightly and resume it in the morning. In any case, when a problem is occurring, you may want to debug that process.

The motivations to pause or disable an application are numerous. But while closely related, these two options serve two different functions. For simplicity's sake, let's discuss each separately.

Application Pause

Pausing an application comes in handy when trying to attach a debugger to a process, or to halt it to examine the application's state. To best analyze the process, it is sometimes best to stabilize the process by preventing any new activation requests, or method calls, into that process. Pausing holds the process state relatively stable, potentially allowing you to more easily isolate the error. You can dump it (through AutoDump+, Userdump, Dr. Watson, or the CSE), analyze PerfMon counters, or attach a process-level debugger during that paused state.

If pooling an application, you can pause one or more of its DLLHOST processes independently of each other. Pausing is process-specific and is performed from the Running Processes folder context menu. A paused process counts towards the maximum processes allowed based upon the application pooling configuration entry. New requests continue flowing into other non-paused processes in the pool to service existing clients, while the paused process is left alone to be analyzed and debugged. This isolation can be very useful when users say that the problem happens only on the production machine, as it allows debugging and the application's execution to occur in parallel.

If the maximum processes in a pooled application are all paused, and an activation request comes in, that request will fail. Calls from clients that hold existing references to objects in a paused process will not be processed and will return the HRESULT E_APP_PAUSED.

A paused process will no longer be in a paused state following a server reboot. If a paused process is marked for recycling (one of its recycling criteria has been triggered), it will not be shut down until its active lifetime is resumed again from the context menu.

Application Disable

Disabling an application is particularly useful when installing new software to avoid application errors or service interruption during the update process. By disabling an application instead of doing a shutdown during an update, new users cannot instantiate any of its components nor run them until the complete installation process finishes and the application is manually enabled again. Without this option, the administrator has to manually shut down the application but still cannot prevent the process from being restarted on the next activation request (like COM+ failfast does). Disabling is also very handy when, for whatever compelling reason, you want to prevent any further component activations for that application or a particular component.

Disabling is handled through the context menu both at the application level and the individual component level. It is not done through the Running Processes folder as is pausing. When done at the application level, it affects both non-running and running processes. If an application using application pooling is disabled, all its existing processes are disabled. Of course, if an application using application pooling is not running and is disabled, no future instances of its processes will be created until it is enabled again.

If disabling is done at the component level, any existing, non-paused processes with outstanding references to instances of the just-disabled component will continue to service requests for that class of object. This differs from pausing where an existing client cannot make calls into that process with an outstanding reference. However, no new instantiations of the component will occur. New activation requests will not be sent to a specific paused process, or to a disabled application or component. A library-activated application cannot be paused—only a server-activated application can. Both server and library applications, as well as individual components, can be disabled.

Once an application or component is disabled, this status is persisted in the COM+ catalog across reboots of the server. This is useful if we do not want clients to start using an object right after a reboot, for instance when some configuration change must be made before it will work correctly.

Applications as NT Services

In addition to running out-of-process or in-process, COM+ 1.5 server-activated applications have the alternative of activating as a service. The great news is that you do not need to make any modifications to the component's source code. Merely select the Run Application as NT Service check box in the CSE to activate as a service. This option can provide useful advantages, such as more control over the lifetime of an instance and multiple process identity options.

Better Component Lifetime Management

Executing as a service gives you more control over an application's lifetime. In a clustered server environment, it is preferable to ensure the application is up and running when the system boots, as opposed to waiting for a user to request it (during failover) and incurring all the overhead of instantiating the first instance. This increases availability and response time during a failover situation.

More Flexible Identity Selections

When configured to run as a service, an application has additional options with respect to process identity. A COM+ 1.5 service application can run as a specific user account, just as was done under COM+ 1.0. For simplicity during testing, the Interactive User is still very handy. However, this account is not for deployment, and an application cannot be run as a service with this identity. If you are running as a service and want more control over identity, you can take advantage of the special built-in Windows XP/Windows Server 2003 accounts. Let's compare the three of these accounts to help you decide which is best for your situation.

Local System account

With Windows NT 4.0, the Local System account scope was limited to the local machine for which it was defined. This restriction no longer exists under Windows XP/Windows Server 2003 (and Windows 2000). When a Windows XP/Windows Server 2003 process running as Local System accesses network resources, it does so using the computer's domain identity, not as a specific (Local System) user. If your machine name is myXPservermachine and is a member of the myBigDomain domain, the logon event on the server will show the user as myBigDomain\myXPserverMachine$. If using COM+ roles, you can use this myBigDomain\myXPserverMachine$ account just as if it were a normal user. The access token created during authentication includes the SID for the local computer's domain account, plus the SIDs for security groups of which that computer is a member.

For instance, by default all machines in a domain (with the exception of domain controllers, RAS servers, and Internet Authentication servers) are members of the global Domain Computers group. A machine's account is also a member of the Authenticated Users group, which is composed of all machines and users whose identities have been authenticated. This group excludes the Guest account and the anonymous user from IIS. The Authenticated Users group can be composed of authenticated security principals from any trusted domain, not just the current domain. You can make use of this knowledge and these groups when setting up the corresponding COM+ roles for this machine identity to call into.
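As a sketch (the role name and component are hypothetical), a component could gate access to callers, including a machine identity such as myBigDomain\myXPserverMachine$ that has been added to the role administratively:

using System;
using System.EnterpriseServices;

[ComponentAccessControl]
[SecurityRole("TrustedMachines")]
public class InventoryService : ServicedComponent
{
    public int GetStockLevel(string sku)
    {
        // Reject callers (user or machine identities) that are not in the role.
        if (!SecurityCallContext.CurrentCall.IsCallerInRole("TrustedMachines"))
        {
            throw new UnauthorizedAccessException("Caller is not in the TrustedMachines role.");
        }
        return 0;   // actual lookup elided
    }
}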

The Local System account has no network security credentials and connects using a null session when on the network. Its effectiveness is thus diminished when using pipes or accessing shares. If the need arises for this account to share entities, such as synchronization objects, with other applications, it can do so, but care must be exercised to ensure that security is not compromised and that the proper ACL permissions are granted.

The Local System account provides very powerful local privileges. A service running under this account has full access to the local machine. (It is a hidden member of the local Administrators group.) For example, a service running under this account has the ability to modify the registry, it has full file access rights, and can interact with the desktop. A service running under a specific, non-administrator account typically does not have these capabilities. Most services do not need such a high privilege level. For the sake of security, if a service doesn't require this high permission level, and it is not an interactive service, consider using either the Local Service or Network Service accounts.

When you check the Run as NT Service box, a Setup New Service option allows you to list service dependencies, from which you can control the order in which services start. There is also an option to choose the type of error to be raised if the service fails to start at boot time. If an application is running as a service, it cannot take advantage of application pooling or recycling. The name of this account is .\LocalSystem, and it does not have a password.

Local Service account

The Local Service account has more restricted local access than the Local System account. Local Service can be used in a very restricted manner when accessing network resources. It does not have authenticated access and runs under the anonymous logon. Only resources whose ACLs grant access to this account or to the Everyone group can be accessed as Local Service.

It is important to understand that this anonymous logon account is not the same identity used for anonymous IIS access. The token used by IIS for anonymous users is an actual account (by default, IUSR_machinename) with a password and a valid security principal. IIS allows unknown users to run under this principal and controls permissions to Web machine resources through NTFS and ACLs. Since this account uses a password, IIS logs the account on when the service starts. As a result, the IIS anonymous user is a member of the Authenticated Users group, which includes authenticated security principals from any trusted domain, not merely the current domain. Conversely, the Local Service anonymous logon is not a member of the Authenticated Users group. This means that unless a server has opened itself up and given permissions for the desired resources to anyone (through the Guest account with no password), the Local Service account cannot get access to that machine, let alone the resource it requires. In reality, this will not happen, so forget about using this account on the network. The name of this account is NT AUTHORITY\LocalService, and it does not have a password.

Network Service account

The Network Service account is similar to the Local Service account in that it possesses minimal local access permissions. But unlike the Local Service account, it accesses network resources using the credentials of the machine account (just like the Local System account). By default, the machine's account token contains SIDs for the Everyone and Authenticated Users groups. In general, you can assume that a service running as Network Service has access to resources whose ACLs allow access to that account—Everyone, or Authenticated Users. The name of the account is NT AUTHORITY\NetworkService, and it does not have a password.

You will want to base your identity decision upon your requirements for security versus privilege—a double-edged sword. For maximum local privileges but minimized security, Local System is a good choice. In Windows Server 2003, you now have the option of running any COM+ server-activated application under either the Network Service or Local Service account. (This differs from Windows XP, where you can only run with these additional identities if activating as a service.) Both have the same limited degree of permission to resources as members of the Users group (made up primarily of interactive users, authenticated users, and local user accounts). If Local System is too powerful, Local Service limits local permissions (such as using printers, shutting down or locking the machine, or installing certain types of applications) to minimize system damage in case a service running under that account is compromised.

Table 1. Comparison of COM+ application service identities

Service Identity   | Interact w/Desktop? | Network Representation | Network Permissions      | Local Permissions        | Has Password?
Interactive User   | Yes                 | As logged-in user      | As logged-in user rights | As logged-in user rights | Yes
This User          | No                  | As specified user      | As specified user rights | As specified user rights | Yes
Local System       | Yes                 | Domain\machine$        | Limited                  | Full control             | No
Local Service      | No                  | Anonymous logon        | None                     | Limited rights           | No
Network Service    | No                  | Domain\machine$        | Limited                  | Limited rights           | No

There is a check box to allow a service to interact with the desktop. Be careful about displaying a message box from a service: the machine may be an unattended standalone server, and a message box that is never cleared can hang the service. If you install an application in COM+ to run as a service and later remove it, COM+ will automatically remove it from the Services applet for you. Figure 5 below shows the various identity options for a service.

Figure 5. COM+ service identity options

Process Dump

On the Solution Integration Engineering (SIE) support team, process dumps play a key role in our ability to diagnose complex, multi-technology problems in a distributed application. Within the intermixed and potentially fragile environment of a distributed application, numerous Microsoft, custom, and third-party applications vie for shared hardware and software resources. These highly contended resources include CPU cycles, memory, threads, synchronization primitives, locks, database connections, file handles, registry keys, the STA message pump, TCP/IP sockets, and so on. Under load, a misbehaving application can significantly slow down or bring a related process chain to a standstill (hang). Alternatively, buggy code could cause an unhandled exception to be thrown, resulting in a process shutdown (crash). In cases like this, especially in a release environment, typically the only way to track down and isolate the offending module is through a process dump.

COM+ allows you to get a dump of a running DLLHOST process easily from within CSE. At the application properties level, the Dump tab allows you to configure the dumping of a process' memory space to a file. There are two ways to accomplish this:

  1. Non-invasively, by right-clicking on the application in the Running Process folder, and then clicking Dump. This will immediately take a dump of that application process, and the process itself (lifetime) is unaffected. This is typically done for a hung system.
  2. Invasively, by configuring the Enable Process Image Dump option from the Dump tab. When an unhandled (second chance) exception of any type is raised within a process, COM+ catches this and dumps the process before it is terminated. This dump feature itself does not cause a COM+ process to terminate. Rather, this termination occurs as a result of COM+ failfast, a safety measure to shut down a COM+ process when an unhandled exception (like a C0000005 access violation) is raised. Its purpose is to try to minimize any further problems or data corruption in that process space. This option is used to dump a system that is crashing.

COM+ uses the appropriate system APIs to dump a process. It does not rely on any external dump utilities being installed on your system, such as the Microsoft® Debugging Tools for Windows or USERDUMP.EXE. COM+ does only minimal dump file management and tries to avoid overwriting older dump files whenever possible. You can specify the maximum number of dump images in the CSE; once that limit is reached, it will begin overwriting existing dump files. Unless specified otherwise, the dump files will be written to the %systemroot%\system32\com\dmp folder as {application GUID}_YYYY_MM_DD_HH_MM_SS.dmp.

Now don't get all excited at this new feature and run out to cancel your Microsoft support contract. Understanding the binary output of a dump file is not a trivial task. Debugging at the process level is not for the faint of heart and requires advanced skills, experience, and the right symbol files to make productive use of this information. Additionally, symbols (.pdb or .dbg files) for the Microsoft modules, as well as for your own code, are needed. Microsoft has created a public symbol server, with information on how to make use of these public symbols, for those who want to take a crack at it. However, these symbols are not as complete as those used by a Microsoft Support Professional, and may not give you all the information you need to solve the issue yourself. It may be best to FTP your dumps and custom symbols to a Microsoft Support Professional and leave the Zen art of debugging to them.

In case you are interested, however, a great link for learning about process-level debugging is https://www.microsoft.com/ddk/debugging. Here you can find out how to access and use symbols (both Microsoft's and your own) and obtain the latest version of various debugging tools. Even if you don't intend to debug your own dumps, you may want to install the latest debugging bits from this site just to get the AutoDump+ dump utility script. AutoDump+ is not part of COM+, and it is extremely powerful and versatile. It allows you to dump on many types of first- and second-chance exceptions in crash mode, and can be customized easily (it's script code) for your specific needs.

AutoDump+ allows simultaneous dumping of all COM+ and IIS processes at the exact same instant, which is extremely useful in tracking a problem across processes. When using application pooling with the COM+ process dump functionality, the drawback is that you can dump only one process at a time. If you need to dump all processes related to one application simultaneously, you cannot do this from within CSE—you can only dump multiple processes sequentially, which may hinder your isolation efforts.

Another difference between AutoDump+ and the COM+ dump capability: When monitoring a process to dump on an exception, COM+ dumps do not occur on handled first-chance exceptions. They are only triggered by second-chance exceptions, and do not differentiate between the types of unhandled exceptions that trigger a dump. AutoDump+ is capable of making this distinction, dumping mini or full dumps on first-chance exceptions, and creating a log and a full dump on second-chance exceptions. There are many AutoDump+ options and benefits. Refer to Microsoft Knowledge Base article Q286350, HOWTO: Use Autodump+ to Troubleshoot "Hangs" and "Crashes" for an in-depth description of how to use this very versatile tool.

You can't use both AutoDump+ and the COM+ process dump functionality together when trying to obtain a crash dump. Since the COM+ runtime catches the second-chance exception, dumps, and terminates the process, the CDB debugger (automated by the AutoDump+ script code) never gets a chance to handle the second-chance exception and dump the process. In this case, all you will get is the output from the COM+ process dump. In hang mode, you can run them both at different times, since each takes a non-invasive snapshot of the process.

For those of you who are serious debuggers, you may also want to look at the Advanced tab at the application property page level. There is a group box entitled Debugging that allows you to launch the COM+ application in the debugger. If Visual Studio is installed, it will be the default debugger, but you can change it to whatever you want. For instance, to install WINDBG.EXE as the default just-in-time debugger, enter windbg -i from a command prompt. Within this text box, you can enter any command-line options that need to be specified for the debugger, such as any exception types you want to break on. This is very helpful when there are problems in the early stages of an application's lifetime and you need to debug it while it is being initialized.

Component Aliasing

COM+ 1.0 directly supports a component's CLSID existing in only one application on a machine. You need to stretch your creativity for a component to exist in more than one application. For instance, application-sharing functionality can occur at the source level by maintaining more than one IDL file for each component. While each binary would have the same interface signature, their CLSIDs would differ. Clients would pass in different CLSIDs to instantiate specific instances as their application and user needs dictate. Of course, this means more involved maintenance due to multiple source and binary trees.

COM+ 1.5 still has this one-application-per-CLSID restriction, but provides a way to circumvent it by aliasing a component. Multiple representations of the same binary image can exist under different names (aliases) on the same machine. Component aliasing allows sharing at the binary level as determined by COM+. You can take the identical binary object and assign as many aliases to it as required within CSE. There is no longer a need to change any source file.

Let's discuss how this could be useful. Suppose we have a component that opens a TCP/IP connection to a specified application server when instantiated. The IP address or DNS name that a particular client needs is passed to the object through its object construct string, received in IObjectConstruct::Construct. In COM+ 1.0 we would modify source to compile as many different CLSIDs as we needed connections for that class of component. Under COM+ 1.5, we compile only one binary, and then use the CSE aliasing functionality to assign a new ProgID and CLSID for as many aliases (connections) as we need for that object. These different CLSIDs are distributed to the appropriate client applications that need to connect to a certain server. This allows us to create one identical unit of logic that can assume many forms, based upon its configuration and the values passed to it at run time.
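To make the scenario concrete, here is a minimal, hypothetical sketch of such a component reading its constructor string through IObjectConstruct and IObjectConstructString. The class name, member, and business logic are illustrative only; IUnknown, the class factory, and error handling are omitted for brevity.

#include <comsvcs.h>

class CConnection : public IObjectConstruct   // hypothetical connection component
{
    BSTR m_bstrServer;   // hypothetical member: IP address or DNS name of the server
public:
    // Called by COM+ at activation time, before any client method runs.
    STDMETHODIMP Construct(IDispatch* pCtorObj)
    {
        IObjectConstructString* pString = NULL;
        HRESULT hr = pCtorObj->QueryInterface(IID_IObjectConstructString,
                                              (void**)&pString);
        if (FAILED(hr)) return hr;

        // The construct string is whatever was configured in CSE for this
        // alias—for example, "192.168.1.12" or "appserver01".
        hr = pString->get_ConstructString(&m_bstrServer);
        pString->Release();
        return hr;
    }

    // IUnknown and the component's business interfaces are omitted for brevity.
};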

To alias a component, right-click on the component in CSE and click Aliasing on the popup menu. This brings up the following dialog box, which automatically fills in suggested, unique entries in the New ProgID and New CLSID fields. You can accept these defaults or enter your own naming scheme (typically you won't need to change the New CLSID field). Select the COM+ application in which the aliased component will live. Once you hit OK, the new aliased component is added to the application you selected. This newly aliased component can itself be aliased again, with the resulting component being a peer of both itself and its originator. Figure 6 below shows the alias dialog box.

Figure 6. Aliasing a component

Public and Private Components

Some components are security sensitive or inherently private (such as a helper component) in nature. For whatever reason, these components may need to be accessible only to a specific application, and it may make good sense to isolate them from other client applications or other code on that system. Since a COM+ 1.0 component is public, it can be created by anyone who has permission—through a COM+ role—to do so. Roles do not meet every need for access control, and if role-based security is not used, it becomes even harder to control a component's usage.

Previously, to protect a component, slick developers would enable a component's security but not assign any roles to it. Since security is not checked on calls made within the same application, only a component from within that same application could instantiate it. COM+ 1.5 provides a more straightforward mechanism to ensure a component is instantiated only by other components within its same application. To accomplish this, within the Activation tab of a component's Properties, select Mark component as private to application. Refer to the figure in the Activation Context section, which shows the UI to select this option.

A handy benefit of this is that if a developer changes an interface on a private component, no outside clients will be negatively affected, since only code within that application calls it.

Low-Memory Activation Gates

A very useful reliability enhancement of COM+ 1.5 is the ability to prevent the instantiation of an object instance, or the loading of a DLLHOST process, if the percentage of virtual memory available falls below a fixed threshold. This prevents potential problems associated with detecting and properly handling out-of-memory conditions later in the process lifetime—such as in the middle of a distributed transaction when many dependent resources are already locked. This threshold does not prevent a failure that would normally occur anyway in a low memory condition. What it does do is allow logic and error-handling code to handle this situation more gracefully.

If we are going to run into an out-of-memory condition, it's in everyone's best interest to know this upfront and avoid unnecessary and often problematic processing of that condition. Low memory activation gates are like driving cross country from New York to Los Angeles, and finding out two-thirds of the way there in the middle of a scorching desert in New Mexico that you are out of gas, food, and money. You have to desperately scramble to somehow get back to civilization, get replenished, and maybe try again later. Wouldn't you rather come to that realization at an air-conditioned hotel in Texas the day before, where you could more gracefully handle the situation with an ATM machine? When you left on your trip from New York, you obviously did not know you would run out of money, gas, and food at some later point in time. But realizing it in a Texas hotel room sure makes it a lot easier to recover than in 120-degree desert temperatures. Despite the advances in technology, you can't get money from a cactus with your ATM card in the desert—not yet at least.

The virtual RAM thresholds that trigger the memory gates are hard-coded at 95 percent for loading of DLLHOST and 90 percent for the instantiation of a configured COM+ component. Since this threshold is based upon percentage rather than amount of virtual memory, using the /3GB memory switch just gives you a higher number of bytes (but same percentage) to work through before the gate is triggered.

Unfortunately, error- or exception-handling code for an out-of-memory situation is rarely exercised during development and testing. Since it is infrequently executed, the error-handling code may contain bugs that are never uncovered unless this uncommon code path runs. With low-memory gates, you no longer have to rely on that rarely tested error-handling code.

Rather than finding out in the middle of the desert and scrambling to get back to the Texas hotel, E_OUTOFMEMORY is returned by the Service Control Manager (SCM) when instantiating an object or launching a DLLHOST process. This prevents instances from being loaded in dangerously low-memory situations. It is better to plan for this situation in code than to have an exception thrown unexpectedly in the middle of processing a data transaction, and hope it will be handled properly and not crash the system. Additionally, if the relatively untested application error-handling code contains bugs, the results can be catastrophic for the application.
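For illustration, here is a hedged sketch of how a caller might plan for the gate upfront rather than deep inside a transaction. CLSID_MyServerClass, IID_IMyServer, and LogAndScheduleRetry are hypothetical names, not part of any real library.

IMyServer* pServer = NULL;   // hypothetical interface on the configured component
HRESULT hr = CoCreateInstance(CLSID_MyServerClass, NULL, CLSCTX_ALL,
                              IID_IMyServer, (void**)&pServer);
if (hr == E_OUTOFMEMORY)
{
    // The memory gate (or the system) refused the activation before any
    // transactional work began; back off, log, and retry later.
    LogAndScheduleRetry();   // hypothetical helper
}
else if (SUCCEEDED(hr))
{
    // Normal processing with the instance.
    pServer->Release();
}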

It goes without saying that only configured COM+ components can take advantage of this feature. COM+ 1.5 places memory validation gates in front of every configured application and its constituent components; you do not have a choice whether to use this feature or not. An internal worker thread regularly compares the available virtual memory with the threshold values and decides whether or not to allow creation. If you instantiate a non-configured component, these gates are not used, and non-configured components do not come into play when calculating this threshold percentage. Realize also that memory gates have nothing to do with limiting explicit virtual memory allocation done programmatically within a program.

Process Initialization

A common need among those designing COM+ components is more control over resource management during the initialization and shutdown of its hosting process. Specifically, this relates to process-wide resources that the first instance of a class in that process creates, all subsequent instances use, and the last instance releases.

Language-specific lifetime management features (such as a C++, C#, or Microsoft® Visual Basic® .NET constructor/destructor or Visual Basic Class_Initialize/Class_Terminate methods) simply don't cover all scenarios. These mechanisms typically fall short at the component level, but more so at the process level, for a configured object. Primarily this is due to the fact that during these initialization functions of a configured object instance, the MTS/COM+ context associated with that object has not yet been created. When these initialization functions are invoked, the code cannot access any MTS/COM+ services through the object context. Note that Class_Initialize now has access to an object context as of Windows 2000. For more information, see Microsoft Knowledge Base article 278501: PRB: Visual Basic MTS/COM+ Components Should Not Implement Class_Initialize.

To address the need to access the object context during initialization, MTS introduced the optional IObjectControl interface with its Activate and Deactivate methods, which do have access to the context. Activate is invoked each time an instance is activated, and Deactivate is invoked when it is deactivated. If used for allocation and cleanup of that instance's resources only, they typically serve the purpose.
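As a point of reference, here is a minimal sketch of per-instance resource management through IObjectControl. The member and the AcquireDbConnection/ReleaseDbConnection helpers are hypothetical; IUnknown and the component's business interfaces are omitted.

#include <comsvcs.h>

class CDataWorker : public IObjectControl   // hypothetical configured component
{
    void* m_connection;   // hypothetical per-instance resource
public:
    // Invoked by COM+ each time the instance is activated; the object
    // context already exists, so COM+ services are available here.
    STDMETHODIMP Activate()
    {
        m_connection = AcquireDbConnection();   // hypothetical helper
        return m_connection ? S_OK : E_FAIL;
    }

    // Invoked each time the instance is deactivated.
    STDMETHODIMP_(void) Deactivate()
    {
        ReleaseDbConnection(m_connection);      // hypothetical helper
        m_connection = NULL;
    }

    // Return TRUE only if the instance can safely go back into an object pool.
    STDMETHODIMP_(BOOL) CanBePooled() { return FALSE; }

    // IUnknown and business interfaces are omitted for brevity.
};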

Unfortunately, IObjectControl does not offer direct, process-wide resource management support. Creative programmers who tried to work around this limitation have discovered that using IObjectControl to manage process-wide resources in the enterprise space is neither straightforward nor reliable. This limited means of supervising the lifetime of process-wide resources typically leads to resource leaks, unexpected process termination due to thrown exceptions, or a hung system. What is needed is a standard way to perform process-level resource management.

COM+ 1.5 introduces the IProcessInitializer interface for enhanced lifetime management of process-wide resources that do not have direct ties to COM+, such as network connections, shared memory, and so on. This is an optional interface that should be implemented only by those components within server-activated applications that want to participate in process-wide resource management. This interface will not be called for library-activated applications.

  • IProcessInitializer::Startup. Takes one parameter and is invoked when the hosting DLLHOST process is starting up. (In Windows XP and Windows Server 2003, this parameter can be used to extend the default 90-second window; under Windows 2000 SP2 and later it is always NULL.) It is important not to take too much time initializing resources, as the operating system will kill a DLLHOST process if its initialization is not finished within the default ninety (90) seconds. When Startup is called, no COM+ components have yet been instantiated and there are no COM+ contexts with which to work, so initialization through IProcessInitializer should be limited to resources that don't require a COM+ context.
  • IProcessInitializer::Shutdown. Takes no parameters and is called when the process is shutting down. In the case of COM+ failfast, or abnormal program termination, the Shutdown method will not be invoked. A minimal implementation sketch follows below.

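Here is a minimal sketch of a component dedicated to process-wide resource management through IProcessInitializer. The OpenSharedMemory/CloseSharedMemory helpers are hypothetical; IUnknown, registration, and error handling are omitted.

#include <comsvcs.h>

class CProcessResources : public IProcessInitializer   // hypothetical component
{
public:
    // Called once, while DLLHOST is starting up and before any COM+ context
    // exists. Keep this brief—the process must finish initializing within
    // the default 90-second window.
    STDMETHODIMP Startup(IUnknown* punkProcessControl)
    {
        // punkProcessControl is always NULL on Windows 2000; on Windows XP/
        // Windows Server 2003 it can be used to extend the startup window.
        return OpenSharedMemory() ? S_OK : E_FAIL;   // hypothetical process-wide resource
    }

    // Called when the process shuts down normally (not on failfast).
    STDMETHODIMP Shutdown()
    {
        CloseSharedMemory();   // hypothetical helper
        return S_OK;
    }

    // IUnknown and the component's other interfaces are omitted for brevity.
};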
When the very first instantiation request comes in for any component in a server-activated application, the COM+ infrastructure creates the DLLHOST process and loads the DLL that houses the requested object. At the same time, the COM+ runtime queries the COM+ catalog for all configured components in that application that have the InitializesServiceProcess attribute set. (When a component that implements IProcessInitializer is registered, this property is set automatically.) Every component in that application that has this attribute set will be instantiated, its constructors called, and its IProcessInitializer::Startup method called. Once Startup completes, the normal COM+ activation process continues, with IObjectConstruct::Construct and IObjectControl::Activate being called respectively, if implemented. Each object's IProcessInitializer interface pointer is saved for when the process shuts down (discussed shortly). COM+ creates the instance and is the only client to call methods on that object; it thus manages the object's lifetime so that the object will still be around when the process goes to shut down properly.

When the process is being shut down, COM+ accesses the IProcessInitializer pointer it obtained when the process was starting up. Using this pointer, the IProcessInitializer::Shutdown method is invoked for the last instance of any class that implements IProcessInitializer.

When a component with IProcessInitializer is started or terminated, its exported DllMain function is never called for PROCESS_ATTACH or PROCESS_DETACH events. If you have existing code that does initialization or cleanup there, you should move it to the Startup and Shutdown methods.

I bet you're thinking, "If resource allocation is done in multiple components, all loaded at the same time, isn't that a formula for problems?" Absolutely. You can't predetermine the order in which each component's IProcessInitializer::Startup method will be invoked. It is permissible to have more than one component in a server application that implements IProcessInitializer, but doing so increases the logic required within those methods to coordinate management of shared process-wide resources with the other components in that same application that also implement this interface. Therefore, creating one component per application whose sole purpose in life is to allocate/initialize and de-allocate a specific resource is the best way to go with IProcessInitializer. If multiple orthogonal resources are used, you probably want one component implementing IProcessInitializer for each resource.

This feature is available on Windows 2000 under COM+ 1.0 with COM+ hotfix rollup 14 or higher. If you are running Windows 2000 and COM+ 1.0, simply implementing IProcessInitializer and registering the component will not enable the initialization feature. A COM+ rollup does not automatically register COMSVCS.DLL, and thus IProcessInitializer is not a recognized interface on that system. You will have to manually register COMSVCS.DLL through REGSVR32.EXE in order for the IProcessInitializer interface to be known. Refer to Microsoft Knowledge Base articles Q319776: IProcessInitializer Interface Does Not Register with Microsoft COM+ Rollups and Q303890: INFO: Application Startup and Shutdown Events Are Now Available in COM+ for more information on how to accomplish this. This registration issue is fixed in Windows 2000 SP3.

Services Without Components

When you take a close look at the architecture of the .NET Framework, you may realize that its powerful functionality does not always have to be packaged into a configured COM+ component. The .NET Framework ServicedComponent class is used by configured, managed, non-COM classes to make use of COM+ services on Windows 2000. But on Windows XP/Windows Server 2003, both unmanaged and managed code can make use of COM+ 1.5 services without being componentized or configured in COM+, through Services Without Components (SWC). By calling a few APIs, sections of both managed and unmanaged code execute as if housed within a configured COM+ component. Managed code does not need to inherit from ServicedComponent, or be configured, to do so. SWC will be used primarily by C++ developers who understand COM+ and contexts very well, and want to squeeze every bit of performance out of their application.

Let's look at how SWC could assist in an enterprise application. A widespread conflict in component development occurs when methods of a configured data access component have differing transactional requirements. For instance, a method that reads the database does not need to be transactional, but one that updates data does. Before Enterprise Services and SWC, the logical choice was to split the component into both a transactional and non-transactional component, then mark their transactional attributes accordingly. Alternatively, you could create a transactional wrapper component that is called from the client when transactional work is to be done. The wrapper is marked as transaction "Required" and forwards calls to the original component marked as "Supports" (a transaction). For non-transactional work, you could call the original Supports component directly and avoid the transactional wrapper component. However, this requires more than one component, duplication of code, and changes to the client code.

Thanks to SWC, a dual-component architecture is no longer needed in cases like this. From within code we are now able to use a COM+ service only when we need it, avoiding the overhead of a COM+ service in code within the same class that does not require that service. SWC also removes the overhead of being a configured COM+ component for code that only rarely requires Component Services. In fact, componentization of code is no longer required to use Component Services at all. SWC permits development and execution environments that are not component-based, such as script code, to make use of COM+ 1.5 features in a more efficient manner than a configured component. Services can now be applied wherever they are needed, regardless of how the code is packaged. You merely block off sections of code with the SWC APIs (discussed shortly) and use only the specific component services you need. Before we look at these APIs, let's take a look at the CServiceConfig class, a key part of SWC.

CServiceConfig Class

The COM+ CServiceConfig class plays an important role in SWC. It defines and activates services for the service domain entered when using the SWC APIs. You create an instance of CServiceConfig through CoCreateInstance with a CLSID of CLSID_CServiceConfig. (Its ProgID is COMSVCS.CServiceConfig.) The instance returned from this call aggregates the Free Threaded Marshaler (FTM) so it can be called directly from any apartment. Once the object is instantiated, you can QueryInterface for one of numerous interfaces with which to configure the desired Component Services. You can configure more than one service with the same CServiceConfig instance. These interfaces are all named like IServiceXXXConfig, where XXX is the service (Transaction, Synchronization, ThreadPool, IISIntrinsics, and so on) to be configured. Most of them contain methods named similarly as well, such as ConfigureXXX, where XXX is the service to be configured (that is, IServiceSynchronizationConfig::ConfigureSynchronization).

As we'll see shortly, SWC is just that—using services without component instances. This is a new paradigm in that there are no objects associated with the contexts we are creating, using, and deleting. Certain object-based services, such as JIT, component aliasing, object pooling, and so on, do not make sense with SWC and are reserved solely for configured components.

Here is code to configure CServiceConfig to use COM+ transactions. We'll examine the paradigm to use this object shortly.

// Declarations (assumed): these variables would be defined earlier in the
// calling code.
HRESULT hr = S_OK;
IUnknown* pUnknownCSC = NULL;
IServiceInheritanceConfig* pInheritanceConfig = NULL;
IServiceTransactionConfig* pTransactionConfig = NULL;

// Create a CServiceConfig object.
hr = CoCreateInstance(CLSID_CServiceConfig, NULL, CLSCTX_INPROC_SERVER,
                      IID_IUnknown, (void**)&pUnknownCSC);

// Query for the IServiceInheritanceConfig interface.
hr = pUnknownCSC->QueryInterface(IID_IServiceInheritanceConfig,
                                 (void**)&pInheritanceConfig);

// Inherit the current context before using transactions.
hr = pInheritanceConfig->ContainingContextTreatment(CSC_Inherit);

// Query for the IServiceTransactionConfig interface.
hr = pUnknownCSC->QueryInterface(IID_IServiceTransactionConfig,
                                 (void**)&pTransactionConfig);

// Configure transactions to always create a new one.
hr = pTransactionConfig->ConfigureTransaction(CSC_NewTransaction);

// Set the isolation level of the transactions to ReadCommitted.
hr = pTransactionConfig->IsolationLevel(COMAdminTxIsolationLevelReadCommitted);

// Set the transaction time-out to 1 minute.
hr = pTransactionConfig->TransactionTimeout(60);

Service APIs

Now that we have configured the service, let's take a look at the ways to use the CServiceConfig instance. There are two primary modes from which to use the SWC service APIs—inline and batch modes. We define inline mode as using CoEnter/LeaveServiceDomain APIs in a purely synchronous manner. Batch mode uses CoCreateActivity and offers both synchronous and asynchronous processing paradigms.

Inline mode

The CoEnterServiceDomain and CoLeaveServiceDomain API pair wrap code that requires specific Component Services for a moment of their execution time. CoEnterServiceDomain creates a new context with whatever services are specified in CServiceConfig and switches the calling thread of execution into that context with appropriate context transitions.

These APIs can be called from non-configured as well as configured code. An example of the latter is transactional method calls. The transaction attribute lives at the class level, so if it is enabled, all methods of that class use transactions. A configured component that wants to use transactions only on certain methods can use the SWC APIs to wrap only the code that needs a transaction, and not unnecessarily incur transaction overhead for all the other methods. You can even define the granularity down to specific logic paths within a method, so that the method may or may not be transactional depending upon its execution path.

The code that calls CoEnterServiceDomain gets a new context that is not associated with any specific class instance or any other context. This context is a legitimate COM+ context that you can access programmatically through GetObjectContext from your code. Proxies are where interception occurs, and they are specific to a context. Interception ensures that context flows correctly from caller to called component when the two do not share the same context. CoEnterServiceDomain creates a new context and proxy so the context switch can occur; specifically, the CServiceConfig object passed to CoEnterServiceDomain takes care of context switching and interception. If a method call is made to another component within the API block, anything that is needed within the new context must be explicitly made accessible to it. For instance, a "this" pointer or a raw interface pointer won't work in the new context. Since there are two different proxies, these entities will not be marshaled and thus cannot be passed as method parameters. Rather, they can be made accessible to the new context through the Global Interface Table before calling CoLeaveServiceDomain. A better alternative is to never marshal interface pointers in and out of a context, which is typically a good idea anyway in distributed component-based systems.
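For example, here is a hedged sketch of using the Global Interface Table to hand an interface pointer into the new context. pMyItf, IID_IMyInterface, and the surrounding variables are illustrative; error handling is omitted.

#include <objbase.h>
#include <comsvcs.h>

// Obtain the process-wide Global Interface Table (GIT).
IGlobalInterfaceTable* pGIT = NULL;
HRESULT hr = CoCreateInstance(CLSID_StdGlobalInterfaceTable, NULL,
                              CLSCTX_INPROC_SERVER,
                              IID_IGlobalInterfaceTable, (void**)&pGIT);

// Register the pointer before entering the service domain.
DWORD dwCookie = 0;
hr = pGIT->RegisterInterfaceInGlobal(pMyItf, IID_IMyInterface, &dwCookie);

hr = CoEnterServiceDomain(pUnknownCSC);

// Inside the new context, unmarshal a proxy that is valid in this context.
IMyInterface* pItfInContext = NULL;
hr = pGIT->GetInterfaceFromGlobal(dwCookie, IID_IMyInterface,
                                  (void**)&pItfInContext);
// ... use pItfInContext here ...
pItfInContext->Release();

CoLeaveServiceDomain(NULL);

// Remove the GIT entry when it is no longer needed.
pGIT->RevokeInterfaceFromGlobal(dwCookie);
pGIT->Release();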

During CServiceConfig initialization, the apartment type within which the new context will be bound is specified. If this is compatible with the apartment of the code calling the CoEnterServiceDomain API, the code bound by these calls executes in the same apartment (and thread) of its caller, but within its own context. This means cross-context marshaling will occur through lightweight marshaling (interface parameters marshaled only). If a non-compatible apartment type (for example, STA apartment calling to MTA context) is specified, the call will also cross apartment boundaries using traditional and expensive cross-apartment marshaling. This is not an issue as long as you are careful to understand the apartment you are calling from and the threading model of the apartment you are calling into through CServiceConfig. Make sure the apartment and threading models are compatible, which really goes without saying.

Another case to be aware of is the API-wrapped code block creating and calling a method on class X that is non-configured (it's just a normal .NET managed, or an unmanaged class). If the apartment types are compatible, class X will execute under the context of its caller, which is what a non-configured component does. If apartments are incompatible, class X executes under the default context of that process, meaning a cross-context marshaling situation. Whether configured or non-configured, this apartment incompatibility could possibly become a performance drain on your application, especially based upon the types and numbers of parameters being marshaled. As a component developer, you most likely should write components that can run in "Both" the MTA and STA to handle these situations.

When finished using the services, CoLeaveServiceDomain is called to exit and delete that context. If the context was transactional, the code between these two APIs must access the context before exiting and vote on its outcome through IContextState::SetMyTransactionVote. COM+ will notify the resource managers of the outcome of the transactional work when the object is deactivated, just as it is done declaratively.

Here is an example of how to use these APIs to access component services.

// A CServiceConfig object was created previously via CoCreateInstance and
// configured properly for this client:
// hr = CoCreateInstance(CLSID_CServiceConfig, NULL, CLSCTX_INPROC_SERVER,
//   IID_IUnknown, (void**)&pUnknownCSC);

// Enter the Service Domain.
hr = CoEnterServiceDomain(pUnknownCSC);

// Do the work that uses COM+ services here.
DoMyWork();

// Leave the Service Domain.
CoLeaveServiceDomain(NULL);
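And here is a hedged sketch of the transactional vote described above, using CoGetObjectContext and IContextState before leaving the service domain. DoMyWork is assumed, for illustration, to report success or failure.

hr = CoEnterServiceDomain(pUnknownCSC);

// Do the transactional work; assume it returns TRUE on success.
BOOL fSucceeded = DoMyWork();

// Access the context created for this service domain and cast the vote.
IContextState* pContextState = NULL;
hr = CoGetObjectContext(IID_IContextState, (void**)&pContextState);
if (SUCCEEDED(hr))
{
    pContextState->SetMyTransactionVote(fSucceeded ? TxCommit : TxAbort);
    pContextState->Release();
}

// Leaving the domain deactivates the context; COM+ then notifies the
// resource managers of the transaction's outcome.
CoLeaveServiceDomain(NULL);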

Batch mode

The CoCreateActivity API is the less popular and more complex alternative mechanism for using SWC. Don't get confused by its name—this call creates a COM+ context and not an activity. (The newly created COM+ context may or may not create a COM+ activity. In the CSE and various documentation, the terms activity, synchronization, and concurrency all refer to the same entity—a logical thread of execution.) It is within this context that batch work requiring Component Services executes outside of a configured COM+ object. CoCreateActivity creates and returns an IServiceActivity interface pointer. All Component Services work is submitted in batch either synchronously or asynchronously through the two IServiceActivity methods—SynchronousCall or AsynchronousCall.

Both methods take a pointer to an IServiceCall interface. The actual batch work to be submitted lives within the implementation of the IServiceCall::OnCall method of a component that implements this interface. Asynchronous work is no faster than synchronous, as both are done through OnCall. Asynchronous calls increase programming complexity, primarily in dealing with error handling. Asynchronous mode is "fire and forget" work, where you make the call and go on. SWC provides no automated mechanism for the main thread to be notified when the auxiliary thread's work is finished. If you need this notification to occur, you will need to write custom notification code for the thread doing the work asynchronously to notify the main thread that its job is done. Use the synchronous CoCreateActivity mechanism if at all possible, unless a unique situation dictates asynchronous use.

Depending upon the threading model specified when the CServiceConfig-derived object is instantiated, the batch work submitted will run in either an STA or MTA.

Here is code to use the synchronous CoCreateActivity mechanism with the same CServiceConfig object configured above. You could do this work asynchronously by using IServiceActivity::AsynchronousCall instead.

// Create the activity for our services.
hr = CoCreateActivity(pUnknownCSC, IID_IServiceActivity, 
  (void**)&pActivity);

// Do the batch work synchronously and pass 
// in a pointer to an IServiceCall interface. 
// The batch work is implemented in pServiceCall->OnCall().
hr = pActivity->SynchronousCall(pServiceCall);

The pServiceCall parameter is a pointer to an IServiceCall interface, which you implement yourself. It contains only one method, OnCall, which holds the batch work to be executed.
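Here is a minimal, unmanaged sketch of a class implementing IServiceCall; the batch work submitted through IServiceActivity runs inside its OnCall method. DoMyWork is hypothetical, and reference counting is kept deliberately simple.

#include <windows.h>
#include <comsvcs.h>

class CBatchWork : public IServiceCall
{
    LONG m_cRef;
public:
    CBatchWork() : m_cRef(1) {}

    // IUnknown
    STDMETHODIMP QueryInterface(REFIID riid, void** ppv)
    {
        if (riid == IID_IUnknown || riid == IID_IServiceCall)
        {
            *ppv = static_cast<IServiceCall*>(this);
            AddRef();
            return S_OK;
        }
        *ppv = NULL;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef()  { return InterlockedIncrement(&m_cRef); }
    STDMETHODIMP_(ULONG) Release()
    {
        ULONG c = InterlockedDecrement(&m_cRef);
        if (c == 0) delete this;
        return c;
    }

    // IServiceCall: the batch work itself. It runs inside the context
    // created by CoCreateActivity, so the configured services apply.
    STDMETHODIMP OnCall()
    {
        DoMyWork();   // hypothetical worker
        return S_OK;
    }
};

A pointer to an instance of such a class is what you pass as pServiceCall to SynchronousCall or AsynchronousCall.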

Which Mode Should I Use?

Most developers will choose inline mode due to its simplified logic and straightforward programming implementation. Batch mode is not nearly as commonly used as inline. It is useful primarily for those building a custom component-hosting environment where calls would be dispatched to the COM+ thread pool. For example, Microsoft® ASP.NET already has its own thread pool, which is implemented using COM+ threads. If you dispatch work from that thread pool to the COM+ thread pool in the same process through batch mode, you really don't gain anything. On the other hand, if you were going to build your own equivalent of ASP.NET, and you didn't want to implement your own thread pool, you could use batch mode to dispatch to the COM+ thread pool.

Now before you pull all your component code out of COM+ in favor of inline code, realize that the primary services for which SWC makes sense are managing transactions and synchronization. As mentioned above, other services, such as loosely coupled events, queued components, object pooling, and so on, all require actual instantiated objects and cannot use SWC. Remember, this is Services Without Components so you have to expand your traditional viewpoint of COM+ slightly.

Any work done in OnCall must be thread-safe if the context is not using COM+ synchronization, which you enable by calling IServiceSynchronizationConfig::ConfigureSynchronization on the CServiceConfig object used to create the activity. Synchronization is not required, but if you are not using this service, be sure to write your code to handle potentially multiple and concurrent calls into IServiceCall::OnCall.
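For reference, here is a hedged sketch of enabling COM+ synchronization on the same CServiceConfig object (pUnknownCSC) before creating the activity, so that COM+ serializes the calls instead of your code having to be thread-safe.

// Request a new synchronization domain for work dispatched through the activity.
IServiceSynchronizationConfig* pSyncConfig = NULL;
hr = pUnknownCSC->QueryInterface(IID_IServiceSynchronizationConfig,
                                 (void**)&pSyncConfig);
if (SUCCEEDED(hr))
{
    hr = pSyncConfig->ConfigureSynchronization(CSC_NewSynchronization);
    pSyncConfig->Release();
}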

Web Services

One of the primary goals of the .NET Framework is exposing pre-packaged functionality to consumers as a Web service. For example, medical companies could expose diagnostic feedback services to companies whose applications merely gather and assemble data but don't contain the complex and specialized code to produce a diagnosis. Surprisingly, Web services technology is not a Microsoft-proprietary invention; rather, it is implemented using the industry-wide standards of HTTP, XML, and SOAP. This translates into non-Windows (non-DCOM) clients being able to access Web services on Microsoft platforms.

Under COM+ 1.0, non-Windows clients had no way to access the functionality of COM components. Even clients on remote Windows machines had to open a specified port range to allow DCOM through a firewall, which meant additional configuration changes. Another drawback of DCOM is that opening a port range can cause conflicts with other applications, and it may also relax your firewall security policy by opening up a lot of ports. (Alternatively, you can explicitly map an application to a specific DCOM port; see Microsoft Knowledge Base article Q312960: Cannot Set Fixed Endpoint for a COM+ Application.) Either option results in work on the server to configure the ports properly.

But don't dismiss DCOM just yet. It has many positive features that are not available in any other protocol and is still the most powerful way to accomplish RPC to COM components. It is still the preferred way to access configured components from Windows clients. This leads to confusion among developers about the use of Microsoft® .NET Remoting, ServicedComponent classes, COM+ Web services, .NET Web services, and DCOM.

We'll discuss each of these and how they compare to DCOM so you can make your decision. .NET Remoting is evolving to provide new features, just as DCOM did in its early days. As yet, it lacks security when self-hosting, but keep in mind that if you host the remoting session in IIS, you can take advantage of IIS's inherent authentication, authorization, roles, and so on.

Suppose you can't use DCOM for whatever reason. What other options are there? For unmanaged code, you could use the SOAP Toolkit and port 80. A better option may be to use COM+ 1.5 Web services. When compared to DCOM, it gives up a bit of performance, but gains a huge increase in functionality and flexibility—the main purpose behind SOAP anyway. COM+ Web services is primarily focused on cross-platform and non-Windows clients accessing configured COM+ components. But even on Windows clients, COM+ 1.5 Web services can be used as an alternative to DCOM/RPC for remote calls. It adds a Microsoft-specific layer to ease the migration from COM to .NET for Web service providers and consumers. This layer allows RPC-style, text-based remote calls to a COM+ component through .NET Remoting and the SOAP protocol.

Some reasons to use Web services:

  1. Provides cross-platform capability.
  2. Takes advantage of Web service infrastructure.
  3. Loose coupling creates a service-oriented architecture that is more conducive to enterprise development.

On the server side, COM+ 1.5 Web services simplifies configuration and access of remote endpoints. It trivially exposes a component's methods as an XML Web service, without having to change a byte of source code. From the client side, it offers better integration for accessing Web services. Depending upon the activation mechanism, code to invoke methods upon a target Web service component on another machine can look just as if the target lived within the consumer's own assembly.

.NET Remoting and COM+ Web Services

While closely related, .NET Remoting and COM+ Web services serve different functions. COM+ Web services uses .NET Remoting, which is specifically optimized for configurations with .NET running on both the client and server. In a type of environment where you can ensure that both the client and server are running managed consumer and target code, it provides a rich, persistent, and stateful connection similar to DCOM. Even with managed code on both the client and the server, you can use COM+ Web services to communicate cross-machine over SOAP.

.NET Remoting can also be used less optimally (through the SOAP formatter) in environments where the .NET Framework is not on the client. .NET Remoting does not use DCOM and is a less-restrictive alternative for remote method calls between managed clients and servers. .NET Remoting and SOAP combine to allow a more flexible option than DCOM, which would be the only other option if managed or unmanaged COM+ components were called remotely.

Even if you have the .NET Framework on the client and server, remote calls to ServicedComponent components (and unmanaged components) will occur over DCOM by default. To override DCOM, a client configuration file must be loaded from the client machine to allow activation through .NET Remoting. This client file makes client COM+ catalog changes that tell COM+ Web services this is a .NET Remoting SOAP-based activation. If the catalog says to use DCOM, the remoting configuration file will be ignored. Conversely, if the catalog says to use .NET Remoting over SOAP and the configuration file is missing, the activation will fail. This is done transparently for unmanaged components using COM+ Web services. (Again, realize that DCOM is the only protocol that will allow a transaction or security to flow from an activator component to the component it uses on another machine).

A message formatter encodes and decodes calls as they are sent across the channel. .NET Remoting uses either the SOAP or binary message formatter. COM+ Web services uses .NET Remoting through the SOAP formatter. This is slower and less efficient than the binary formatter, but in exchange you get the flexibility needed for unmanaged (and managed) clients or servers. The binary formatter uses a more efficient binary protocol but is not used by COM+ Web services. If the need arose, you could use other formatters with your COM+ or ServicedComponent-managed class. You'll have to write the logic to do this, or just change the generated configuration file in the vroot.

Figure 7 is a diagram of how a remote request is serialized and deserialized by the .NET SOAP message formatter.

Figure 7. Flow and processing of a SOAP message

How to Expose a Component as a Web Service

Before exposing your component as a Web service, make sure the assembly has a strong name. Note that any .NET class must have a strong name to install properly into the COM+ catalog or the GAC. To create a strong-name key, run sn -k myKeyFile.snk from a Visual Studio .NET command prompt in the same directory in which the assembly to be signed resides. Within the assemblyinfo.cs file (created as part of your Visual Studio .NET project), pass the key file name to the AssemblyKeyFile attribute and recompile.

 [assembly: AssemblyKeyFile("mywebkey.snk")]

Install the assembly into COM+ through the CSE, and then add the newly recompiled assembly to the GAC through GACUTIL.EXE. From the Activation tab of the application's property sheet, find the SOAP group box. By merely enabling the check box, and specifying a new or existing IIS virtual directory name, that COM+ application can now function as a hosted SOAP endpoint. .NET Remoting does not provide automatic hosting of a remote component the way DCOM does, but by using COM+ Web services, IIS will host it automatically for you. Once you hit Apply, a virtual directory (VD) in IIS is created as specified in the SOAP VRoot entry. If a VD that already exists is specified, that directory is used. The newly created WINDOWS\System32\com\SOAPVRoots\MyCOMPlusWebService folder houses the generated files and makes them accessible to developers on the Internet. The files generated are as follows:

  • A .disco (discovery) file is generated for use by Web service search engines.
  • The appropriate web.config file is created for the server to manage certain component attributes, such as its lifetime. When managing lifetimes, .NET uses the concept of a lease to control the lifetime of object references passed outside of an application. A time is assigned to the lease; when this value expires, the object's link to the .NET Remoting framework is broken. The instance then becomes eligible for garbage collection once all object references within that AppDomain have been released.
  • If the components are not .NET-managed code (say, Microsoft® Visual C++® 6.0), a .NET interoperability assembly is created in the \windows\system32\com\soapvroots\myvirtualdirectory folder. If managed code, the assembly is placed in the Global Assembly Cache (GAC) to be shared by all applications on that machine. All the other files listed here are installed into the myvirtualdirectory folder.
  • An .aspx file is created that presents a link to the XML WSDL (Web Services Description Language) describing the methods to the client developer. The WSDL is a contract between the service and its clients. As a service provider, you define the services that you expose, the specific capability of each service, and the protocol that a client must follow to access this functionality. The server agrees to supply specific services only if the client sends an appropriately formatted SOAP request. The client downloads the WSDL file and uses the information it contains to properly format the request for a particular capability of that service.
  • By navigating to the URL of this .aspx file, the WSDL information is displayed in the browser. This information is generated by the .aspx file dynamically—you won't find an actual WSDL file on the server. The component can now function just like any exposed Web service on the Internet.

The client ultimately determines the .NET Remoting protocol used. For a component exposed by COM+ Web services, this decision depends upon whether or not the SOAP activation box is checked when the proxy is created during the export process. Whichever proxy is run on the client machine determines the protocol used when method calls are made (a proxy exported without SOAP enabled results in DCOM calls). By exporting an application from the CSE with SOAP enabled, a client proxy .MSI file is created. When subsequently run on the client machine, it configures all method calls for that component to flow over HTTP and port 80 as SOAP messages (rather than RPC calls through the specified DCOM ports, which is what occurs if the SOAP box is not checked). The client does not need to specify a WSDL file in its code. It merely creates the object specifying only a ProgID or CLSID as usual and calls methods on that instance. Looking at the code, you cannot tell whether the object resides remotely or is being accessed through SOAP. We'll look at this code in the Client Activated Object (CAO) section shortly.

.NET Remoting has two modes within which it can function using SOAP. With COM+ Web services, either option is available and again is determined by the client. This handles the different combinations of unmanaged and managed client and server code. You can use COM+ Web services and SOAP directly with a server-activated application. Indirectly, you can manually load a COM+ library application into a .NET executable's process, which opens up a remoting channel for the library application to be accessed directly through .NET Remoting, independently of COM+ Web services.

Client Activated Object (CAO)

This stateful mode of activation requires that the .NET Framework be installed on the client and the server machines, as well as the presence of a SOAP-enabled exported proxy on a client machine. The calls will be made using a type of .NET Remoting called Client Activated Object (CAO). The SOAP-enabled COM+ application must be exported as a client proxy and installed on the client machine by running the related MSI file. The object can then be accessed from the client with unmanaged code identical to creating and using a remote DCOM component.

Dim remObj As CustRemoteObj
Set remObj = CreateObject("CustRemoteObj.1")
remObj.CustRemoteMethod ("input test string")

Let's further define what we mean by "stateful" within this context.

Proxy statefulness. CAO results in stateful server-side instances with respect to the proxy. The client code uses the same proxy throughout its lifetime to make all its remote calls on its held reference. This proxy lives across method calls as long as the object is still within scope in the client code.

Instance statefulness. As with a local activation, if the remotely activated component is configured in COM+ to use JIT, and its done bit is set for each method call return, you will get a different instance with each method call. If JIT (and the done bit) is not set, you will get the same instance for all calls coming from that proxy instance. Without JIT, CAO offers improved call performance over WKO (defined shortly), since only the first call incurs object activation overhead.

CAO instances are thus stateful in that the same instance of an object will persist across method calls as long as that remote component is not configured in COM+ to use JIT. This can offer performance benefits. However, to the client the consistent proxy gives the appearance of the object being stateful, regardless of whether it really is or not. Since these object references are stateful, these references could be passed around and shared (although not a great idea, architecturally speaking, in distributed applications!) through SOAP. The statefulness of an object is dictated by both its COM+ JIT setting and whether it's being CAO activated or not.

The lifetime of a CAO-activated object can also be controlled through the <lifetime> element of the <application> section in the web.config file, which lives in the application's virtual directory on the server. Lifetime control is not the same thing as the object's statefulness or statelessness.

Well-Known Object Activation (WKO)

As you'll see shortly, WKO is a less restrictive .NET Remoting activation method than CAO. Without the presence of a SOAP-enabled proxy on the client, an alternative type of activation occurs called Well-Known Object (WKO) activation. When using WKO, the client code must specify the URL of the WSDL file when creating and initializing the object. A proxy is generated dynamically using the WSDL file at run time, and the URL for it is embedded in the proxy. This results in a stateless architecture, with a new instance servicing each method call. Unlike CAO, you can always associate WKO with stateless objects. WKO is used primarily from non-Windows clients, or clients that cannot run the .MSI export proxy to enable CAO activation.

Here is client code that uses WKO to load the WSDL file and dynamically generate the proxy at run time. In actuality, it is the new WSDL SOAP moniker that creates the proxy. To use this moniker, the .NET Framework is required on the client. If you cannot ensure this is the case, your clients can use the SOAP toolkit to access a WKO server.

MyLocator = "soap:wsdl=http://CustServerWithService/CustWebServObj/CustWebServer.dll?WSDL"
Set CustRemoteObj = GetObject(MyLocator)
result = CustRemoteObj.CustRemoteMethod("Input string")

A SOAP-enabled COM+ 1.5 application automatically provides either CAO or WKO activation for the client. Again, the actual .NET Remoting method is determined by the client—specifically, by the presence or absence of a SOAP-enabled exported proxy on the client machine.

Which Activation Mechanism Should I Use?

The activation mechanism used is determined by the client/server configurations and various application requirements, such as statefulness and lifetime management. CAO is more restrictive and less commonly used, as it requires the .NET Framework on both the server and the client machine. CAO offers the opportunity for statefulness and a persistent connection to a single instance that can be shared by multiple method calls. The lifetime of a CAO-activated object is governed by its lifetime attribute in its web.config file, its lease expiration timeout, or by the JIT attribute and code of the component.

A drawback of CAO is a potential scalability concern: CAO-activated instances not using JIT are not marked for immediate release after every call. If managed code is CAO activated, garbage collection will not occur until the lifetime specified in the web.config file has expired, even if the client is done with the object significantly earlier than that timeout period. In a heavy-load scenario, this could lead to scalability issues, especially if instances are holding contended resources. Thus, if using CAO, try to limit the lifetime, and use JIT to deactivate the component immediately after the last method call.

WKO is more flexible and more commonly used, since it does not require the .NET Framework on both the client and the server. In fact, this is the activation model of choice for non-Microsoft clients. A WSDL file is used to publish a service through WKO to clients. You don't get the benefit of the same object being used across method calls, so if statefulness is needed for the client, you will want to avoid WKO. The overhead of activation on each method call can be significant, especially if the called object needs to acquire or contend for shared resources each time it is created. One approach to overcoming the activation cost of WKO is to use object pooling and JIT on your component. When a method call is made, an instance of that class is pulled out of its class-specific pool, services the incoming call, deactivates, and is returned to the pool.

Now that we've discussed the theory, let's look at how to use COM+ Web services from both the client and the server side.

Server Side Web Services Usage

Figure 8 shows the Activation tab of the application's property page, where you enable the Uses SOAP check box. Specify the name of the IIS virtual directory in the SOAP VRoot text box.

Figure 8. COM+ Web services configuration

At the application level, export the application from CSE as an application proxy and make the location of this file available to any client machines that want to make SOAP calls on this object.

Client-Side Web Services Usage

After exporting the component, run the generated .MSI file on the client machine to enable CAO. This creates remote entries in the client's registry, including an AppID key. Additionally, it installs the application as a proxy application within CSE on the client machine. From within the application's property page, you can set the remote server (MMCKEOWN-XP2) where it will be activated. Figure 9 shows the remote server name field.

Figure 9. Specifying the remote server name

Using this approach, Microsoft® JScript® or Visual Basic 6.0 (unmanaged) code can access a remote Web service using traditional COM activation mechanisms. This is CAO activation, and it works just as it would if the client were creating a local COM object (that is, with CreateObject). You do not need the WSDL file and moniker as you do in WKO activation situations, where you don't have COM+ on the client or cannot get access to the .MSI proxy file.

Enabling components to be exposed as a Web service does not preclude them from being accessed through DCOM. If a client does not execute the SOAP-enabled .MSI proxy, or use the WSDL approach, the client can still access the remote component through DCOM. They just need to make the same registry modifications that a non-SOAP exported .MSI file would make if run on the client. In that case, method calls would not be made using SOAP over HTTP on port 80. Rather, calls would execute using DCOM, and firewall port issues would once again come into play.

SOAP Options

Before we wrap up, let's summarize the options for managed and unmanaged code for SOAP. Note that unmanaged code can use DCOM, and managed code can use the binary formatter, but neither DCOM nor the binary formatter use SOAP.

Unmanaged client code

  1. SOAP Toolkit. (See Note below.)
    • Direct SOAP calls.
  2. .NET Remoting (using SOAP formatter).
    • CAO.

      Needs COM+ SOAP Proxy Export .MSI to run on client.

      Needs Windows XP/Windows Server 2003 on client.

    • WKO.

      Needs .NET Framework on client to use the WSDL moniker.


Note   The use of the SOAP Toolkit is specifically for unmanaged code. There is no reason managed code should ever use it, because the toolkit itself is unmanaged code and you have much better options with managed code. Note that although SOAP is an industry standard, to use it for COM components you also need a WSML (Web Services Meta Language) file. This Microsoft-specific file maps the exposed methods of a Web service to method calls on COM objects. Once you build and register the COM component and its type library, you run it through the WSDL/WSML generator. With the DLL as input, the generator parses the type library and generates a list of COM objects and interfaces. After you select the appropriate COM object(s), the generator produces the appropriate WSDL and WSML files as output.

Managed client code

  1. .NET Remoting (using SOAP formatter).
    • CAO.

      Needs COM+ SOAP proxy exported .MSI file to run on client.

      No changes to source code; activate just as if local.

      Needs Windows XP/Windows Server 2003 on client machine for COM+ 1.5. Does not work for Windows 2000 or Windows NT clients.

    • WKO.

      WSDL file—no export proxy run on client.

      Source code uses moniker to reference remote WSDL file.

      .NET runtime must be on client (will be there anyway for managed client).

      Does not care which Windows operating system is running.

  2. .NET Remoting (in source code).

Conclusion

COM+ 1.5 takes Microsoft's enterprise component architecture to the next level. Scalability is improved through application pooling and the ability to adjust the transactional isolation level for database operations. For system administrators, the ability to disable or pause an application for updates or to use the new process dump feature increases manageability. The capability to recycle an application based upon predefined triggers, and limit activations in dangerously low-memory situations with memory gates, increases an application's availability. Web services and Services Without Components add powerful functionality to client code that otherwise may not be able to use component services. Component aliasing, along with the ability to mark components as private, maximizes application architecture possibilities. More control over process initialization adds better resource management.

You are not required to use the .NET Framework and managed code to take advantage of COM+ 1.5. But the .NET Enterprise Services namespace makes access to component services a lot easier, whether on Windows 2000 with COM+ 1.0 or on Windows XP/Windows Server 2003 with COM+ 1.5. You can install both unmanaged and managed components into COM+ 1.5. Again, note that any .NET class requires a strong name to be installed into the CSE as well as into the GAC.

References

Transactional COM+, Tim Ewald.
50 Ways to Improve Your COM and MTS Applications, Don Box, Tim Ewald, Keith Brown, and Chris Sells.
COM Threading and Application Architecture in COM+ Applications, Mike McKeown, MSDN.
Preserving Performance When Porting From MTS to COM+, Mike McKeown, MSDN.
Discover Powerful Low-Level Programming in Windows XP with New COM+ APIs, Tim Ewald, MSDN.
Understanding Enterprise Services (COM+) in .NET, Shannon Paul, MSDN.
Windows .NET Server and Enterprise Services, Ron Jacobs, MSDN.

About the Author

Mike McKeown has been with Microsoft for 10 years, with 25 years in the IT industry. He has worked in the COM/MTS/COM+ world since 1994. He helped design the Microsoft specification for ActiveX controls and containers. He has published numerous articles on ActiveX, COM, MTS, and COM+ subjects on MSDN, in MSJ, and in MIND magazines. Mike has presented sessions at Tech Ed and numerous software development conferences, and taught distributed application classes for 2 years. For the past four years, as a member of the Microsoft SIE team, Mike has worked onsite with numerous customers, helping them troubleshoot and optimize their COM+ and distributed architectures.