Managing Add-Ins: Advanced Versioning and Reliable Hosting

The Architecture Journal

by Jesse Kaplan

Summary: The Microsoft .NET Framework 3.5, which will ship concurrently with Microsoft Visual Studio code-named "Orcas," introduces a new managed add-in model and the architecture and features behind it. The next version of the Framework continues our strategy of shipping newer, additive versions at a faster pace than we ship full revisions of the underlying runtime. Just as 3.0 was a set of additional assemblies on top of 2.0, 3.5 will be a set of new assemblies on top of 3.0.

Solution Architects and application developers wanting to add extensibility face a common set of technical hurdles. The System.AddIn assemblies that are introduced in this next release aim to address two classes of problems that arise when adding extensibility. In this article, we detail how the new managed add-in model and assemblies address each problem set. We'll then describe some advanced hosting techniques that can be used to ensure resiliency and reliability in the face of misbehaving add-ins.

Contents

Managed Add-In Model Architecture and Versioning
System.AddIn Support for Managed Add-In Hosting
Reliable Hosting Techniques
Wrapping Up
Resources

 

Solution Architects and application developers are typically interested in extensibility for several reasons. One common reason is that their customers each have very specific feature requests that the application itself could never keep up with, so they offer extensibility to let customers fill those needs themselves or buy solutions from a third party. Another common reason is to build a rich community ecosystem around the product: An extensible application engages the community by giving people the opportunity to contribute to the product in the form of these extensions. Regardless of the reasons for wanting extensibility, once application developers go down this path, they'll face a common set of technical hurdles.

Starting with the next release of the .NET Framework, Microsoft is introducing the System.AddIn assemblies to address two classes of problems that many customers are hitting when they try to add extensibility to their application. We call these the "Version 1" and "V.Next" problems. Version 1 problems refer to the set of issues a developer runs into when first adding extensibility to an application: This includes common tasks such as discovering, activating, isolating, and sandboxing the add-ins. The V.Next problems, on the other hand, refer to the set of issues the developer faces as the application changes: keeping old add-ins working on new hosts, getting new add-ins to run on old hosts, and even taking add-ins built for one host and running them on a different host. The new managed add-in model, and the architecture it defines, addresses these V.Next problems, and the features in our new System.AddIn assemblies make solving the Version 1 problems much easier.

Managed Add-In Model Architecture and Versioning

Once an application has shipped (actually, this often happens before it has shipped), developers typically start mapping out all the changes, improvements, and additions they want to make to the application in the next version. The problem developers of extensible applications have to face is that they need to figure out how to do these things while still keeping all the add-ins written against previous host versions working on this new version.

Today this is typically done, in the best cases, by defining new interfaces that inherit from the old ones and having the new host and new add-ins try-cast the objects they receive to see whether those objects implement the new interfaces or just the old ones. The problem is that this forces both the host and the add-ins to be intimately aware of the fact that these interfaces have changed over time, and both have to take on the complexity of remaining resilient in the face of multiple versions. In the managed add-in model, we are introducing a new architecture that allows the hosts and add-ins to each program against their own "view" of the object model and provides host developers a way to put the version-to-version adaptation logic in a separate and isolated assembly.
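
As a concrete illustration of that traditional pattern (the interface names and members here are hypothetical, not from any shipping host), the V2 interface inherits from the V1 interface, and the host must probe each add-in at run time:

// V1 and V2 of a hypothetical add-in interface in the traditional approach.
public interface ICalculatorAddIn                      // shipped with host V1
{
    double Evaluate(string expression);
}

public interface ICalculatorAddIn2 : ICalculatorAddIn  // added for host V2
{
    string[] GetSupportedOperators();
}

public class HostV2
{
    // The host must probe every add-in it loads for the newer interface.
    public void UseAddIn(ICalculatorAddIn addIn)
    {
        ICalculatorAddIn2 v2 = addIn as ICalculatorAddIn2;
        if (v2 != null)
        {
            // New add-in: the richer V2 surface is available.
            string[] operators = v2.GetSupportedOperators();
        }

        // Old or new add-in: the V1 surface always works.
        double result = addIn.Evaluate("2 + 2");
    }
}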

Pipeline Architecture

Figure 1 illustrates the architecture we've designed for facilitating versionable communication between hosts and add-ins. The goal is to provide a stable "view" of the object model to both hosts and add-ins with layers abstracting how that view is marshaled across the isolation boundary, how it is versioned, and even how it is consumed on the other side.

Figure 1. The host/add-in communication pipeline


Each box in Figure 1 actually represents an assembly that contains many different types. Typically, for every custom object type exchanged between the add-in and the host, there will be at least one corresponding type in each view, adapter, and contract component. The host and the add-in are the entities which add real functionality to the system, while the other components are there to represent the object model, marshal it across the isolation boundary, and adapt it as necessary.

Looking at Figure 1, several very important properties of this architecture begin to pop out. First and foremost is the fact that the host/add-in only depends on its view; the view has no dependencies on other pipeline components. This provides the abstraction barrier that allows the host to be completely agnostic of how the add-in, and the communication mechanism between them, is actually implemented—and vice versa. In fact, if you look at the static dependencies, you will note that the entire center section—from adapter to adapter—can be completely replaced without the host or add-in even being aware of it.

Another important facet of this architecture is that this model is in fact completely symmetric across the isolation boundary—a fact illustrated in Figure 2. This means that from the architectural standpoint there is no difference between how hosts and add-ins communicate with each other and version over time. In fact, in our model the only real difference between a host and an add-in is that the host is the one that happens to activate the other: Once activation has taken place, they become nearly equal peers in the system.

Figure 2. Symmetry of communication pipeline


Now that we've described the overall architecture, we can delve into specifics of each pipeline component type and its role in the system.

·         The view component—The "view" component represents, quite literally, the host's and add-in's views of each other, and the types exchanged between them. These components contain the abstract base classes and interfaces that each side will program against and are, in essence, each side's SDK for the other. Hosts and add-ins are statically bound to a particular version of these view assemblies and are thus guaranteed to be able to code against this static view regardless of how the other side was implemented. In V1 of an application, these view assemblies can be very similar (in fact, they can be the same assembly); but, over time, they can drift. In fact, even in some V1 applications, it can be very useful to have a radically different view for each side. It becomes the job of the other components to make sure that even radically different view assemblies can communicate with each other.

·         The contract component—The "contract" component is the only component whose types actually cross the isolation boundary. Its job is a little funny in that it's not actually a contract between the host and the add-in: It can't be, because the host and the add-in never know which contract component is actually used. Instead, the "contract" component actually represents a contract between the two adapters and facilitates the communication between them across the isolation boundary. Because this is the only assembly that gets loaded on both sides of the boundary, it is important that the contract assembly itself never versions: If the adapter on one side expected one version of a contract and the adapter on the other side expected a different version, the contract would be broken and communication would fail.

Although we say a contract can never version, we are not saying that the host and add-in must always use the same one. Since the contract is in fact a contract between adapters, and not the host/add-in, you can change the contract whenever you want; you simply need to build new adapters that can handle that new contract.

The contract's special role in our system requires restrictions on the types that are allowed to be contained inside. At a high level, all types defined in the contract assembly must either be interfaces implementing System.AddIn.Contract.IContract or be serializable value types implementing version-tolerant serialization. In addition, the members of these types—most visibly, the parameters and return values on their methods—must also be types following the same rules. These restrictions ensure that all reference types that cross the boundary implement at least the base set of functionality in IContract required by the system and that the value types are serializable and can be serialized across versions.

·         The adapter component—The adapter component's job is to adapt between the view types and the contract types. In reality there are two types of adapters in each adapter component: view-to-contract and contract-to-view. In almost all cases both host and add-in side adapters contain both types since, in most object models, objects are passed both from the host to the add-in and from the add-in to the host. The adapter does its job by implementing the destination type in terms of its source type.

View-to-contract adapters adapt a view type into a contract type: they take the view type in their constructor and implement the contract interface by calling into that view. If the input or return values of the methods on the contract interfaces are not primitive types that can be passed directly to and returned from the view, it is the adapter's job to call upon other adapters to deal with those types. The contract-to-view adapter does the opposite job: it takes the contract in its constructor and implements the view by calling into that contract. Again, it takes on the responsibility for calling other adapters when the input/return types warrant. A minimal sketch of all three component types follows.
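
Here is that sketch, built around a hypothetical text-processing object model. Every type name is invented for illustration; real pipeline components also carry identifying attributes so that the system can discover them, and a production host-side adapter would typically hold a System.AddIn.Pipeline.ContractHandle to manage the remote object's lifetime.

using System.AddIn.Contract;
using System.AddIn.Pipeline;

// Contract assembly: the only types loaded on both sides of the boundary.
public interface ITextOperationContract : IContract
{
    string Transform(string input);
}

// Host view assembly: what the host programs against.
public abstract class TextOperationHostView
{
    public abstract string Transform(string input);
}

// Add-in view assembly: what the add-in programs against.
public abstract class TextOperationAddInView
{
    public abstract string Transform(string input);
}

// Add-in-side (view-to-contract) adapter: implements the contract by calling
// into the add-in's view. ContractBase supplies the IContract plumbing.
public class TextOperationViewToContractAdapter : ContractBase, ITextOperationContract
{
    private readonly TextOperationAddInView _view;

    public TextOperationViewToContractAdapter(TextOperationAddInView view)
    {
        _view = view;
    }

    public string Transform(string input)
    {
        return _view.Transform(input);
    }
}

// Host-side (contract-to-view) adapter: implements the host's view by calling
// across the isolation boundary through the contract.
public class TextOperationContractToViewAdapter : TextOperationHostView
{
    private readonly ITextOperationContract _contract;

    public TextOperationContractToViewAdapter(ITextOperationContract contract)
    {
        _contract = contract;
    }

    public override string Transform(string input)
    {
        return _contract.Transform(input);
    }
}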

Now that we've defined the individual pieces of the pipeline and their roles, we can put them all together and show how a type gets transferred from one side to another:

·         Start with the instance of the type to be passed: defined by the host or the add-in.

·         Pass the instance to the adapter typed as its view.

·         Construct a view-to-contract adapter with the instance.

·         Pass the adapter across the isolation boundary typed as the contract.

·         Construct a contract-to-view adapter using the contract.

·         Pass the adapter back to the other side typed as the view.

This may seem like a lot of overhead, but it actually only involves two object instantiations. In the context of a call across an isolation boundary (AppDomain or Process), the performance overhead is generally much less than the margin of error of the measurements.

In the steady state, all of the above steps are handled by the chain of adapters already active in the connections between the host and add-in. The system itself only steps in to activate the add-in, pass it back to the host, and thus form the first connection. This activation step is just an example of an object being passed from the add-in side to the host (Figure 3). Notice this diagram actually has slightly different names for some of the boxes because it is referring to specific types within each pipeline component. We only use these names for the types used in the activation pathway because these are the only types in the assemblies that the system infrastructure actually deals with directly.

Figure 3. Activation pathway


Interesting Versioning Scenarios

Typically, the host's view changes over time as new versions of the application are released. It may request additional functionality or information from its add-ins, or the data types that it passes may have more or fewer fields and functions. Previously, because of the tight coupling between the host and add-ins (and their data types), this generally meant that add-ins built on one version of an application couldn't be used on later versions and vice versa. Our new architecture was designed from the ground up to provide the right abstraction layers to allow hosts and add-ins talking to radically different object models to nevertheless communicate and connect with each other.

·         New host/Old add-in—The first versioning scenario that comes to everyone's mind is a new host, with a revised object model, trying to run add-ins built against previous versions of the host (Figure 4). With our new model, if the host view changes from one version to the next, the developer just has to create a second AddInSideAdapter that implements the new contract and converts it to the old AddInView version. (A sketch of such an adapter appears after this list.)

Figure 4. Backward-compatibility: V1 add-in on V2 host


The benefit of this approach is that the old add-ins will just keep working, even with the new changes. The application itself doesn't have to keep track of different versions of the add-ins and only deals with a single host view; the different pipeline components connect to either the old or new add-in view. In these cases, the host is tightly coupled to its view, the add-in is tightly coupled to a different view, but versioning is still possible because those views are not tightly coupled to each other.

·         Old host/New add-in—It will be possible to write an AddInSideAdapter that converts a newer AddInView to the older Contracts. These transformations will be very similar to the ones required to get older add-ins running on newer hosts. (See Figure 5.)

Figure 5. Forward-compatibility: V2 add-in on V1 host


·         Other pipeline scenarios—In addition to these backward-/forward-compatibility scenarios, other interesting scenarios are enabled by this architecture. One such scenario would be for the pipeline developer to build two separate pipelines for the same view types and optimize them for different isolation boundaries—for example, one pipeline that is very fast and is used for AppDomain-isolated add-ins, and a different one for out-of-process add-ins that handles the threading issues.

Or you could go in the opposite direction: Instead of building a pipeline to connect one version of a host to add-ins built against a different version, build one that connects add-ins built for one host to a completely different host. This scenario highlights an important property of this architecture: The developer of the pipeline can be independent from both the host and the add-in, allowing third parties to fill in gaps. If the host developer decides to stop supporting compatibility with the older add-ins, they simply stop building adapters for them, but a customer of the new host (or a hired contractor) could still build and install that adapter, extending the life of the original add-in.
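
As promised above, here is a rough sketch of the backward-compatibility case in Figure 4, reusing the hypothetical types from the earlier pipeline sketch (the V2 contract and its new member are likewise invented): a second add-in-side adapter implements the new contract in terms of the old add-in view, supplying a sensible default where the old view has no equivalent.

using System.AddIn.Contract;
using System.AddIn.Pipeline;

// V2 of the hypothetical contract adds a member the V1 add-in view never had.
public interface ITextOperationContractV2 : IContract
{
    string Transform(string input);
    string GetDisplayName();   // new in V2
}

// Second add-in-side adapter: lets an unchanged V1 add-in satisfy the V2 contract.
public class TextOperationViewToContractAdapterV2 : ContractBase, ITextOperationContractV2
{
    private readonly TextOperationAddInView _v1View;

    public TextOperationViewToContractAdapterV2(TextOperationAddInView v1View)
    {
        _v1View = v1View;
    }

    public string Transform(string input)
    {
        // Unchanged behavior flows straight through to the old add-in.
        return _v1View.Transform(input);
    }

    public string GetDisplayName()
    {
        // The V1 view has no concept of a display name, so the adapter
        // answers on the old add-in's behalf with a reasonable default.
        return _v1View.GetType().Name;
    }
}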

System.AddIn Support for Managed Add-In Hosting

Defining an Object Model, and Making It Easy for Add-In Development

When thinking about exposing extensibility points in their applications, the first concerns of many developers are about defining the object model they wish to expose and making sure that the experience for add-in developers is as simple as possible.

We very much believe that the implementation of the host application should be separate from the object model it exposes to its add-ins: This belief led naturally to a design where the exposed object model lives in an assembly separated out from the host (we call it the "view" assembly) and contains a set of classes—typically, abstract base classes and interfaces—that represent the host's and add-in's views of each other and of the objects they exchange between them.

This definition of a view assembly with abstract base classes and interfaces leads to a programming model for add-ins that is easy and has low overhead: simply add a reference to the view assembly and then inherit from (or implement) the abstract base class or interface the host defined. The base class or interface that the host specifies for add-ins to derive from is called the "AddInBase." In the simplest case, add-in developers just compile their assemblies and drop them in a directory the host is looking at. Usually the add-in developer will also apply a custom attribute to the add-in to identify it to the host, giving the host information about the add-in before it decides to activate it.
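
In code, that experience looks roughly like the following sketch (the SpellCheckerAddInView base class and the ContosoSpellChecker add-in are hypothetical). The [AddInBase] attribute marks the view type that add-ins derive from, and the [AddIn] attribute identifies the add-in and carries metadata the host can read before deciding to activate it:

using System.AddIn;
using System.AddIn.Pipeline;

// In the add-in view assembly: the "AddInBase" that add-ins derive from.
[AddInBase]
public abstract class SpellCheckerAddInView
{
    public abstract string[] CheckSpelling(string text);
}

// In the add-in assembly: reference the view assembly, derive from the
// AddInBase, compile, and drop the result into a directory the host scans.
[AddIn("Contoso Spell Checker", Version = "1.0.0.0",
    Publisher = "Contoso", Description = "Checks spelling in plain text.")]
public class ContosoSpellChecker : SpellCheckerAddInView
{
    public override string[] CheckSpelling(string text)
    {
        // Real checking logic would go here; this sketch flags nothing.
        return new string[0];
    }
}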

Discovery and Activation

Once the application developer has defined the object model, the next steps are to figure out how to discover the available add-ins and activate the ones the host wants. Applications will typically perform these actions at startup or in response to a user request, so the performance of these operations is important. Another common requirement for discovery is to be able to query the system for the available add-ins and get information about each add-in before deciding which add-ins to activate.

System.AddIn exposes this functionality through two main classes: AddInStore and AddInToken. AddInStore contains a set of static methods used to find the available add-ins given the type of add-in requested and a set of locations to look for the add-ins. These methods return a collection of AddInTokens which represent available add-ins and provide information about those add-ins. Once you have decided which AddInTokens to activate, you simply call the Activate method on the AddInToken and receive an add-in of the type requested.

In its simplest form, discovery and activation can be performed with just a few lines (in truth, one line of code is missing from this sample):

IList<AddInToken> tokens =
    AddInStore.FindAddIns(typeof(AddInType), addinPath);
foreach (AddInToken token in tokens)
{
    token.Activate<AddInType>(AddInSecurityLevel.Internet);
}

 

In System.AddIn, we've made sure that applications can include the discovery and activation of add-ins on their startup paths and still have a good user experience. The FindAddIns method is generally safe to call from startup as it doesn't actually load any assemblies (or even read them from disk); instead, it looks in a cache file for the specified directory. This cache file includes information on all the available add-ins in a specific directory as well as the AddInBase of each add-in. Thus, FindAddIns can quickly list all the available add-ins of a given type in each provided directory by reading only that single cache file, rather than opening every assembly in each directory and enumerating its types in search of add-ins.

Of course, now that FindAddIns provides the guarantee that it only ever looks at a cache file, the question is: When does this cache file get generated? This is where the missing line comes in:

AddInStore.Update(addinPath);

 

AddInStore.Update will first determine if the cache of the provided location is up-to-date. If the cache is current, then the method will return very quickly; if not, then it will need to rebuild the cache. We also provide a tool as part of the framework (AddInUtil.exe) that makes it easier to update the cache as part of an installation program. By separating out FindAddIns and Update, and providing a command line tool that performs the same action, we give host applications a lot of flexibility.

If startup performance is critical, the application can decide to only call FindAddIns on startup and either require that add-in developers use AddInUtil as part of their installation or give users the option to initiate an update of the cache and refresh of available add-ins. On the other hand, the application developer can decide to take a performance hit that only occurs when new add-ins are installed, and call AddInStore.Update on startup.
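
Putting those pieces together, a host that is willing to pay the cache-update cost at startup (it is only significant when add-ins have actually changed) might run something like the following sketch; AddInType and addinPath stand in for the host's own view type and pipeline root directory:

// Refresh the cache only if add-ins or pipeline components have changed;
// this returns quickly when the cache is already up-to-date.
AddInStore.Update(addinPath);

// Fast path: reads only the cache file, not the add-in assemblies themselves.
IList<AddInToken> tokens =
    AddInStore.FindAddIns(typeof(AddInType), addinPath);

foreach (AddInToken token in tokens)
{
    AddInType addIn = token.Activate<AddInType>(AddInSecurityLevel.Internet);
    // Hand the activated add-in to the rest of the application here.
}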

Isolation/Sandboxing

Of course, discovering the add-ins and deciding to activate them is only part of the story: You also need to ensure that the add-ins are activated in the desired "environment." Several things make up this "environment":

Isolation level:

·         In-AppDomain

·         Cross-AppDomain

·         Cross-Process

Pooling:

·         In domain with other, host-specified add-ins

·         In its own domain

·         Cross-process in a domain with other, host-specified add-ins

·         Cross-process in its own domain with other domains in the external process hosting other add-ins

·         Cross-process in its own domain, without any other add-ins running in that process

Security level:

·         One of the system defaults: Internet, Intranet, FullTrust

·         Custom PermissionSet specified by the host

By taking a look at the overloads available on AddInToken.Activate, you can see how easy it is for hosts to specify the desired environment. In the simplest case, the host just passes in an enum value specifying Internet, Intranet, or FullTrust, and we'll create an AppDomain with that trust level, activate the add-in in it, and then pass it back. The level of difficulty depends on how much control the host wants: On the high end, it can create an AppDomain on its own, fine-tune it to its needs, and then pass that to us for activation.
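
For example, the two ends of that spectrum might look like the following sketch, continuing with the token variable from the earlier discovery sample and using the Activate overloads that take an AddInSecurityLevel and a host-created AppDomain respectively (the setup values are placeholders):

// Simplest case: the system creates and sandboxes an AppDomain for us.
AddInType sandboxed = token.Activate<AddInType>(AddInSecurityLevel.Internet);

// High-control case: the host builds and tunes its own AppDomain and then
// hands it to the system for activation.
AppDomainSetup setup = new AppDomainSetup();
setup.ApplicationBase = @"C:\MyHost\AddInSandbox";   // placeholder path
AppDomain addInDomain = AppDomain.CreateDomain(
    "AddInDomain", AppDomain.CurrentDomain.Evidence, setup);

AddInType tuned = token.Activate<AddInType>(addInDomain);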

Reliable Hosting Techniques

Reliable Hosting

One issue of prime importance to many developers of extensible applications is the ability to maintain the reliability of the host in the face of buggy add-ins. In some cases, this is important because the host application itself is mission-critical and can't accept the prospect of downtime caused by an add-in; in other cases, the host developer just can't afford the cost of supporting customers coming to them for help due to bugs in third-party extensions. Some hosts simply need to be able to identify and disable the add-in that caused a crash, while others need to make sure the host keeps running regardless of what their add-ins do; each host must assess its own reliability requirements.

There are generally three categories of concerns for host developers when it comes to reliability: corruption of machine state, unhandled exceptions (crashes), and machine resource exhaustion. Each category poses its own unique problems and requires that different actions be taken.

Machine-State Corruption

In some ways, corruption of machine state can be the easiest problem to address because the system itself has the most support for preventing it. Generally, all the host needs to do is grant its add-ins as limited a security PermissionSet as possible, and the .NET Framework's code-access security (CAS) system will ensure that the add-in stays within those bounds. The APIs in System.AddIn make this even easier, either by letting hosts use one of the system-defined security levels (Internet, Intranet, or FullTrust) or by allowing them to define their own custom PermissionSet. The more system resources you want to protect from your add-ins, the fewer permissions those add-ins should be granted when they execute.
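
As a sketch of the custom option (the specific permissions and path are only examples), a host could grant its add-ins the right to execute plus read-only access to a single host-controlled directory and nothing else, using the Activate overload that takes a PermissionSet:

// Requires the System.Security and System.Security.Permissions namespaces.
PermissionSet addInPermissions = new PermissionSet(PermissionState.None);
addInPermissions.AddPermission(
    new SecurityPermission(SecurityPermissionFlag.Execution));
addInPermissions.AddPermission(
    new FileIOPermission(FileIOPermissionAccess.Read, @"C:\MyHost\AddInData"));

// Activate the add-in into a new AppDomain restricted to that grant set.
AddInType restricted = token.Activate<AddInType>(addInPermissions);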

Unhandled Exceptions

Unhandled exceptions start to get more interesting as there are really two types of unhandled exceptions that the host has to be concerned about. The first type includes the exceptions that get thrown to the host during a call from the host into the add-in and on the same thread. To deal with these exceptions, the host needs to add a try/catch block around all of its calls into the add-in and react appropriately when an exception is thrown. The other exceptions are much harder to deal with as they are actually impossible for a host to catch: These are exceptions that occur on threads originating from the add-in or on threads where the host is not lower down on the stack.
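
The first case is straightforward to handle. In this sketch, AddInType, its Transform method, the documentText variable, and the LogAddInFailure helper are hypothetical stand-ins for the host's own view and bookkeeping:

AddInType addIn = token.Activate<AddInType>(AddInSecurityLevel.Internet);
try
{
    // Any exception the add-in throws on this thread surfaces right here.
    string result = addIn.Transform(documentText);
}
catch (Exception ex)
{
    // React appropriately: log it, warn the user, disable the add-in, and so on.
    LogAddInFailure(token, ex);
}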

Starting with the 2.0 version of the Common Language Runtime (CLR), these unhandled exceptions are always fatal to the process; thus, if care is not taken, it becomes very easy for a buggy add-in to take the host down with it. If the host simply needs to disable problem add-ins (often a perfectly acceptable solution for client applications), it can take advantage of the AppDomain class's UnhandledException event. This event fires when a thread originating in the target AppDomain throws an unhandled exception that is about to take the process down. If the host simply wants to identify misbehaving add-ins and disable them, it can subscribe to this event on the AppDomains its add-ins run in; when the event fires, it can record which add-in was running in that AppDomain. The host cannot prevent the process from being taken down, but it can add the offending add-in to a disabled list so that the add-in is skipped the next time the host starts.
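
A minimal sketch of that disable-on-next-run pattern follows. For simplicity it handles the event in the host's default AppDomain, where it is raised for unhandled exceptions on any thread; subscribing on each add-in's AppDomain, as described above, works the same way but requires a marshal-by-reference handler. The crash-log file name and the mapping back to a particular AddInToken are left as host-specific details:

AppDomain.CurrentDomain.UnhandledException +=
    delegate(object sender, UnhandledExceptionEventArgs e)
    {
        // The process is still going down; all we can do is persist enough
        // information to decide which add-in to skip on the next run.
        Exception ex = (Exception)e.ExceptionObject;
        System.IO.File.AppendAllText("addin-crash.log",
            DateTime.Now + " " + ex + Environment.NewLine);
    };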

If the host needs to ensure that a crashing add-in cannot take down the host, then it will need to take further measures and isolate its add-ins in a different process. There is an associated performance hit, but isolation ensures that only the add-in process gets taken down in the case of a failure.

Machine Resource Exhaustion

The final category of problems the host needs to be aware of is the exhaustion of system resources by the add-in. This can take the form of using excessive amounts of memory or even CPU cycles. Within a process, there is no way to limit the resources of add-ins running in different AppDomains, but as soon as you move them into a different process, you can use the operating system to throttle those add-ins. Once you activate the add-in out-of-process, a few additional lines of code are required to achieve this throttling:

AddInProcess addInProcess = new AddInProcess();
AddInType addIn =
    token.Activate<AddInType>(addInProcess,
                              AddInSecurityLevel.FullTrust);
Process tmpAddInProcess =
    Process.GetProcessById(addInProcess.ProcessId);

// Lower the priority of the add-in process below that of the host
tmpAddInProcess.PriorityClass =
    ProcessPriorityClass.BelowNormal;

// Limit the add-in process to 50 MB of physical memory
tmpAddInProcess.MaxWorkingSet = (IntPtr)50000000;

 

Hosts can go even further and use the operating system to monitor everything from the total CPU time consumed by the process to the number of disk I/O requests it is making per second, and then take appropriate action.
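
For instance, continuing from the previous snippet (the five-minute CPU budget and ten-second polling interval are arbitrary choices for this sketch), the host could recycle the external process when the add-in exceeds its budget:

// Poll the add-in process and shut it down if it exceeds a CPU budget.
TimeSpan cpuBudget = TimeSpan.FromMinutes(5);
while (!tmpAddInProcess.HasExited)
{
    tmpAddInProcess.Refresh();
    if (tmpAddInProcess.TotalProcessorTime > cpuBudget)
    {
        // Tears down only the external add-in process; the host keeps running.
        addInProcess.Shutdown();
        break;
    }
    System.Threading.Thread.Sleep(10000);
}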

Wrapping Up

With the addition of the System.AddIn assemblies and the architecture they support, the Microsoft .NET Framework 3.5 delivers a complete managed add-in model, first promised back at PDC 2005. You can expect to see a few additions to our feature set in future betas and even more in future versions, but all you need to build fully extensible, versionable, and reliable hosts is available now and ready for you to use. To learn more about the topics covered in this article or for general information about this add-in model, please visit the add-in team blog on MSDN (see Resources).

Resources

CLR Add-In Team Blog

"CLR Inside Out: .NET Application Extensibility," Jack Gudenkauf and Jesse Kaplan, MSDN Magazine, February and March 2007.

Part 1

Part 2

Microsoft Visual Studio Code-Name "Orcas"

About the author

Jesse Kaplan is a program manager for application extensibility and runtime versioning on the Common Language Runtime team. His past areas of responsibility include compatibility, managed-native (COM) interoperability, and metadata.

 

This article was published in the Architecture Journal, a print and online publication produced by Microsoft. For more articles from this publication, please visit the Architecture Journal Web site.