Performance Characteristics of Windows Workflow Foundation

 

Marc Mezquita
Microsoft Corporation

November 2006

Applies to:
   Windows Workflow Foundation (WF)
   Microsoft .NET Framework 3.0

Summary: Provides information about the performance characteristics of Windows Workflow Foundation. (54 printed pages)

Contents

Introduction
Major Performance Considerations
   Primary Workflow Performance Factors
   Workflow Runtime Services
   Workflow Performance Configuration Settings
Scenario-based Test Results
Performance Case Studies
Component-level Performance
Conclusion

Introduction

This document provides a general discussion of key performance considerations and modeling guidelines that are important when developing applications on top of the Windows Workflow Foundation. It describes the performance characteristics of several illustrative scenarios that include some of the key features in WF. It also includes performance considerations for individual components that help to guide your decisions so you can modify your design or deployment configuration to improve performance or optimize a specific application. You should not interpret the performance characteristics presented in this document as benchmark measurements that all systems can support. Only empirical testing on the target system can provide an accurate benchmark.

This document does not describe how individual features, components, or configurations affect the overall performance of any specific deployment or scenario. This document is intended to be a descriptive guide only; it does not provide prescriptive information or recommendations for optimizing a particular Windows Workflow Foundation scenario.

This document assumes that the reader is familiar with the introductory aspects of Windows Workflow Foundation. It contains four major sections: general workflow performance considerations, workflow scenarios, workflow performance case studies, and workflow component-level performance.

The general workflow performance considerations section describes the most important considerations for tuning and improving performance of WF-based applications. The workflow scenarios section presents three common workflow-based applications, explains the relevant performance characteristics and settings, and shows performance test results. The workflow performance case studies section shows key workflow performance characteristics through tables and charts. Finally, the workflow component-level performance section shows the performance of several WF components.

Major Performance Considerations

This section describes the most important performance considerations in Windows Workflow Foundation.

Primary Workflow Performance Factors

The following factors have the highest impact on performance of applications built with WF.

Persistence of a workflow instance state or a completed context activity

Most workflow applications require saving the workflow instance state at some point. For example, you might want to save the workflow instance state when a transaction scope has completed or to unload a long-running workflow instance that is idle and waiting for some event. In most cases, the workflow instance state will be saved into a durable store (like a Microsoft SQL Server table) to resume execution later.

When the workflow runtime encounters a persistence point during workflow instance execution, it calls the persistence service to do the work. The persistence service will then serialize the workflow state into a stream using BinaryFormatter serialization, optionally compress it, and save it into a durable store. Workflow serialization/deserialization and stream compression/decompression are CPU-intensive operations that can have a big impact on your application performance. The out-of-box SqlWorkflowPersistenceService uses the GZipStream class to compress the workflow instance.
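
The following sketch shows how the out-of-box persistence service is registered with the runtime; the connection string and database name are placeholders for your environment.

```csharp
using System.Workflow.Runtime;
using System.Workflow.Runtime.Hosting;

public static class HostSetup
{
    public static WorkflowRuntime CreateRuntime()
    {
        // Placeholder connection string for the workflow persistence database.
        string connectionString =
            "Initial Catalog=WorkflowPersistenceStore;Data Source=localhost;Integrated Security=SSPI;";

        WorkflowRuntime runtime = new WorkflowRuntime();

        // Register the out-of-box SQL persistence service; the runtime calls it
        // at every persistence point to serialize, compress, and store the state.
        runtime.AddService(new SqlWorkflowPersistenceService(connectionString));
        runtime.StartRuntime();
        return runtime;
    }
}
```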

The following are several ways a workflow state can be saved:

  • Initiated by the workflow runtime in one of the following scenarios:
    • An activity marked with the PersistOnClose attribute.
    • A workflow instance is idle while waiting for an event or a timer to expire and the UnloadOnIdle method in the WorkflowPersistenceService class returns true.
    • The workflow runtime is stopped.
  • Initiated by the workflow host or a custom service using the TryUnload or Unload methods on the WorkflowInstance class.
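
For example, a host can persist and unload an instance explicitly with a sketch like the following; "runtime" is assumed to be a started WorkflowRuntime and "instanceId" a known instance identifier.

```csharp
// TryUnload persists and unloads the instance only if it is ready to be
// unloaded; Unload blocks until the instance reaches a point where it can
// be persisted and unloaded.
WorkflowInstance instance = runtime.GetWorkflow(instanceId);
if (!instance.TryUnload())
{
    // The instance was busy; force a synchronous persist-and-unload.
    instance.Unload();
}
```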

Similarly, there are also several ways to load a workflow instance state in memory:

  • Initiated by the persistence service at workflow runtime startup:
    • The workflow runtime starts a persistence service during startup. For example, the default SqlWorkflowPersistenceService tries to load and run all unblocked workflow instances.
    • A workflow timer expires and the instance is not in memory.
  • Initiated by the workflow host or custom service:
    • Using any public control method in the WorkflowInstance class, such as Load, Resume, Abort, Terminate, or Suspend, when the workflow instance is not in memory.
    • Raising an event through an ExternalDataExchange service when the workflow instance is idle and not in memory.

Due to the performance impact of persisting a workflow instance state, it is important to know the number of persistence points in your workflow and try to reduce them as much as possible while still ensuring fault tolerance and correct execution of your scenario. For example, you may want to add persistence only after a logical group of activities has completed instead of persisting for each single activity in your workflow.

Figure 1. Persistence points in a workflow

The following performance counters can be used to monitor the number of persistence points of a particular workflow application:

  • Workflows Persisted/sec and Workflows Persisted
  • Workflows Loaded/sec and Workflows Loaded
  • Workflows Unloaded/sec and Workflows Unloaded

Workflow instance state size

There is a direct correlation between workflow state size and persistence cost. Complex workflows use more resources in terms of CPU, memory, disk I/O, and disk space than simple ones. It is important in capacity planning to measure the size of the workflow instance when it is running in memory and when it is persisted in a workflow store.

Figure 2 below shows an example of how instance state size depends on the workflow model:

Figure 2. Workflow instance state size

The SQL query below can be used to retrieve the size of a persisted workflow running with the default SqlWorkflowPersistenceService:

SELECT DATALENGTH(state) FROM [InstanceState]
WHERE uidInstanceID = @uidInstanceID

In addition to the internal workflow and activity state, fields added to the root and custom activities will add overhead when the workflow instance is persisted. All fields in the workflow and custom activities will be serialized unless marked with the System.NonSerializedAttribute (this is the default behavior of the GetDefaultSerializedForm method in the WorkflowPersistenceService class, which uses BinaryFormatter serialization). In some cases, it is possible to avoid expensive serialization of some fields by populating them at runtime.
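
For illustration, a field that can be repopulated at runtime can be excluded from the serialized state; the activity and its fields below are hypothetical.

```csharp
using System;
using System.Data.SqlClient;
using System.Workflow.ComponentModel;

// Hypothetical custom activity: "customerId" travels with the persisted
// state, while "connection" is excluded from serialization and must be
// reacquired after the instance is reloaded.
public class OrderLookupActivity : Activity
{
    private string customerId;        // serialized with the instance state

    [NonSerialized]
    private SqlConnection connection; // skipped by BinaryFormatter persistence
}
```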

Workflow activities that require Activity Execution Context cloning

The activity instance state is automatically managed by the workflow runtime through the Activity Execution Context (AEC). The workflow runtime uses the AEC to maintain activity instance state and to run compensation logic when required.

When an activity needs to be re-executed, a new AEC is created using the BinaryFormatter class. This operation can have a performance impact on your workflow application, especially in cases where the AEC being cloned is complex (for example, when there are multiple nested activities).

Some examples of default activities that create new execution contexts are: WhileActivity, ReplicatorActivity, ConditionedActivityGroup, StateActivity, and EventHandlerActivity.

The figure below shows a WhileActivity that spawns a new AEC during each iteration.

Figure 3. Activity execution context cloning with a WhileActivity

For greater performance, activity writers should develop custom activities that are not as flexible as those in the base activity library, but satisfy specific business requirements and provide optimal performance.

Workflow activities with transaction support

Activities that have transaction support, such as the default TransactionScopeActivity and CompensatableTransactionScopeActivity, create a copy of the workflow instance's initial state before execution. This copy is kept in memory and used if the transaction needs to be rolled back. After the activity has executed successfully, the updated instance state is persisted and the transaction is committed.

The workflow runtime handles transactions internally through the .NET System.Transactions infrastructure and therefore transparently escalates local transactions to distributed transactions (MSDTC) when necessary.

Consider implementing the IPendingWork and IWorkBatch interfaces for transactional work instead of the default transactional activities if holding transactions for long periods of time becomes a performance issue (see the WorkflowCommitWorkBatch Service Sample in the Windows Workflow Foundation documentation). The default transactional activities hold a transaction open from the start of the activity to the end, which can reduce efficiency in some scenarios; IPendingWork and IWorkBatch can be used as an alternative.
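
A sketch of the IPendingWork pattern follows; the LogWriter class and its work items are illustrative.

```csharp
using System.Collections;
using System.Transactions;
using System.Workflow.Runtime;

// Illustrative pending-work implementation: items added to the work batch
// are committed inside the runtime's transaction at the next persistence
// point, so no transaction is held open while the activity itself executes.
public class LogWriter : IPendingWork
{
    public bool MustCommit(ICollection items)
    {
        return true; // commit at the next persistence point
    }

    public void Commit(Transaction transaction, ICollection items)
    {
        foreach (string message in items)
        {
            // Perform the durable work here, enlisted in "transaction".
        }
    }

    public void Complete(bool succeeded, ICollection items)
    {
        // Clean up after the batch commits or fails.
    }
}

// Inside an activity's Execute method, work is queued rather than committed:
// WorkflowEnvironment.WorkBatch.Add(new LogWriter(), "order processed");
```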

Workflow activities with compensation support

The serialization of a workflow instance state can become a performance issue with workflows that have nested compensatable activities. For example, if a compensatable activity is re-executed inside a WhileActivity, the workflow instance state grows with each iteration of the loop, increasing the instance state serialization cost.

Two out-of-box activities support compensation: the CompensatableSequenceActivity activity, which supports only compensation, and the CompensatableTransactionScopeActivity activity, which also supports transactions, similar to the TransactionScopeActivity activity. If only compensation support is needed, use the CompensatableSequenceActivity activity to avoid the overhead of transaction support.

Workflow activity execution tracking

The number of tracking events per workflow can significantly affect performance. You should review your tracking profile and reduce the number of tracking events as much as possible.

Workflow activity complexity

Many different factors affect activity execution performance. In general, simple activities that derive from the base Activity class and have a few fields and properties provide the best performance and the least serialization overhead.

Some of the important factors that affect activity execution performance are:

  • Number of fields
  • Properties and dependency properties being accessed
  • Services being consumed
  • Workflow queues being used
  • Activity execution code overhead

The activity validation logic is also an important factor to consider, even though it affects only the first execution of the activity and can be disabled (see workflow settings in this section below).

Composite activities, such as SequenceActivity and EventHandlingScopeActivity, add more serialization overhead, because they are more complex than non-composite activities. A composite activity has to schedule the execution of child activities, so there is an extra performance penalty associated with these types of activities. In addition, all performance factors listed in the bulleted list and the previous paragraph also apply to composite activities.

Workflow activation cost

Workflow activation happens when a workflow host calls the WorkflowRuntime.CreateWorkflow method. The first time this method is called for a workflow type, an activity tree is created and the workflow's activities are validated. On subsequent executions, the activity tree is already in memory and the workflow has been validated, so the performance cost of calling this method is significantly reduced: the workflow runtime only has to create the activity objects in the graph so they are ready for execution.

It follows that workflow activation time increases with the number of activities in the workflow. Activity complexity, discussed in the previous section, also affects workflow activation time.
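
The activation path can be sketched as follows; ShoppingCartWorkflow is a placeholder workflow type and "runtime" an already-started WorkflowRuntime.

```csharp
// First call for this type: builds and validates the activity tree.
WorkflowInstance first = runtime.CreateWorkflow(typeof(ShoppingCartWorkflow));
first.Start();

// Subsequent calls for the same type reuse the cached, validated tree, so
// only the activity objects for the new instance have to be created.
WorkflowInstance second = runtime.CreateWorkflow(typeof(ShoppingCartWorkflow));
second.Start();
```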

Workflow ExternalDataExchange data and workflow parameters

Data passed to a workflow through an EventActivity is serialized and queued into a workflow queue. Therefore, there is a serialization/deserialization cost associated with this operation, and performance varies depending on the type complexity and the number of arguments being passed to the workflow event activity.

The number of workflow input/output parameters is also a performance factor to consider. The performance overhead caused by the use of a few workflow parameters is usually minimal.

Workflow declarative rule conditions and policy activity

In general, code conditions are evaluated more quickly than declarative rule conditions. Declarative rule conditions have to be extracted from the workflow assembly manifest resource and deserialized before execution. Code conditions are not subject to this additional processing.

In many cases, declarative rule conditions provide adequate performance, but in some performance-critical applications it is recommended that you avoid them in favor of code conditions.

If policy activity or rule condition performance is an issue in your application, it is possible to write a workflow service that caches rule definitions and then use a custom activity to retrieve a rule set definition from the cache and execute it. In other cases, it is better to build the rule set in code. Using the RuleEngine class directly is the optimal way to execute rule sets.
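
As an illustrative sketch (the Order type, the order object, and cachedRuleSet are assumptions; error handling is omitted), a validated RuleEngine can be built once and reused:

```csharp
using System.Workflow.Activities.Rules;

// Build the engine once per rule set and cache it; construction validates
// the rules, so reusing the engine avoids repeated deserialization and
// validation. "cachedRuleSet" is an assumed, already-loaded RuleSet and
// "Order" a hypothetical target type.
RuleValidation validation = new RuleValidation(typeof(Order), null);
RuleEngine engine = new RuleEngine(cachedRuleSet, validation);

// Execute the cached engine against each target object.
Order order = new Order();
engine.Execute(order);
```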

Rule set priorities and the chaining behavior should be carefully set because they can have a big effect on your application's performance.

Workflow instance dynamic update

Dynamic update is the mechanism for updating a workflow instance at runtime. To prevent race conditions, the workflow instance is first suspended before the host application authors and applies the dynamic changes.

Dynamic update is an expensive operation due to the suspension of the instance, cloning, validation, and application of the changes. This useful feature should be used with its performance implications in mind and should be considered carefully for high-throughput workflow applications.

Workflow dependency properties

A dependency property is a powerful mechanism that helps you expose activity properties so they can be bound and shared within a workflow instance. There is a small performance overhead associated with this property type, especially when many dependency properties are being accessed from the workflow, but in most cases the overhead should be minimal.

Dependency properties should be avoided when regular properties or fields can be used.

Workflow state machine root activity

Workflow state machine transitions in and out of deeply nested states are more expensive due to activity tree navigation and AEC cloning. You should avoid deeply nested states as much as possible.

Workflow Runtime startup cost

Creating a WorkflowRuntime object and calling the StartRuntime method is an expensive operation due to retrieving the runtime configuration parameters and starting all default and custom workflow services, such as persistence services, tracking services, or transaction support services.

Ideally, you should use only one WorkflowRuntime object per app-domain and reuse it for workflow instance execution to minimize the creation and startup performance overhead.
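
One common way to follow this guidance is a simple host wrapper; this is a sketch, and the names are illustrative.

```csharp
using System.Workflow.Runtime;

// Illustrative host wrapper: the runtime is created and started once per
// app-domain and shared by all callers, so the startup cost is paid once.
public static class WorkflowHost
{
    private static readonly WorkflowRuntime runtime = new WorkflowRuntime();

    static WorkflowHost()
    {
        runtime.StartRuntime();
    }

    public static WorkflowRuntime Runtime
    {
        get { return runtime; }
    }
}
```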

Workflow Runtime Services

Windows Workflow Foundation includes several services that provide important functionality to the runtime. This section describes the most important performance considerations related to these services.

Scheduler services

The workflow runtime requires a scheduler service to drive workflow execution. Windows Workflow Foundation comes with two different scheduler services that should be sufficient for most scenarios, but it is also possible to write one that better suits your requirements.

DefaultWorkflowSchedulerService

The DefaultWorkflowSchedulerService queues work items coming from the workflow runtime into a local queue before handing over execution to the managed thread pool. There is a performance counter available to monitor the number of workflows in the service queue currently waiting for a thread (Counter Name: Workflows Pending).

The managed thread pool consists of background threads with a default maximum of 25 worker threads per available processor. In addition, the minimum number of idle threads available is equal to the number of processors on the local machine. The DefaultWorkflowSchedulerService also controls the maximum number of items queued into the managed thread pool through the MaxSimultaneousWorkflows property.

All these settings are important performance factors that should be carefully tuned on any application. The default values for all these settings do not necessarily provide optimal performance for all applications.

In general, having too many running or idle threads can be a waste of resources, but in other cases, having too few threads can also cause performance problems. Finding the right balance is a performance-tuning issue that will vary depending on the application.
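
For example, the scheduler's concurrency limit can be raised in the runtime configuration section. The value 8 below is illustrative only; the right number is workload-specific, and the attribute name follows the service's constructor parameter.

```xml
<add type="System.Workflow.Runtime.Hosting.DefaultWorkflowSchedulerService,
     System.Workflow.Runtime, Version=3.0.0.0, Culture=neutral,
     PublicKeyToken=31bf3856ad364e35"
     maxSimultaneousWorkflows="8" />
```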

ManualWorkflowSchedulerService

The ManualWorkflowSchedulerService allows an application to start or resume workflow execution on a thread owned by the host application.

The workflow host application calls the RunWorkflow method to run a workflow to the next idle point or to completion.

The ManualWorkflowSchedulerService allows you to have total control of the threads that are running workflow instances in your application, so it is possible to change thread settings such as thread priority, manage number of active threads, and so on. This is the preferred scheduler service on ASP.NET/Web services scenarios.
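
A sketch of this pattern on a host-owned thread follows; ShoppingCartWorkflow is a placeholder workflow type.

```csharp
using System.Workflow.Runtime;
using System.Workflow.Runtime.Hosting;

public static class ManualHost
{
    public static void RunOnCallerThread()
    {
        WorkflowRuntime runtime = new WorkflowRuntime();

        // The manual scheduler never borrows thread-pool threads; the host
        // thread (for example, an ASP.NET request thread) drives execution.
        ManualWorkflowSchedulerService scheduler = new ManualWorkflowSchedulerService();
        runtime.AddService(scheduler);
        runtime.StartRuntime();

        WorkflowInstance instance = runtime.CreateWorkflow(typeof(ShoppingCartWorkflow));
        instance.Start();

        // Runs the instance on the current thread until it idles or completes.
        scheduler.RunWorkflow(instance.InstanceId);
    }
}
```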

Transaction services

The workflow runtime includes two different services used to perform transactional operations.

DefaultWorkflowCommitWorkBatchService

This is the default service being used by the workflow runtime for transaction support. The service uses the .NET Framework System.Transactions infrastructure to manage transactions, and therefore provides dynamic escalation (that is, it engages MSDTC only when it is actually required for a transaction).

SharedConnectionWorkflowCommitWorkBatchService

This service was created as a performance optimization of the default transaction service for applications using SQL Server to host the workflow persistence database. The service can be used only with the SqlWorkflowPersistenceService and SqlTrackingService services deployed on the same database. This allows the SharedConnectionWorkflowCommitWorkBatchService to share a SqlConnection at persistence points and avoid dynamic transaction escalation to DTC on SQL Server 2000. For this particular configuration, the SharedConnectionWorkflowCommitWorkBatchService provides optimal performance.

The DefaultWorkflowCommitWorkBatchService should be used in all other scenarios: when no SQL tracking service or SQL workflow instance data store is required, or when using the default SqlWorkflowPersistenceService and running under SQL Server 2005.

Persistence services

A persistence service is used to keep workflow instance data in a durable store, such as a SQL database. It is normally one of the most performance-critical parts of any workflow application with state management requirements.

SqlWorkflowPersistenceService

Windows Workflow Foundation comes with an out-of-box persistence service that stores workflow instance data in a SQL Server database.

Adding the SqlWorkflowPersistenceService to the workflow runtime creates some performance overhead: at workflow completion, a SQL stored procedure is executed to delete all instance-related data from the workflow persistence database. In addition, if the workflow instance has several persistence points, each one incurs a serialization/deserialization and compression cost, plus the cost of the SQL stored procedure calls that write or read the data.

When the SqlWorkflowPersistenceService starts, it loads all unblocked workflow instances from the database. Unblocked workflow instances are those that are not idle and, if the OwnershipTimeoutSeconds parameter is specified, are either owned by the current host or not owned by any host. Loading all unblocked instances can be an expensive operation, especially when there are many of them in the workflow persistence database. Similarly, when the workflow runtime stops, all active instances in memory are unloaded.

The UnloadOnIdle property in the SqlWorkflowPersistenceService class determines whether workflow instances are unloaded immediately when they become idle.

It is also possible to enable retries in case of error when retrieving the workflow instance state.

The SqlWorkflowPersistenceService provides scalable support for workflow timers. Timers can be configured to be pulled from the database on a regular basis through the LoadingInterval property (the LoadIntervalSeconds configuration parameter, specified in seconds). This can have a performance impact when multiple timers have to be retrieved from the database at once.

Finally, the SqlWorkflowPersistenceService provides workflow instance locking to support scenarios where multiple workflow hosts access the same workflow persistence database. To enable this scenario, the service has a startup parameter called OwnershipTimeoutSeconds, which tells the host how long it can own a given workflow instance. During this time, other hosts cannot access the instance because it is locked.
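
The settings above can be supplied through the service constructor as well as through configuration; a sketch with illustrative values follows, where "connectionString" points at the persistence database and "runtime" is the hosting WorkflowRuntime.

```csharp
using System;
using System.Workflow.Runtime.Hosting;

// Illustrative values: unload idle instances immediately, lock an instance
// for at most 60 seconds per host, and poll for expired timers every
// 120 seconds.
SqlWorkflowPersistenceService persistence = new SqlWorkflowPersistenceService(
    connectionString,
    true,                        // unloadOnIdle
    TimeSpan.FromSeconds(60),    // instanceOwnershipDuration (lock timeout)
    TimeSpan.FromSeconds(120));  // loadingInterval for expired timers
runtime.AddService(persistence);
```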

Tracking services

A workflow tracking service is used to monitor workflow and activity runtime execution events. The workflow runtime supports adding and using multiple different workflow tracking services.

SqlTrackingService

The SqlTrackingService writes workflow tracking event data to a SQL database.

It is important to monitor the Tracking database for disk consumption and growth. You should archive and clean up the Tracking database regularly. The SqlTrackingService provides support for creating multiple partitions, which make it easier to archive and drop old tracking data.

Follow these steps to enable SQL tracking partitioning and to archive or drop a partition:

  1. Set the PartitionOnCompletion property in SqlTrackingService to true. Partitioning at completion can have a higher impact on runtime performance, but it gives you the ability to quickly archive or drop tracking data whenever it is needed. Alternatively, you can use the PartitionCompletedWorkflowInstances SQL stored procedure to do batch partitioning on workflow applications that have a period of low usage or downtime.

  2. Set the partition interval by executing the following stored procedure:

    EXEC SetPartitionInterval 'd'
    

    (The argument "d" creates a new partition daily.)

  3. Run a backup script on the Tracking database or run a bcp command to archive all tracking data tables.

  4. Optionally, you can drop old partitions by calling the following stored procedure:

    EXEC DropPartition @PartitionId
    

    Note   You can get all inactive partitions by running the following query:

    SELECT * FROM dbo.TrackingPartitionSetName
    WHERE EndDateTime IS NOT NULL
    

The SqlTrackingService uses a default tracking profile that includes the following tracking points:

  • Activity execution status events: Initialized, Executing, Compensating, Canceling, Closed, and Faulting.
  • Workflow status events: Created, Completed, Running, Suspended, and Terminated.

The more events being tracked by the profile, the more overhead tracking has in your application.

The SqlTrackingService runs by default in batched mode (that is, the IsTransactional property is set to true), which means that workflow instance tracking events are written to the database in the same batch as the workflow instance state. This generally provides the best performance over the widest range of workflow designs. However, it can have a negative performance impact if a workflow runs many activities without persisting while all activity events are being tracked. For example, a WhileActivity that iterates many times without persisting accumulates a large batch of tracking records, and when persistence finally occurs, flushing them to the database may take some time. It may be necessary to design the workflow with a persistence point somewhere in the body of the WhileActivity to avoid such a scenario.

The SqlTrackingService can also run in non-batched mode (that is, the IsTransactional property is set to false) so data is flushed to the database for each tracking event.

Modifying the tracking profile to track members in a workflow has some performance implications. It involves reflection calls to obtain the object, binary serialization of the object, and finally an extra SQL call to write the object to the tracking database. Due to this additional work, it is important to find a balance between the amount of data that needs to be tracked and the impact on the system as a whole.

Workflow Performance Configuration Settings

All of the following are important performance-related settings that you can use to tune your workflow application. Most of them can be set through class properties, an application configuration file, or a class constructor.

  • EnablePerformanceCounters (WorkflowRuntimeSection): true / false. Enables or disables workflow performance counters.
  • ValidateOnCreate (WorkflowRuntimeSection): true / false. Enables or disables workflow activity validation on first execution.
  • EnableRetries (DefaultWorkflowCommitWorkBatchService, SharedConnectionWorkflowCommitWorkBatchService): true / false. Enables transaction retries on error at persistence points.
  • MaxSimultaneousWorkflows (DefaultWorkflowSchedulerService): Integer (4 * number of processors). Sets the maximum number of workflows being executed simultaneously.
  • EnableRetries (SqlWorkflowPersistenceService, SqlTrackingService): true / false. Enables SQL query retries on error.
  • LoadIntervalSeconds (SqlWorkflowPersistenceService): Integer, in seconds (120). Frequency of timer loading.
  • IsTransactional (SqlTrackingService): true / false. Specifies whether workflow instance tracking events should be batched together.
  • UseDefaultProfile (SqlTrackingService): true / false. Specifies whether the default tracking profile should be used.
  • PartitionOnCompletion (SqlTrackingService): true / false. Specifies whether instances should be moved to a partition when they complete. They can always be moved later by running the PartitionCompletedWorkflowInstances stored procedure.
  • OwnershipTimeoutSeconds (SqlWorkflowPersistenceService): Integer, in seconds (0). Specifies the maximum number of seconds a host can own an instance.
  • DisableWorkflowDebugging (System.Diagnostics switch in a workflow configuration file): true / false. Enables or disables workflow debugging.

Note   Values shown in parentheses are the default values for those properties.
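
For reference, several of these settings might appear together in an application configuration file. The fragment below is illustrative only; the section name, service registration, and attribute casing follow the WF configuration schema, and the values are examples, not recommendations.

```xml
<configuration>
  <configSections>
    <section name="WorkflowRuntime"
             type="System.Workflow.Runtime.Configuration.WorkflowRuntimeSection,
                   System.Workflow.Runtime, Version=3.0.0.0, Culture=neutral,
                   PublicKeyToken=31bf3856ad364e35" />
  </configSections>
  <WorkflowRuntime Name="SampleRuntime"
                   EnablePerformanceCounters="false"
                   ValidateOnCreate="false">
    <Services>
      <add type="System.Workflow.Runtime.Hosting.SqlWorkflowPersistenceService,
                 System.Workflow.Runtime, Version=3.0.0.0, Culture=neutral,
                 PublicKeyToken=31bf3856ad364e35"
           UnloadOnIdle="true"
           OwnershipTimeoutSeconds="60"
           LoadIntervalSeconds="120"
           EnableRetries="true" />
    </Services>
  </WorkflowRuntime>
</configuration>
```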

Scenario-based Test Results

This section describes three important workflow scenarios, including performance considerations and test results.

Scenario Deployment Topology

The following diagrams show the three different deployment topologies that were used in all the tests described in this document.

Figure 4. Deployment topology for self-hosted workflow testing

Figure 5. Deployment topology for standard workflow web service testing

Figure 6. Deployment topology for scale-out workflow web service testing

Shopping cart web service scenario

The shopping cart scenario is an ASP.NET Web service that can be used from a commerce Web site to manage a user's shopping cart.

The Web service consists of the following methods:

  • Guid CreateUserBasket ()
  • Guid AddListItemToBasket (int itemId, int quantity, decimal listPrice)
  • Guid RemoveListItemFromBasket (int itemId)
  • decimal ReviewOrder ()
  • decimal CheckoutOrder ()

A workflow will be used to implement the Web service. The different implementation options and their performance implications are discussed in the following sections.

This test shows the following key workflow performance characteristics:

  • General workflow modeling considerations to reduce state size and improve performance.
  • Activity Execution Context cloning performance impact.
  • SqlWorkflowPersistenceService considerations for scale-out deployments.
  • State machine workflow performance characteristics.
  • Performance impact of unload/load pattern versus persistence and keeping the workflow instance in-memory.

Implementation 1

In this first implementation, we have a persistence point right after each shopping cart update. The following figure shows the scenario flow in this case:

Figure 7. Scenario data flow with persistence

This model allows us to have the workflow in memory most of the time and avoid data loss in case of process shutdown.

It is important to use a workflow service that unloads workflow instances when memory is low or after the instance has been idle for some time.

The following picture shows the workflow view in the workflow designer:

Figure 8. Workflow designer view of shopping cart web service scenario (Implementation 1)

The PersistOnCloseActivity (Activity 1 in the figure above) is a custom activity that is decorated with the PersistOnCloseAttribute.

[PersistOnClose]
public partial class Activity1: SequenceActivity
{
    public Activity1()
    {
        InitializeComponent();
    }
}

The EventHandlingSequenceActivity (Activity 2 in the figure above) is a custom activity that has the following implementation:

[PersistOnClose]
public partial class EventHandlingSequence : SequenceActivity, IEventActivity
{
    public EventHandlingSequence()
    {
        InitializeComponent();
    }

    #region IEventActivity Members
    public IComparable QueueName
    {
        get { return ((IEventActivity)EnabledActivities[0]).QueueName; }
    }

    public void Subscribe(ActivityExecutionContext parentContext, IActivityEventListener<QueueEventArgs> parentEventHandler)
    {
        ((IEventActivity)EnabledActivities[0]).Subscribe(parentContext, parentEventHandler);
    }

    public void Unsubscribe(ActivityExecutionContext parentContext, IActivityEventListener<QueueEventArgs> parentEventHandler)
    {
        ((IEventActivity)EnabledActivities[0]).Unsubscribe(parentContext, parentEventHandler);
    }
    #endregion
}

The reason for using this custom activity that implements IEventActivity is to allow re-execution of the WebServiceInputActivity activities inside it. In this case, there is a small performance penalty related to AEC cloning.

Implementation 2

This implementation uses a ListenActivity inside a WhileActivity to allow re-execution of the WebServiceInputActivity activities. It has lower performance than Implementation 1 due to the more complex AEC cloning needed in the WhileActivity.

The following figure shows the workflow:

Figure 9. Workflow designer view of shopping cart web service scenario (Implementation 2)

Both implementations 1 and 2 above can only be used in scale-out deployments with session affinity (that is, all requests coming from a browser session are handled by the same workflow runtime). The next implementation does not have this limitation.

Implementation 3

In this implementation of the shopping cart Web service, the same workflow model from the first implementation is used, but instead of forcing persistence within the custom activity, the runtime unloads the workflow instance when it becomes idle. To enable this behavior, the SqlWorkflowPersistenceService.UnloadOnIdle property is set to true.

The following figure shows the scenario flow in this case:

Figure 10. Scenario data flow with the UnloadOnIdle property set to true

In this case, the workflow instance state is loaded from the workflow persistence database for each request after the first one. This design has more performance overhead compared to the previous implementation, but makes the scenario easier to implement and allows scale-out deployments that do not require application session affinity.

The following SqlWorkflowPersistenceService configuration is being used in this case:

<add type="System.Workflow.Runtime.Hosting.SqlWorkflowPersistenceService, 
System.Workflow.Runtime, Version=3.0.0.0, Culture=neutral, 
PublicKeyToken=31bf3856ad364e35" UnloadOnIdle="true" 
OwnershipTimeoutSeconds="60" />

Implementation 4

The shopping cart Web service can be implemented as a State Machine workflow with three states:

Figure 11. Workflow designer view of shopping cart web service scenario (Implementation 4)

Each of the event-driven activities in the ModifyBasketState has the same logic as in the sequential workflow implementation and includes a custom activity with the PersistOnClose attribute.

As shown in the performance results later, this implementation adds more performance overhead because the workflow has more activities than the workflow in the sequential workflow implementation, including the SetState activities to switch to a new state.

Shopping cart scenario performance test results

In this section, the performance test results for the different implementations of the shopping cart scenario are provided.

The performance test driver calls the Web service from a client machine, simulating up to 50 concurrent users performing the following operations:

Method name Number of method calls per user session
CreateUserBasket 1
AddListItemToBasket 2
RemoveListItemFromBasket 1
ReviewOrder 1
CheckoutOrder 1

There are a total of six Web service calls per user session.

The following table shows the number of persistence points and workflow loads/unloads per instance with the different implementations and the user profile above.

Implementation Persistence points Load points Unload points
Implementations 1 and 2 6* 0 0
Implementation 3 6* 5 5
Implementation 4 6* 0 0

* Includes a persistence point at workflow completion to delete workflow instance data.

The following table shows the throughput and total % CPU usage while running these tests on the second configuration deployment (Figure 5) specified in the deployment topology section above.

Implementation Web service requests/sec Total % CPU WS server Total % CPU SQL server
Implementation 1 168.3 92.9 10.8
Implementation 2 113.4 93.9 9.2
Implementation 3 92.7 92.8 9.0
Implementation 4 114.8 94.3 8.1

The following table shows the maximum workflow state persistence size for the four different implementations.

Implementation Max workflow state size
Implementation 1 9.59 KB
Implementation 2 10.47 KB
Implementation 3 8.63 KB
Implementation 4 12.63 KB

These four scenario implementations can be used on scale-out deployments similar to the third configuration in the Scenario Deployment Topology section (Figure 6). The following chart shows the throughput that can be achieved by increasing the number of servers in the Web farm.

Figure 12. Shopping cart web service scenario scale-out test results

Document review scenario

The document review scenario involves controlling a document approval/review process where multiple individuals need to submit review feedback. It supports dynamically adding new review participants and delegation.

The following figure shows the workflow designer view for this scenario.

Figure 13. Workflow designer view of the document review scenario

The ReplicatorActivity in the figure above (replicator1) creates as many task activities as the number of participants in the review and executes them in parallel rather than sequentially. Therefore, the more participants in the document review, the larger the workflow state and the higher the serialization and persistence overhead.
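As a sketch (the handler and field names below are assumed, not taken from the scenario code), this parallel behavior corresponds to setting the replicator's ExecutionType to Parallel and seeding one child activity per participant in the Initialized handler:

```csharp
// Sketch: with replicator1.ExecutionType set to ExecutionType.Parallel in the
// designer, the Initialized handler seeds one child activity per participant.
// The reviewParticipants field is an assumption for illustration.
private void replicator1_Initialized(object sender, EventArgs e)
{
    foreach (string participant in this.reviewParticipants)
    {
        ((ReplicatorActivity)sender).CurrentChildData.Add(participant);
    }
}
```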

This test shows the following key workflow performance characteristics:

  • Workflow throughput characteristics of a scenario that requires frequent instance loading/unloading with a medium-size workflow instance state.
  • The performance impact of adding SQLTrackingService with different settings.

Document review scenario performance test results

This test simulates a document review with three review participants. The document owner submits a message (DocumentReviewStarted) and waits for the feedback of three other review participants (DocumentReviewCompletion). Each participant has a limited time to complete the review and the document owner receives a notification when the review has completed.

For this scenario, the default SqlWorkflowPersistenceService is used and the workflow runtime unloads workflow instances immediately after they become idle. The test driver is a simple console application that also acts as the workflow runtime host. The driver sends DocumentReviewCompletion messages only after the workflow instance has been successfully unloaded; when that happens, the instance is loaded back into memory and the workflow executes.

The DocumentReviewStarted message is sent after creating the workflow instance, but before starting it, and therefore an unload point is avoided:

WorkflowInstance inst = this._container.CreateWorkflow(typeof(DLC.Workflow1));
// Raise DocumentReviewStarted event before starting the workflow instance.
this._docImpl.RaiseOnReviewStarted(onReviewStartedEventArgs); 
inst.Start();

The following table shows the test throughput results when running on the configuration 1 setup specified in the Scenario Deployment Topology section (Figure 4).

Test Name Messages/sec Workflows executed/sec Unload-Load points/sec Total % CPU WF Total % CPU SQL
Document Review (3 participants) 76.2 19.05 57.13 93.9 7.05
Document Review (3 participants) + SQL Default Tracking Settings 61.2 15.3 45.9 92 32.5
Document Review (3 participants) + SQL Tracking non-batched mode 55.2 13.8 41.31 87.8 41.25

Employee level assignment scenario with the WF Rules Engine

The employee level assignment scenario uses a set of rules to choose an employee's level based on certain input parameters.

A RuleSet is generally assumed to be a declarative construct, in that it should not matter what order the rules are executed in because the results should be the same. The following example uses a set of rules to determine what job position level a candidate is eligible for.

Rule Condition
R01 IF Experience == "low" THEN Position="Intern"
R02 IF Experience == "fair" THEN Position="Junior"
R03 IF Experience == "good" THEN Position="Senior"
R04 IF Education == "incomplete" THEN Experience="low"
R05 IF Education == "good" AND YearsWorked > 5 THEN Experience="good"
R06 IF Education == "good" AND YearsWorked <= 5 THEN Experience="fair"
R07 IF Education == "high" AND YearsWorked > 2 THEN Experience="good"
R08 IF Education == "high" AND YearsWorked <= 2 THEN Experience="fair"
R09 IF Degree == "PhD" THEN Education="high"
R10 IF Degree == "Bachelors" OR Degree == "Masters" THEN Education="good"
R11 IF Degree == "None" THEN Education="incomplete"

Assume that the candidate has a Masters degree (Degree == "Masters") and has worked in the industry for 3 years (YearsWorked == 3). Everything else is unknown.

If you trace the execution of this rule set in order from R01 through R11, you will see the following behavior:

  • R01 is false, and does not execute.
  • R02 is false, and does not execute.
  • R03 is false, and does not execute.
  • R04 is false, and does not execute.
  • R05 is false, and does not execute.
  • R06 is false, and does not execute.
  • R07 is false, and does not execute.
  • R08 is false, and does not execute.
  • R09 is false, and does not execute.
  • R10 is true, which sets the Education property to "good." This causes forward chaining to re-evaluate any rules whose condition depends on the Education property, so execution of the RuleSet resets to R04.
  • R04 re-evaluates to false, and does not execute.
  • R05 re-evaluates to false, and does not execute.
  • R06 re-evaluates to true, which sets the Experience property to "fair." This causes more forward chaining to re-evaluate any rules whose condition depends on the Experience property, so execution of the RuleSet resets to R01.
  • R01 re-evaluates to false, and does not execute.
  • R02 re-evaluates to true, which sets the Position property to "Junior."
  • R03 re-evaluates to false, and does not execute.
  • R04, R05, and R06 are skipped because they have already been evaluated and their dependencies have not changed.
  • R07 re-evaluates to false, and does not execute.
  • R08 re-evaluates to false, and does not execute.
  • R09 and R10 are skipped because they have already been evaluated and their dependencies have not changed.
  • R11 is false, and does not execute.

At this point, the RuleSet has computed that the candidate's education is "good," his experience is "fair," and that he should be given a "Junior" position. However, as the trace above shows, it took 19 rule evaluations to come to that conclusion.

The problem with the RuleSet is that some rules were evaluated before enough information had been computed, and thus, they had to be re-evaluated when that information became available.

Priorities can be assigned to rules to provide some control over the order in which they are executed. Rules with higher priority numbers are executed before rules that have lower priority numbers. In the following example, priorities are assigned to the RuleSet to minimize (or in this case even eliminate) premature evaluation of rules.

Priority Rule Condition
2 R09 IF Degree == "PhD" THEN Education="high"
2 R10 IF Degree == "Bachelors" OR Degree == "Masters" THEN Education="good"
2 R11 IF Degree == "None" THEN Education="incomplete"
1 R04 IF Education == "incomplete" THEN Experience="low"
1 R05 IF Education == "good" AND YearsWorked > 5 THEN Experience="good"
1 R06 IF Education == "good" AND YearsWorked <= 5 THEN Experience="fair"
1 R07 IF Education == "high" AND YearsWorked > 2 THEN Experience="good"
1 R08 IF Education == "high" AND YearsWorked <= 2 THEN Experience="fair"
0 R01 IF Experience == "low" THEN Position="Intern"
0 R02 IF Experience == "fair" THEN Position="Junior"
0 R03 IF Experience == "good" THEN Position="Senior"

Now instead of executing the rules from top to bottom, they will be executed in priority order. The Priority 2 rules are executed first, followed by the Priority 1 rules and then the Priority 0 rules. The behavior of the RuleSet is as follows:

  • R09 is false, and does not execute.
  • R10 is true, which sets the Education property to "good." This causes forward chaining to re-evaluate any rules whose condition depends on the Education property; however, because no dependent rules appear before this one, execution proceeds to R11.
  • R11 is false, and does not execute.
  • R04 is false, and does not execute.
  • R05 is false, and does not execute.
  • R06 is true, which sets the Experience property to "fair." This causes more forward chaining to evaluate any rules whose condition depends on the Experience property; however, because no dependent rules appear before this one, execution proceeds to R07.
  • R07 is false, and does not execute.
  • R08 is false, and does not execute.
  • R01 is false, and does not execute.
  • R02 is true, which sets the Position property to "Junior."
  • R03 is false, and does not execute.

At this point, every rule has been evaluated and every dependency has been satisfied. This priority-driven RuleSet provides the same result as the original RuleSet: the candidate's education is "good," his experience is "fair," and he will be given a "Junior" position. But this time it took only 11 rule evaluations and no rule was evaluated more than once.

Also, for this RuleSet, forward chaining is not required. The RuleSet could have been executed in priority order and achieved the same result. To signify this, the RuleSet's ChainingBehavior property can be set to RuleChainingBehavior.None. This setting will improve performance by skipping the dependency and side-effect computations. The priority then becomes extremely important, because if the rules are executed in any order other than priority order, they will not compute the correct result.
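The settings above can also be applied through the rules object model. The following sketch assumes a hypothetical LoadEmployeeLevelRuleSet helper that returns the RuleSet shown earlier, and a hypothetical Candidate type holding the Degree, YearsWorked, Education, Experience, and Position members:

```csharp
// Sketch: assign priorities and disable forward chaining on the rule set.
// LoadEmployeeLevelRuleSet and Candidate are hypothetical names.
RuleSet ruleSet = LoadEmployeeLevelRuleSet();
ruleSet.ChainingBehavior = RuleChainingBehavior.None; // rely on priority order only
ruleSet.Rules["R09"].Priority = 2;
ruleSet.Rules["R10"].Priority = 2;
ruleSet.Rules["R11"].Priority = 2;
// R04 through R08 get priority 1; R01 through R03 keep the default priority 0.

// Execute the rule set against a candidate instance.
Candidate candidate = new Candidate("Masters", 3);
RuleValidation validation = new RuleValidation(typeof(Candidate), null);
RuleExecution execution = new RuleExecution(validation, candidate);
ruleSet.Execute(execution);
```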

Job position level assignment test results

The execution time of the RuleSet above was measured by running it in a loop 100 times using the RulesEngine class with different full-chaining and priority settings. The third column in the table below shows the average execution time over 100 rule set executions.

Full chaining(1) Priorities(2) Avg Execution time (ms)
Yes No 235.5
Yes Yes 192.7
No Yes 111.1

(1) Full chaining: The actions of one rule cause dependent rules to be reevaluated.

(2) Priorities: Rules have different priorities that govern the order in which they are evaluated.

Performance Case Studies

The following is a series of seven tests that show some of the key performance characteristics of Windows Workflow Foundation. These characteristics are shown graphically to illustrate the effect of each variable within each test.

The first deployment configuration in the Scenario Deployment Topology section (Figure 4) is being used for all these tests.

The objective for this series of tests is to show how different performance metrics, such as execution time, workflow persistence size, and memory footprint, change depending on several variables. The tests should help you decide between different modeling options that can have a big impact on the performance of your workflow application.

Workflow Execution Time versus number of CodeActivity activities

This test measures the workflow execution time when the number of activities is increased. The graphs below show the execution time of the CreateWorkflow and WorkflowInstance.Start methods with and without activity validation. It also shows that the workflow instance creation time is significantly reduced after the first execution of the same type. All assemblies used in this test are pre-jitted.

Figure 14. WorkflowInstance.Start execution time

Figure 15. CreateWorkflow execution time

Memory versus number of CodeActivity activities

This test measures memory cost as the number of CodeActivity activities increases, showing logarithmic growth in memory usage.

Figure 16. Peak Working set versus number of activities

State versus number of ReplicatorActivity instances

This test measures the workflow serialized state size with an increasing number of ReplicatorActivity instances. The state size grows linearly with the number of ReplicatorActivity instances.

Figure 17. Workflow instance state size

Workflow state size versus Serialization time

This test shows the relationship between workflow state and latency. The first graph shows quadratic growth in serialization/de-serialization and compression times. The de-serialization time is slightly greater than serialization time. The second graph shows Load and Unload times to a local SQL persistence database. The comparison between these two graphs shows that the persistence cost (that is, the cost of saving and loading the workflow from the database) is negligible compared to the serialization and de-serialization cost.

Figure 18. Workflow instance serialization time

Figure 19. Workflow instance load time

Activity Execution Context cloning cost

This test measures execution time of a workflow with a WhileActivity activity that runs a CodeActivity activity N times (where N=1 to 1001). We compare this test with a previous test where we executed sequentially N CodeActivity activities. The purpose of this test is to show the performance overhead of activity re-execution that involves AEC cloning compared to purely sequential execution.

Figure 20. Workflow Activity Execution Context cloning cost

State & execution time versus ParallelActivity branches

This test measures the serialized state size and execution times with an increasing number of ParallelActivity branches, each containing one CodeActivity activity. The results show quadratic growth in execution time as the number of ParallelActivity branches increases, while the state size grows linearly.

Figure 21. Workflow instance execution time versus ParallelActivity branches

Figure 22. Workflow instance state size versus ParallelActivity branches

Memory versus Workflow types

This test measures memory usage as the number of unique workflow types increases. The results show a linear growth in memory usage when the number of unique workflow types increases.

Figure 23. Peak Working Set versus number of unique workflow types in memory

Component-level Performance

The following are component-level performance tests that should help you understand the performance characteristics of different workflow activities and modeling constructs. The test result numbers give quantitative insights and corroborate the qualitative assertions in the Major Performance Considerations section. Deployment configurations 1 and 2 in the Scenario Deployment Topology section (Figure 4 and 5) are being used for all these tests.

By showing you the relative performance of these individual components, this document aims to help you improve the performance of your particular workflow application.

Empty workflow

This test measures the execution throughput of an empty sequential workflow using the minimum default services and settings. This is the simplest workflow possible, and therefore it provides the throughput upper-limit bound for workflow execution.

Figure 24. Empty workflow designer view

Single CodeActivity workflow

This test measures the execution throughput of a workflow with a single CodeActivity activity. The CodeActivity is one of the simplest default activities that has a single dependency property called ExecuteCodeEvent. The CodeActivity.Execute method raises an event tied to a handler in the code-beside that immediately returns. This scenario shows the minimum execution overhead of adding a simple activity.

Figure 25. Single CodeActivity workflow designer view

The following is the code-beside code that is used in this test.

public sealed partial class Workflow1 : SequentialWorkflowActivity
{
    public Workflow1()
    {
        InitializeComponent();
    }
    private void codeActivity1_ExecuteCode(object sender, EventArgs e)
    {
    }
}

HandleExternalEventActivity workflow

This test measures the execution throughput of a workflow using the HandleExternalEventActivity activity to receive data. The code snippet below shows an interface decorated with the ExternalDataExchange attribute, which is used to facilitate communication between a host application and the workflow instance.

The purpose of this test is to show the minimum performance overhead caused by the use of HandleExternalEventActivity activities.

Figure 26. HandleExternalEventActivity workflow designer view

The following is the code-beside code that is used in this test.

[Serializable]
public class MyEventArgs : ExternalDataEventArgs
{
    private Guid id;
    public Guid Id
    {
        get { return this.id; }
    }
    public MyEventArgs(Guid id)
        : base(id)
    {
        this.id = id;
    }
}
[ExternalDataExchange]
public interface IMyCustomInterface
{
    event EventHandler<MyEventArgs> MyReceive;
    void MySendOperation(Guid id);
}
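On the host side, such an interface is typically exposed through an ExternalDataExchangeService. In the sketch below, MyCustomService is a hypothetical class implementing the IMyCustomInterface shown above; the workflow's HandleExternalEventActivity receives data when the service raises MyReceive.

```csharp
// Sketch: register a local service for the data-exchange interface above.
// MyCustomService is a hypothetical implementation of IMyCustomInterface.
ExternalDataExchangeService dataExchangeService = new ExternalDataExchangeService();
workflowRuntime.AddService(dataExchangeService);
dataExchangeService.AddService(new MyCustomService());
// The service delivers data to a waiting workflow instance by raising
// MyReceive with a MyEventArgs constructed from the instance ID.
```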

CallExternalMethodActivity workflow

This test measures the execution throughput of a workflow using a CallExternalMethodActivity activity to send data to a local service. The activity uses the same communication interface defined in the previous test.

Figure 27. CallExternalMethodActivity workflow designer view

Web service publication workflow

This test measures the execution throughput of a basic workflow exposed as a Web service method using a WebServiceInputActivity activity and a WebServiceOutputActivity activity.

Figure 28. Web Service publication workflow designer view

The following is the code-beside code that is used in this test.

public sealed partial class Workflow1 : SequentialWorkflowActivity
{
    private Random myRnd = new Random();

    public Workflow1()
    {
        InitializeComponent();
    }
    public Int32 myQuote = 0;

    public interface IWebServiceCommunications
    {
        Int32 GetStockQuote();
    }
    private void initialized(object sender, EventArgs e)
    {
        myQuote = myRnd.Next(100);
    }
}

Web service consumption workflow

This test measures the execution throughput of a basic workflow that calls a local Web service using an InvokeWebServiceActivity activity.

Figure 29. Web service consumption workflow designer view

The following is the code-beside code that is used in this test.

public sealed partial class Workflow1: SequentialWorkflowActivity
{
    public Workflow1()
    {
        InitializeComponent();
    }
    public Int32 myStockQuote = default(System.Int32);
}

ReplicatorActivity workflow

This test measures the execution throughput of a workflow with a ReplicatorActivity activity executing five instances of a CodeActivity activity in parallel.

Figure 30. ReplicatorActivity workflow designer view

The following is the code-beside code that is used in this test.

public sealed partial class Workflow1 : SequentialWorkflowActivity
{
    private Int32 nIterations = 0;
    public Workflow1()
    {
        InitializeComponent();
    }
    private void OnReplicatorInitialized(object sender, EventArgs e)
    {
        for (int j = 0; j < 5; j++)
        {
            ((ReplicatorActivity)sender).CurrentChildData.Add(j);
        }
    }
    public void code1_CodeHandler(object sender, EventArgs e)
    {
        nIterations++;
    }
}

ConditionedActivityGroup workflow

This test measures the execution throughput of a workflow with a ConditionedActivityGroup executing a CodeActivity activity five times.

Figure 31. ConditionedActivityGroup workflow designer view

The following is the code-beside code that is used in this test.

public sealed partial class Workflow1 : SequentialWorkflowActivity
{
    private Int32 nIterations = 0;
    public Workflow1()
    {
        InitializeComponent();
    }
    public void CheckUntilCondition(object sender, ConditionalEventArgs e)
    {
        e.Result = (nIterations >= 5);
    }
    public void code1_CodeHandler(object sender, EventArgs e)
    {
        nIterations++;
    }
    public void CheckExecutionRestriction(object sender, ConditionalEventArgs e)
    {
        e.Result = true;
    }
}

Dynamic update workflow

This test measures the execution throughput of a dynamically updated workflow that runs five code activities sequentially.

The following is the code-beside code that is used in this test.

public sealed partial class Workflow1 : SequentialWorkflowActivity
{
    public Workflow1()
    {
        InitializeComponent();
    }
    private void ApplyDynamicUpdate(object sender, EventArgs e)
    {
        WorkflowChanges tx = new WorkflowChanges(this);
        for  (int i = 1; i < 5; i++)
        {
            CodeActivity myCode = new CodeActivity();
            myCode.Name = "Code" + i.ToString();
            myCode.ExecuteCode += new EventHandler(myCodeHandler);
            tx.TransientWorkflow.Activities.Add(myCode);
        }
        this.ApplyWorkflowChanges(tx);
    }
    public void myCodeHandler(object sender, EventArgs e)
    {
    }
}

WhileActivity workflow with 1000 iterations

This test measures the execution throughput of a workflow containing a WhileActivity activity that iterates 1000 times.

Figure 32. WhileActivity workflow designer view

The following is the code-beside code that is used in this test.

public sealed partial class Workflow1 : SequentialWorkflowActivity
{
    private Int32 myCount = 0;
    public Workflow1()
    {
        InitializeComponent();
    }
    public void checkCondition(object sender, ConditionalEventArgs e)
    {
        e.Result = (myCount < 1000);
    }
    public void code1_ExecuteCode(object sender, EventArgs e)
    {
        myCount++;
    }
}

CodeActivity workflow running a while loop with 1000 iterations

This test measures the execution throughput of a workflow containing a CodeActivity activity that executes a while loop with 1000 iterations.

Figure 33. CodeActivity workflow designer view

The following is the code-beside code that is used in this test.

public sealed partial class Workflow1 : SequentialWorkflowActivity
{
    private Int32 nExecutions = 0;
    public Workflow1()
    {
        InitializeComponent();
    }
    public void whileLoop_CodeHandler(object sender, EventArgs e)
    {
        while (nExecutions < 1000)
        {
            nExecutions++;
        }
    }
}

PolicyActivity workflow

This test measures the execution throughput of a workflow executing a simple rule set within a PolicyActivity.

Figure 34. PolicyActivity workflow designer view

The following are the rule definitions used in this test.

Rule1:
Condition:
this.OrderValue > 500 && this.Customer == PolicyWorkflow.CustomerType.Residential
Then Actions:
this.Discount = 5
Settings:
Priority=0; Reevaluation=Never; Chaining=Full Chaining
Rule2:
Condition:
this.OrderValue > 10000 && this.Customer == PolicyWorkflow.CustomerType.Business
Then Actions:
this.Discount = 10
Settings:
Priority=0; Reevaluation=Never; Chaining=Full Chaining

TransactionScopeActivity workflow

This test measures the execution throughput of a workflow with a TransactionScopeActivity activity that executes a CodeActivity activity. The purpose of this test is to show the minimum performance overhead caused by the use of TransactionScopeActivity activities.

Figure 35. TransactionScopeActivity workflow designer view

The following is the code-beside code that is used in this test.

public sealed partial class Workflow1 : SequentialWorkflowActivity
{
    public Workflow1()
    {
        InitializeComponent();
    }
    Int32 count = 0;
    private void code2_ExecuteCode(object sender, EventArgs e)
    {
        count++;
    }
}

State Machine workflow

This test measures the execution throughput of a State Machine workflow with five different states. For every state except the last one, the StateInitializationActivity activity causes a transition to the next state.

Figure 36. StateMachine workflow designer view

Sequential workflow with five CodeActivity activities

This test measures the execution throughput of a SequentialWorkflow with five CodeActivity activities bound to a single event handler. The purpose of this test is to compare the performance of a SequentialWorkflow with only sequential execution (no AEC cloning required) and the performance of a StateMachine workflow like the one in the previous section, where we also have five sequential transitions.

Figure 37. Sequential workflow designer view

The following is the code-beside code that is used in this test.

public sealed partial class Workflow1 : SequentialWorkflowActivity
{
    public Workflow1()
    {
        InitializeComponent();
    }
    private void codeActivity1_ExecuteCode(object sender, EventArgs e)
    {
    }
}

Sequential workflow simulating a State Machine workflow

This test measures the execution throughput of a SequentialWorkflow implementing a finite state machine with five different states. The five different states are executed sequentially, similarly to the two previous tests.

Figure 38. Sequential workflow simulating a State Machine workflow

The following is the code-beside code that is used in this test.

public sealed partial class Workflow1 : SequentialWorkflowActivity
{
    public Workflow1()
    {
        InitializeComponent();
    }
    public Int16 state = 1;

    public void whileCondition(object sender, ConditionalEventArgs e)
    {
        e.Result = this.state <= 5;
    }
    public void code1_ExecuteCode(object sender, EventArgs e)
    {
        this.state = 2;
    }
    public void code2_ExecuteCode(object sender, EventArgs e)
    {
        this.state = 3;
    }
    public void code3_ExecuteCode(object sender, EventArgs e)
    {
        this.state = 4;
    }
    public void code4_ExecuteCode(object sender, EventArgs e)
    {
        this.state = 5;
    }
    public void code5_ExecuteCode(object sender, EventArgs e)
    {
        this.state = 6;
    }
    public void state1Condition(object sender, ConditionalEventArgs e)
    {
        e.Result = this.state == 1;
    }
    public void state2Condition(object sender, ConditionalEventArgs e)
    {
        e.Result = this.state == 2;
    }
    public void state3Condition(object sender, ConditionalEventArgs e)
    {
        e.Result = this.state == 3;
    }
    public void state4Condition(object sender, ConditionalEventArgs e)
    {
        e.Result = this.state == 4;
    }
    public void state5Condition(object sender, ConditionalEventArgs e)
    {
        e.Result = this.state == 5;
    }
}

Compensation workflow

This test measures the execution throughput of a workflow that executes compensation logic after an exception is thrown in the workflow.

Figure 39. Compensation workflow

The following is the code-beside code that is used in this test.

public sealed partial class Workflow1 : SequentialWorkflowActivity
{
    public int i = 0;
    // InvalidOperationException tied to throwActivity1
    public InvalidOperationException myException = new InvalidOperationException();
    public Workflow1()
    {
        InitializeComponent();
    }
    private void codeActivity1_ExecuteCode(object sender, EventArgs e)
    {
        i = 1;
    }
    private void codeActivity2_ExecuteCode(object sender, EventArgs e)
    {
        i = 2;
    }
}
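This whitepaper does not include the test harness itself. As a rough illustration of how such a throughput test can be driven, the following is a minimal host sketch: it starts a fixed number of workflow instances, waits for all of them to complete, and reports completions per second. This is an assumption about the test shape, not the actual harness used to produce the numbers in the next section, and it omits the scheduler and tuning settings discussed earlier in this document.

```csharp
// Hypothetical minimal throughput host (requires a reference to
// System.Workflow.Runtime). Not the harness used for the published results.
using System;
using System.Diagnostics;
using System.Threading;
using System.Workflow.Runtime;

class ThroughputHost
{
    static void Main()
    {
        const int iterations = 10000;
        int completed = 0;

        using (WorkflowRuntime runtime = new WorkflowRuntime())
        {
            AutoResetEvent done = new AutoResetEvent(false);

            // Count completions; release the main thread after the last one.
            runtime.WorkflowCompleted += delegate
            {
                if (Interlocked.Increment(ref completed) == iterations)
                    done.Set();
            };
            runtime.WorkflowTerminated += delegate { done.Set(); };
            runtime.StartRuntime();

            Stopwatch watch = Stopwatch.StartNew();
            for (int n = 0; n < iterations; n++)
            {
                runtime.CreateWorkflow(typeof(Workflow1)).Start();
            }
            done.WaitOne();
            watch.Stop();

            Console.WriteLine("Workflows/sec: {0:F0}",
                iterations / watch.Elapsed.TotalSeconds);
        }
    }
}
```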

Performance test results

The following table provides the results for the tests described in this section.

Test                                                     Avg. Workflows Completed/sec   Avg. % Total CPU (Workflow)
----                                                     ----------------------------   ---------------------------
Empty Workflow                                           10308                          99.3
Single CodeActivity Workflow                             6547                           99.9
HandleExternalEventActivity Workflow                     3939                           99.9
CallExternalMethodActivity Workflow                      5884                           99.96
Web Service Publication Workflow                         1401                           98.4
Web Service Consumption Workflow                         1306                           99.8
ReplicatorActivity Workflow                              406                            97.9
ConditionedActivityGroup Workflow                        224                            99.5
Dynamic Update Workflow                                  136                            99.89
While activity with 1000 iterations                      2.4                            99.6
Code activity of while loop with 1000 iterations         6449                           99.9
PolicyActivity Workflow                                  142                            99.8
TransactionalScopeActivity Workflow                      299                            90.9
StateMachine Workflow                                    283                            98.3
Sequential Workflow with five CodeActivity activities    2750                           99.9
Sequential Workflow simulating a StateMachine Workflow   173                            99.8
Compensation Workflow                                    210                            96.7
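Throughput rates are sometimes easier to compare as approximate per-workflow cost, obtained by dividing one second by the completion rate. The following snippet, which is illustrative only and not part of the original tests, performs that conversion for two rows from the table above.

```csharp
// Convert completions-per-second (from the table above) into approximate
// milliseconds per workflow. Illustrative arithmetic only.
using System;

class PerWorkflowCost
{
    static void Main()
    {
        double emptyRate = 10308;       // Empty Workflow
        double compensationRate = 210;  // Compensation Workflow

        // ~0.097 ms per empty workflow
        Console.WriteLine("Empty: {0:F3} ms", 1000.0 / emptyRate);
        // ~4.76 ms per compensation workflow
        Console.WriteLine("Compensation: {0:F2} ms", 1000.0 / compensationRate);
    }
}
```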

Conclusion

The performance of a WF application depends on many different factors, including persistence pattern, activity complexity, tracking usage, and so on, and there are important modeling approaches to consider when improving performance. The case studies, scenarios, and component-level tests described in this whitepaper show the main performance characteristics of WF and provide guidance and techniques that you can use to improve the performance of a particular workflow application.

See also

See https://msdn.microsoft.com/workflow for more information about Windows Workflow Foundation.

 

About the author

Marc Mezquita is a Software Developer in Test with the Windows Workflow Foundation team at Microsoft Corporation, Redmond, WA. Since he joined Microsoft in 2000, he has been working on the development of various server products including Microsoft Commerce Server and Microsoft BizTalk Server.

Contributors

Many thanks to the following contributors and reviewers:

Paul Andrew, Paul Maybee, Don McCrady, Bernard Pham, Don Spencer, Joel West, and David Wrede.