Building a DSL for an Existing Framework

 

Antoine Savelkoul and Dennis Mulder

Avanade

March 2007

Applies to:
   Microsoft Visual Studio 2005
   Service-Oriented Architecture

Summary: This paper describes the benefits of using a software-factory approach to improve the capabilities of Avanade's existing ACA.NET framework. These benefits will be shown by stepping through the development and use of a domain-specific modeling asset used within the context of a software-factory viewpoint. (18 printed pages)

Contents

Introduction
What Is a Software Factory?
The Microsoft SOA Platform
ACA.NET Project Goals
Organizing the Project Team
The Development Process
A DSL for Service-Oriented Systems
Defining the Domain Model
Visualizing the DSL
Adding Features to the Designer
Modeling Target Artifacts
Code Generation and Custom Code Preservation
Testing and Assessing the Result
Conclusion
Resources
Collaborators

Introduction

One of Avanade's strengths is the development and use of assets that accelerate the implementation of the solutions we develop for our customers. This family of assets is called the Avanade Connected Architectures (ACA). Assets targeted for the .NET platform are supported within the Execution Architecture Framework called ACA.NET. ACA.NET is a technical framework that extends a co-developed Microsoft asset called Enterprise Library. One of ACA.NET's core differentiators is its ability to accelerate the implementation of custom .NET solutions based on a service-oriented architecture (SOA). This paper illustrates Avanade's continuing work to improve ACA.NET's ability to automate and configure SOA solutions using reusable assets.

Avanade is using these assets to build several types of solutions for its customers, an example being solutions in the domain of service-oriented OLTP (Online Transaction Processing) applications.

This paper touches on some important lessons learned from the implementation of a modeling asset used to configure and produce SOA solutions more efficiently. This modeling asset is based on Microsoft's Domain-Specific Language (DSL) Toolkit. To ensure that this DSL asset and other ACA.NET assets work effectively together, the asset is defined within an architecture pattern for facilitating software industrialization. Microsoft has named this pattern a Software Factory.

A DSL asset must be developed by a team of people with sufficient experience in the domain to understand the key modeling abstractions. In this case, this is the service-oriented OLTP applications domain, a domain with which we as Systems Integrators are familiar.

Before diving into our solution, let's first briefly expand on what a Software Factory is and how SOA is positioned within Microsoft's Connected Systems vision. Knowledge of these key concepts will help you understand our efforts.

What Is a Software Factory?

A software factory is a development environment configured to produce a specific type of software product quickly and efficiently. The factory converts a general-purpose development environment such as Microsoft Visual Studio 2005 into a specialized development environment with tooling and guidance to rapidly develop a wide range of similar software products.

The current generation of Software Factories being developed by the Microsoft Patterns & Practices team represents the first step along a journey towards a broader vision of Software Factories. This vision, which is laid out in several books and whitepapers, includes elements that have not been fully implemented in the current p&p factories, including a schema definition, integrated designers, and support for "composability" and multiple viewpoints. Over time, we expect the Patterns & Practices Software Factories to progress further towards the broader factories vision, driven by customer feedback and platform advances.

The Patterns & Practices and VSTS Architect teams are working together to build a new factory runtime and authoring tools that will unify concepts from GAT and DSL tools, as well as provide a number of new features. As soon as the new runtime is available, you can expect the new generation of Patterns and Practices factories to leverage all of these concepts.

Avanade is currently working to align our own assets with this new runtime. As mentioned, the asset we describe in this paper addresses service-oriented solutions. Key to this discussion is how Microsoft defines and supports SOA. Here is a very short summary of the topic.

The Microsoft SOA Platform

Service-Oriented Architecture (SOA) is viewed by Microsoft as addressing only one aspect of a larger problem: the pain that customers experience in connecting and using disparate information systems. It is a design approach to organizing existing IT assets, such that a heterogeneous array of distributed, complex systems and applications can be transformed into a network of integrated, simplified, and highly flexible resources. Service-Oriented Architecture is built into every aspect of the Microsoft technology stack, from the developer tools such as .NET that build Web services, to server products.

Software Factories can be used to support SOA by providing a specialized development environment with tooling and guidance to rapidly develop a wide range of SOA and other common types of solutions.

ACA.NET Project Goals

After the domain model has been "sketched in," our goals are to improve the typical project life cycle of a service-oriented solution by:

  • Using code-generation assets to increase productivity.
  • Increasing quality by reducing custom code.
  • Providing agility through systematic change processes.
  • Enabling traceability to requirements.
  • Freeing developers from routine "boilerplate" tasks.

ACA.NET is already used as a common foundation across most of our projects. However, we wanted to further improve ACA.NET's ability to make our implementations more consistent, of higher quality, and faster to deliver. We have done this by creating an ACA.NET Domain-Specific Language that makes it easier to configure and generate code. This DSL is built around a single service-oriented viewpoint that uses a single visual modeling asset based on the Domain-Specific Language Toolkit. The asset is just a small piece of a software factory we are currently developing to make the development of service-oriented solutions easier. This paper is not so much about the details of our ACAs as it is about the process we took to define this domain-specific language. In addition, we share our thoughts on possible next steps.

Using a model-driven approach requires more than just creating a visual representation that can generate code. Carefully understanding and capturing the language and metadata that represent the intended domain is important. Proven guidance should also be provided about working within the domain, and the tooling must enact that guidance within the right context. Furthermore, all activities and roles supporting the use of the modeling asset must be captured with their respective viewpoints. Additionally, associations between assets must be formalized, as must the relationships between factories themselves.

We are in the beginning stages of an effort that will evolve into more, and improved, Software Factories, each of which may have more viewpoints, activities, assets, and the like. Today, our Software Factory efforts are focused on creating and using a Domain-Specific Language modeling asset predicated upon a single OLTP viewpoint.

This effort will help align Avanade's existing investments in assets for fuller software-factory adoption. It will further help Avanade prepare for both horizontal and vertical solutions that will give Avanade greater competitive advantage and deliver even more value to our customers than exists today.

The remainder of the paper is focused on the question of how to use the DSL Tools to create this modeling asset that will facilitate the automatic generation of code for our viewpoint in the service-oriented OLTP application domain.

This paper can be read without any specific knowledge about the DSL Tools or deep understanding of Software Factories. This paper will help you with the approach to take when you want to develop a DSL asset. To learn more details about the actual DSL implementation, please refer to the documentation provided with the DSL Tools.

"The software-development industry is still relatively young. Software factories and specifically domain-specific languages can help the industry in maturing. In potential, it can be to C# what C is to assembler: less programming, more configuration."

–Edwin Jongsma, Practice Director Software Development

Organizing the Project Team

Unless you want to build a very simple DSL, you will probably have to work on your asset as a team. How should this team be organized? The answer follows from the goal of building and using the DSL asset. The asset should be applied to a specific problem domain. The goal of the DSL is to be more productive by generating a larger portion of the family of applications that will be built. Defining the architecture for a single application is a difficult job, but exploiting an asset such as a DSL implies that you define one for a whole product family. If you want the DSL to be broadly applicable to the generated products, it's important that the target domain and the target architecture be clearly defined before you begin your DSL project.

It is important for the success of the project that people with the right knowledge and skills are involved. A software architect closely familiar with the domain and target application architecture is a critical resource to guide the development of the DSL. The DSL asset itself can be built by senior developers with architect-level skills who have expertise in the domain of the DSL and experience developing generic software.

The Development Process

Before building anything, the domain of the software factory has to be clearly defined. Make sure that all team members know which parts of the target architecture are in scope for code generation and which parts are not. In addition, decisions have to be made about how specific viewpoints of the software factory are assisted with assets. Existing frameworks usually cover only specific viewpoints in the software factory. Sometimes, it is necessary to provide additional abstraction in the use of these frameworks by creating DSLs for them. In many situations, writing a DSL to automate the use of an existing framework will make implementation easier, because the level of abstraction is higher and the quantity and variation of generated code is less. It is important that you have a clear understanding of how frameworks and DSLs relate to viewpoints in your software factory and how viewpoints relate to each other. For example, you might use a data-tier generation framework to generate code based on a model built with a DSL.

Before starting to build a DSL, we suggest that you first create a target application. The target application helps to define what you want to generate. This target application will contain most of the functionality that can be generated by the DSL that you want to develop. The functionality to generate code can be added to the DSL project in small portions that can be realized in a relatively short amount of time. By constantly referring back to your target application, you will be able to reach your end goal by taking several small steps. This way of working divides the development into iterations, as we know from modern software-development methodologies like the Microsoft Solutions Framework.

Developing a DSL is like any other software-development project: you cannot know everything up front. Taking an iterative approach and allowing change during the life cycle of your project is critical to the success of your DSL. Visual Studio Team System is a modern software life-cycle platform that supports you with both the process guidance and the tooling to build successful software projects, such as a Domain-Specific Language project.

In the next sections, the domain of the DSL and the building process of the DSL will be discussed.

The iterations of this process can be divided into the following parts:

  • Defining/refining the domain model
  • Adding features to the designer, such as validations and imports
  • Modeling target artifacts
  • Writing the generation code
  • Testing and assessing the result

Depending on the stage of the project you are in, you will not work on every step mentioned in each iteration. Instead, take an "agile" approach and be pragmatic about what is the most important task to work on.

A DSL for Service-Oriented Systems

To introduce you to the possibilities of the DSL Tools, this paper uses as an example our experience of creating a DSL that you can use to define the structure of a service-oriented OLTP application. Service orientation is a business-driven strategy that defines business functionality in terms of loosely coupled, autonomous business systems (or services) that exchange information based on messages. Service orientation has received a lot of attention lately and has been supported by our ACA.NET asset for several years already. Adding the ability to model these business services empowers business owners to help create the technical services, and this is where the DSL Tools fit very nicely.

Although the Distributed System Designer that comes with Visual Studio 2005 Team Architect can also be called a DSL for service orientation, because it uses a graphical language for modeling service-oriented systems, we believe that an even better result in improving productivity can be obtained by generating more code. The Distributed System Designer only generates Visual Studio solutions that use traditional ASP.NET Web services. The main goal of the Distributed System Designer is to validate the application architecture against the application infrastructure. In this regard, it is focused more on the infrastructure architect's viewpoint than on the application architect's. Our aim was to gain more experience in developing a DSL for service-oriented systems that is powered by concepts and investments from our existing assets for application development, rather than infrastructure.

For several years, we have been actively implementing solutions using service-oriented architecture (SOA). Unfortunately, it has taken several years for SOA standards and SOA frameworks to evolve. With the release of Windows Communication Foundation (WCF), the choice of this platform as the target distributed platform for our service-oriented DSL was easy. WCF is part of the next-generation .NET Framework, version 3.0. WCF delivers a set of .NET technologies covering a large part of distributed-system capabilities, such as architecture, messaging, transport, and security. WCF unifies the older distributed technologies we know from the Microsoft platform (COM+, Enterprise Services, MSMQ, .NET Remoting, WSE, and ASMX) into one programming model. WCF enables you to develop service-oriented systems based on open standards while preserving existing investments.

WCF describes services as contracts. There are two flavors of contracts. Service contracts define the operations that a service supports. Data contracts describe the structure of the messages that you can send to or receive from the service. These contracts are standards-based and can be used on any platform, using any technology. Because of the high abstraction level, these contracts have a simple structure. The user does not have to understand how messages are transmitted between applications or what the results look like. The presence of this abstraction makes it easier for us to develop a strong DSL.
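
As a concrete illustration, here is a minimal sketch of the two contract flavors. The IOrderService and OrderItem names anticipate the OrderEntry example used later in this paper; they are illustrative and not part of WCF itself.

using System.ServiceModel;
using System.Runtime.Serialization;

// Service contract: defines the operations that the service supports.
[ServiceContract]
public interface IOrderService
{
   [OperationContract]
   void PlaceOrder(OrderItem item);
}

// Data contract: describes the structure of the message data.
[DataContract]
public class OrderItem
{
   [DataMember]
   public string ProductName;

   [DataMember]
   public int Quantity;
}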

Next, you will learn how we fitted WCF into our DSL asset.

Defining the Domain Model

When you have a clear vision of which parts of the applications you want to generate, you can start to define the similarities and relationships between them. Essentially, this is about which viewpoints require additional tools in order to reach your productivity-improvement goals. These specifications can be translated into a domain model containing a definition of the elements and relations within the DSL.

In the previous section, we mentioned contracts as potential elements in our DSL. There is a clear distinction between service contracts and data contracts, so these will likely become separate elements. Service contracts define operations, and data contracts describe the messages.

The DSL Tools allow you to define your domain concepts, such as the contracts, in the domain model. These concepts are defined in a graphical environment, as you can see in Figure 1. The ServiceModel is the parent node of your model. Under the parent node, you can define child nodes, and child nodes can also have child nodes themselves. When developing service-oriented systems, it is a best practice to define your contracts first, independently of the actual applications that you are building with these contracts. Contracts define the service interfaces between services and applications without any implementation-specific details. Therefore, we define applications, service contracts, and data contracts as unrelated elements in our model, as you can see in Figure 1.


Figure 1. Basic elements of the model

Let's start with the service contracts. Service contracts define the operations that services expose. The user of the DSL needs the ability to define these operations, unless the user wants to consume an external service. Because we only want to generate the implementation code for the services, it's not necessary to load the complete definitions of external services into the model. This would be necessary only if we wanted to generate some business logic consuming the service. Another difference between internal and external services is the relations that they can have within the model. External services can only be consumed by applications within the model, while internal contracts can be hosted and consumed. We can create these two implementations of the ServiceContract element, InternalService and ExternalService, by using inheritance.

It should be possible to assign a return type to an operation and types to its parameters. These types could be native types, types defined somewhere else, or data contracts defined within the model, and the DSL should support all of them. The use of a data contract could be expressed through a reference relation, but we chose not to do this, because it would make the model too complex for our users. In this case, the simpler solution is the better option: we chose a string attribute that refers to the type. By using the validation capabilities of the DSL Tools, we can validate the types before code generation. The result is shown in Figure 2.


Figure 2. Definition of the service contract

The data-contract element is simpler. A data contract has one child element to define data members. As with the service contracts, each data member has a "type" attribute of type string. The result is shown in Figure 3.


Figure 3. Definition of the data contract

The application element represents a Visual Studio project that will eventually be generated. Because we'd also like the generated applications to host the services automatically, we must know more about the architecture of each application. As an example, we will support console applications and forms applications. We could capture this difference by adding an enumeration to the application element, but that would be less practical if we wanted to add specific properties to each type of application. In Figure 4, this is solved by using inheritance instead.


Figure 4. Definition of the application

We also named some relations. One of them, the use of data contracts, has already been implemented by using strings. We still have to define the relations between applications and services. An application can host and/or consume multiple services. External services can only be consumed. We could make a reference to the ServiceContract element for the consuming relation and an individual relation to the internal service contract for the hosting relation, but that might be confusing and could lead to problems when adding a type of service contract that can only be hosted. In this case, the most consistent way is to define all relations for each type of service contract individually, as shown in Figure 5.


Figure 5. References between applications and services

Relations also have constraints. A service does not have to be connected to an application for the project to compile, so the minimum multiplicity for the service-to-application direction of the relationship is 0. The relation used to indicate that a service is hosted by a specific application has a maximum of 1. In the other direction, an application can host and consume zero or more services.

Constraints can also be defined on the embedding relations. We chose to require a parent for all of the embedding relations already defined. For example, it is not possible to compile while there is a data member that does not belong to a data contract.

This example was made with the following steps in mind:

  • How will it be visualized?
  • Do the DSL Tools support everything you have in mind? If not, could it be realized by custom code?
  • What will the code generation look like?

We already had a quite clear view of our elements and their relations. It can help to make sketches of how the DSL will look to the user and to examine how those sketches can be represented in the domain model.

When building a more complex DSL, it is likely that you will want to do something that cannot be visualized or for which the code can't be generated by using code templates. This does not mean that it can't be realized. Solutions made by the DSL Designer are highly customizable.

We learned that you should avoid n:n relations between elements and code. An example is the description of links between classes. Each link has an exposing and a consuming class, while each class can have multiple links. During code generation, you have to walk through all link definitions multiple times, searching for definitions needed for a class not yet generated. This and other code-generation issues will be discussed later. While defining your domain model, keep in mind that decisions made there influence code generation and can make it overly complex.

A good balance between these three considerations is important. Excluding the possibility of customization will save you time, but it can make your DSL useless for the user. On the other hand, making the model as easy to work with as possible can take a lot of time and could make code generation more difficult.

In this section, you have learned how to move from patterns in a framework to a model. You learned when and how to use the types of relations supported by the DSL Tools: embedding, reference, and inheritance relations. You also learned how alternatives such as validation can be used to prevent the model from getting overly complex. In the next section, we will discuss the visualization of the DSL.

Visualizing the DSL

Defining the shapes within the DSL Tools is possible after defining the domain model, but this doesn't mean you can't start on it earlier. If you started from a user perspective, a lot of the work has probably already been done in the form of sketches used to check whether your users understand the concept. Keep in mind that in the case of complex DSLs, the visualization of the DSL is likely to change a lot during the project to keep the domain model and code generation less complex.

As an example, we take a simple OrderEntry application. The OrderEntry application is a service-oriented application that has one operation called PlaceOrder. The PlaceOrder operation requires an OrderItem as one of its inputs. We consider the following elements as part of this scenario:

  • A client application that consumes the service
  • An OrderService with the PlaceOrder operation
  • An application that can host the service
  • A data contract that represents the OrderItem

We have defined three basic types in the root of the domain model (application, service contract, and data contract). Two of them (application and service contract) have subclasses. Visualizing the data contracts is not a problem. The DSL Tools come with a shape called a "compartment shape." This shape can show member elements as a list within the shape. In this case, it can be used to show a data contract with its data members in one shape.

For the applications, we must be able to distinguish between the two types. Giving each a different color would confuse the user of the DSL, because colors are already used to distinguish data contracts, service contracts, and applications from each other. Another possibility is to use icons to indicate the difference. Figure 6 shows the effect of this change.


Figure 6. Domain model before and after customization

The internal services, which can have operations with parameters, are a lot more difficult. Using compartment shapes is an option, but they are constrained by a maximum depth of one level. This means that with the standard shape types, we would have to create a separate compartment shape for the operations. This could dramatically increase the number of shapes in a model created with the DSL. With an average of five operations per service, six times as many shapes are placed in a model to describe the operations: a model with ten services contains sixty shapes. How can we improve this?

We could choose a compartment shape for the services, with the operations as part of the compartment, as shown in Figure 7. The list of parameters would be shown only when the user asks for it, which could be done by showing a grid in a pop-up window. We will have a look at this customization in the next section.


Figure 7. Compartment shape representing a service

You have to ask yourself some important questions during the visualization process:

  • Will the number of elements in the toolbox be acceptable?
  • Will models made with the DSL contain too many elements to fit in one drawing?
  • Will the number of properties be acceptable?
  • Are the descriptions easy to understand by the user?

It is likely that a revision of the domain model is necessary when you answer "no" to one or more of these questions. For example, if we want to define 10 different application types and we use inheritance, the toolbox becomes quite busy. In that case, you should consider using enumerations instead of subclasses. Application-specific properties can then be saved in specific elements that are not shown in the graphical model. Editing these properties can be done through a custom pop-up window or a customized property pane when the user clicks an element in the model. Some types of customizations will be discussed in the next section.

In this section, we have discussed the visualization of the domain model. You have learned how visualization can influence the domain model and how to prevent models made with the DSL from getting too unwieldy.

In the next section, you will learn how you can extend your DSL outside of the graphical design surface.

Adding Features to the Designer

After completing the visualization within the DSL Tools, the DSL is usable. You can drag and drop elements onto the surface and connect them. But that may not be enough. Maybe you want to prevent the user from creating creative designs that will lead to incorrect code generation, or perhaps you want to make some non-visualized parts of the DSL accessible to the user.

Validation prevents the user from generating code from a model that would lead to problems during compilation. For example, you don't want to give the user the ability to define a data structure that has a required member of its own type. You can define these rules by applying validation on events like opening and saving the model, via a context menu, and/or by handling custom events. The events on which you want to apply validation can be defined per element type. The constraints can be implemented by adding custom code to the shapes. The DSL Tools use partial classes to separate this logic into files that are distinct from other pieces of the DSL. In the case of data contracts, the constraint method walks through all members of the element to verify that no required member has a type with the same name as the contract itself. When this validation fires, the user is notified that this is not allowed in the DSL.
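
The following is a hedged sketch of how such a constraint could be attached to the DataContract domain class by using the DSL Tools validation framework; the DataMembers, IsRequired, and Type members reflect our illustrative domain model rather than any fixed API.

using Microsoft.VisualStudio.Modeling.Validation;

[ValidationState(ValidationState.Enabled)]
public partial class DataContract
{
   // Runs when the model is opened or saved, and from the validation menu.
   [ValidationMethod(ValidationCategories.Open | ValidationCategories.Save | ValidationCategories.Menu)]
   private void ValidateRequiredMembers(ValidationContext context)
   {
      foreach (DataMember member in this.DataMembers)
      {
         // Reject a required member whose type name matches the contract itself.
         if (member.IsRequired && member.Type == this.Name)
         {
            context.LogError(
               string.Format("Data contract '{0}' cannot contain a required member of its own type.", this.Name),
               "DC001",
               this);
         }
      }
   }
}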

Earlier, we introduced an external service contract element. Let's have a look at how to implement this import functionality.

First, let's have a look at what to implement. To be able to generate the connection with the external service, we need some extra information. This information can be gathered from the service endpoint, so we have to ask the user for the URI of the service endpoint. We can simply store this URI in a string property of the external-service element. The URI is then used during code generation to gather the required information from the endpoint, through the retrieval of a service description based on the Web Services Description Language (WSDL), the official standard for describing both the operations that services expose and the messages that they exchange. Of course, this means that the service must be reachable from each machine where the code is generated, which may not be the most ideal solution.

In some situations, it is necessary to store a copy of the external data within the model. In the case of the external service, we can add an import option to the context menu of the external-service element. When the user clicks it, the designer asks for the URI of the service endpoint. From this endpoint, the service definitions can be read and stored in a file that is used during code generation. SvcUtil, a WCF tool that generates proxies to consume services, can be used in this process. An overview of the process is shown in Figure 8.


Figure 8. Handling of external services
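
As a rough sketch of the import step, the command could shell out to SvcUtil as shown below. The endpoint URI and output file name are illustrative, and error handling is omitted.

using System.Diagnostics;

public static void ImportExternalService(string endpointUri, string outputDirectory)
{
   // Ask SvcUtil to download the service metadata and generate a proxy class.
   ProcessStartInfo startInfo = new ProcessStartInfo();
   startInfo.FileName = "svcutil.exe";
   startInfo.Arguments = endpointUri + " /out:ExternalServiceProxy.cs";
   startInfo.WorkingDirectory = outputDirectory;
   startInfo.UseShellExecute = false;

   using (Process process = Process.Start(startInfo))
   {
      process.WaitForExit();
   }
}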

As you learned in the previous example, it is possible to change values of elements through custom code. In fact, you have full access to the model through the DSL Tools API, in much the same way as when walking through the model in customizations like validation or code generation. This enables us to add alternative ways of modifying the model. As discussed before, we want to add a pop-up box containing a grid that can be used to add the parameters of a service operation.

A data grid can be placed in a window. By creating and showing this window from a mouse-click event or the context menu of the operation element, the user is able to access it. When the window is shown, it walks through the operation's parameter elements and adds them to the grid; afterwards, the data from the grid is merged back with the data in the model. The result is a pop-up window, as shown in Figure 9.


Figure 9. Grid showing parameters of an operation
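
A rough sketch of such a window follows. The Operation and Parameter element names, with their Name and Type properties, come from our illustrative domain model, and merging the edited rows back into the model is omitted.

using System.Windows.Forms;

public class ParameterEditorForm : Form
{
   private DataGridView grid = new DataGridView();

   public ParameterEditorForm(Operation operation)
   {
      this.Text = "Parameters of " + operation.Name;

      grid.Dock = DockStyle.Fill;
      grid.Columns.Add("Name", "Name");
      grid.Columns.Add("Type", "Type");
      this.Controls.Add(grid);

      // Copy the operation's parameter elements into the grid for editing.
      foreach (Parameter parameter in operation.Parameters)
      {
         grid.Rows.Add(parameter.Name, parameter.Type);
      }
   }
}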

In this section, you learned how to apply validation, integrate existing tools, and edit the model outside the design surface. You can integrate anything from simple Windows Forms to command-line applications into your DSL by adding custom code and using the APIs of both the DSL Tools and Visual Studio. Next, you will learn which artifacts have to be generated.

Modeling Target Artifacts

The way you define your domain model influences your code generation more than anything else. It's important to have a clear vision of the relations between the target artifacts and the model. In this section, we will discuss how to model the target artifacts and their relations to the domain model.

In the DSL, we talk about elements and relations. Somehow, they have to be transformed into projects, classes, and other types of artifacts. When we talk about elements, we mean the element with its children. Those children are used only by the parent element. For example, a data contract has data members. We can define the following types of relations:

  • 1:1: An element can describe data that will be used once to generate code. The generated code is placed at a predefined place. In our example, this is the case with the application elements. Another example could be classes used for implementing business logic within a specific application.
  • 1:n: An element can generate code with similarities in multiple places. In our example, this is the case with data and service contracts. Another example is interfaces shared by multiple classes.
  • n:1: A number of elements that depend on each other through references can be used to generate one class. This could be the case when several elements describe one class (for example, one element describing the structure and another adding extra functionality like drawing code).
  • n:n: It becomes difficult when you try to describe n:n relations. An example is the description of links between classes: each link has an exposing and a consuming class, while each class can have multiple links. During code generation, you have to walk through all link definitions multiple times, searching for definitions needed for a class that is not generated yet. A more practical solution in that case is an element representing an interface and an element representing a class that implements those interfaces and connects to other classes. To prevent the models created with the DSL from getting too messy, you can consider merging elements with the same name during code generation.

In our DSL for service-oriented systems, we have several artifacts distributed over several projects. Each project will represent an application. Artifacts in our DSL are:

  • The project files.
  • Application classes.
  • Service classes to expose services.
  • Proxy classes to consume services.
  • Data contracts.

Let's have a look at the artifacts and their relations to the domain model. For each application in a model, a project file will be generated. To be able to run an application, an application class is needed. Both are specific to one project and will be generated once. This is what we call a one-to-one (1:1) relation from model to code.

For each service contract defined, service classes and proxy classes have to be generated for each application exposing or consuming the service. These code artifacts share the operation definitions of the model. This is what we call a one-to-many (1:n) relation. This is also the case with the data contracts.

Figure 10 shows how elements of the service DSL are represented in code. You can easily recognize the 1:1 and 1:n relations.


Figure 10. Elements into code representation of the service DSL

In this section, you learned how to model target artifacts and how they relate to the domain model. In the next section we will take a look at the generation of these artifacts based on the model.

Code Generation and Custom Code Preservation

Once we know how the elements will be represented in code and other artifacts, the next questions you should ask yourself are how to generate them and how the user will be able to use the generated code. These questions are answered in this section.

An important aspect of code generation is how you want the code to be generated. There are several approaches to code generation:

  • Templates
  • CodeDom and other DOM types
  • Existing code generators

The easiest approach to code generation is using templates. This means writing all of the code in a template and reading from the domain model inside placeholders. The approach is similar to that of scripting environments such as Active Server Pages. The following code sample shows a template that walks through the data-contract definitions within the model and generates a code file containing the WCF code for them. The sample generates data contracts; therefore, it iterates over the data-contract elements in our model and generates a data member for each field in each data contract.

<#@ template debug="true" inherits="Microsoft.VisualStudio.TextTemplating.VSHost.ModelingTextTransformation" #>
<#@ output extension=".cs" #>
<#@ ServiceFactorySolution processor="ServiceFactoryDirectiveProcessor" requires="fileName='$CurrentModelFileNamePlaceHolder$'" provides="ServiceFactorySolution=ServiceFactorySolution" #>

using System;
using System.Runtime.Serialization;

namespace <#= ServiceFactorySolution.Name #>.DataContracts
{
<#
   // Iterate over every data contract defined in the model.
   foreach(DataContract dataContract in ServiceFactorySolution.DataContracts)
   {
#>
   [DataContract]
   public class <#= dataContract.Name #>
   {
<#
      // Emit a field with the [DataMember] attribute for each data member.
      foreach(DataMember dataMember in dataContract.DataMembers)
      {
#>
      [DataMember]
      public <#= dataMember.Type + " " + dataMember.Name #>;
<#
      }
#>
   }
<#
   }
#>
}

CodeDom gives more flexibility and control over the code. You write .NET code that generates the target code by walking through the model. This form of generation has the advantages of reuse and more structure, but it is harder to implement.
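
As a minimal sketch, the same data-contract generation shown in the template above could be expressed with CodeDom roughly as follows; the DataContract and DataMember types again stand for elements read from our illustrative model.

using System.CodeDom;
using System.CodeDom.Compiler;
using System.IO;
using Microsoft.CSharp;

public static void GenerateDataContract(DataContract dataContract, string fileName)
{
   CodeCompileUnit unit = new CodeCompileUnit();
   CodeNamespace ns = new CodeNamespace("DataContracts");
   unit.Namespaces.Add(ns);

   // One class per data contract, one public field per data member.
   CodeTypeDeclaration type = new CodeTypeDeclaration(dataContract.Name);
   type.CustomAttributes.Add(new CodeAttributeDeclaration("DataContract"));

   foreach (DataMember member in dataContract.DataMembers)
   {
      CodeMemberField field = new CodeMemberField(member.Type, member.Name);
      field.Attributes = MemberAttributes.Public;
      field.CustomAttributes.Add(new CodeAttributeDeclaration("DataMember"));
      type.Members.Add(field);
   }
   ns.Types.Add(type);

   // Write the generated C# file to disk.
   using (StreamWriter writer = new StreamWriter(fileName))
   {
      new CSharpCodeProvider().GenerateCodeFromCompileUnit(unit, writer, new CodeGeneratorOptions());
   }
}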

In this document, an example of importing external services by using SvcUtil was given. WSDL files are stored together with the model and reused by calling SvcUtil to generate the proxies for the external services. This can be done by using the command-line options of SvcUtil. There will also be situations where information from the model is used by existing code generators. When the code generator accepts files in a specified format, these files can be generated by using templates or CodeDom. There is nothing blocking you from using your favorite code generator to generate code for your DSL. A popular code generator on the Microsoft platform is CodeSmith (see Resources for a link to their site).

Service classes are important artifacts that are generated by our DSL. From the model, we can use the service element to generate service classes with empty methods. This is simple, but the user of the DSL then has to modify the generated classes to implement the business logic, which means that the custom code would be overwritten the next time code generation runs. The generated code must therefore know how it can call the custom code; the user should not have to touch the generated code.

There are several ways to enable the user to implement its custom code. Some of them are the following:

  • Partial classes
  • Inheritance
  • Interfaces
  • Preserved code regeneration on a class

Partial classes allow you to separate generated code from custom code in two different files. Together, they make up the complete class; the compiler merges them during compilation. This concept is used a lot in Visual Studio. The generated code can make calls to not-yet-existing members of its own class that should be implemented in the partial class that contains the custom code. This is a straightforward method with a lot of flexibility. One disadvantage is that the user will get compiler errors on the generated class when methods are not yet implemented in the custom code.
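
A minimal sketch of this split is shown below; the OrderService and PlaceOrderCore names are illustrative, not part of any generated API.

// Generated file (regenerated on every run): OrderService.generated.cs
public partial class OrderService
{
   public void PlaceOrder(OrderItem item)
   {
      // Delegates to custom code that lives in the other half of the partial class.
      PlaceOrderCore(item);
   }
}

// Hand-written file (never regenerated): OrderService.cs
public partial class OrderService
{
   private void PlaceOrderCore(OrderItem item)
   {
      // Custom business logic goes here.
   }
}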

Inheritance is another option, where the base class is generated and the subclass is available for custom code. The advantage is that code completion and compiler errors after generation tell you what still has to be implemented.
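
Sketched with the same illustrative names, the inheritance variant generates an abstract base class; the abstract member is what produces the compiler error until the developer supplies the subclass.

// Generated base class (regenerated on every run).
public abstract class OrderServiceBase
{
   public void PlaceOrder(OrderItem item)
   {
      PlaceOrderCore(item);
   }

   // The abstract member forces the developer to implement the business logic.
   protected abstract void PlaceOrderCore(OrderItem item);
}

// Hand-written subclass (never regenerated).
public class OrderService : OrderServiceBase
{
   protected override void PlaceOrderCore(OrderItem item)
   {
      // Custom business logic goes here.
   }
}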

Interfaces are similar to inheritance. The code generator could generate an interface that is then implemented by custom classes. When the generated code knows where to find these classes, it also knows how to use them.

You have already learned that letting the user write custom code directly in a generated class is not an option, because it would be overwritten. It becomes possible, however, when the code generator knows which code pieces can be overwritten and which pieces shouldn't be touched. This form of preserved-code regeneration can be realized by adding markers to the methods that identify which method body belongs to which element.

In this section, you learned which options you have to generate the code artifacts. You also learned how to do this in a way that the DSL user will not have to touch the code that has been generated, so that regeneration is easier to accomplish.

Testing and Assessing the Result

Parts of the target application can be modeled and generated with the DSL after each iteration. You can assess the quality of the DSL by comparing the generated application with the target application. This doesn't necessarily have to be done by domain experts if the target application was already defined by them; this approach reduces the amount of (valuable) time needed from these experts. Standard testing mechanisms, like unit testing from Visual Studio Team System, can be used to automate testing. It would be even better if the code-generation mechanism already generated stubbed-out unit tests from the DSL; in that way, the DSL also reduces rote testing tasks.
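
As an illustration of what such a generated stub might look like, using the Visual Studio Team System unit-testing attributes; the OrderService and OrderItem names and members are again from our illustrative example.

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderServiceTests
{
   [TestMethod]
   public void PlaceOrder_AcceptsOrderItem()
   {
      // Arrange: build the generated data contract.
      OrderItem item = new OrderItem();
      item.ProductName = "Sample product";
      item.Quantity = 1;

      // Act: call the generated service class.
      OrderService service = new OrderService();
      service.PlaceOrder(item);

      // Assert: the developer replaces this with checks on the custom business logic.
      Assert.Inconclusive("Add assertions for the custom business logic.");
   }
}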

If you defined a target application, you are able to compare it with the generated code; unfortunately, it can never contain all the variations needed in the problem domain of the DSL. Collaborate with domain experts to make sure that all variations are supported and that all types of applications for the problem domain can be built. You could also choose to start using the DSL on the projects for which the DSL is built. This is also essential when taking the "agile" approach: working with the DSL on real projects will provide valuable feedback. Of course, the decision to start using the DSL depends on the kind of project on which it is going to be used and the value that the DSL can provide after the iteration.

Conclusion

Software factories represent a great pattern for reaching software-development industrialization. Systems integrators like Avanade that have invested in the development of assets are in a good starting position to create a software factory based upon them. These assets form the ingredients of the software factory, and combining them in a factory allows an organization to benefit even more from its existing assets.

Usually, these frameworks, guidance, best practices, tools, wizards, snippets, and so forth fit in a certain area of a software factory. In this paper, we looked at defining a DSL asset that fits well in a Software Factory for service-oriented Online Transaction Processing–type applications. The DSL asset can help you to increase productivity even more. It is clear that the problem domain described here may need help from other assets, not just this DSL asset. While we focused on DSLs, you should be aware that DSLs are not always the best solution.

A DSL asset can help industrialize software development as a whole. You can use the DSL Tools to automate the implementation of your assets. In contrast to a lot of other available solutions, you can define your own DSL and have full control over the code-generation process. The DSL Tools have an open API, so that you can overcome limitations through customization.

When you apply your DSL, you can improve the whole project life cycle in several ways:

  • Increased productivity by using code generation—All project files and necessary code are generated for the DSL user. The applications can be compiled directly and will host the services we defined, all without writing a single line of code!
  • Increased quality by reducing custom code—The basic architecture of the application and the artifacts can be generated. The generated code is already proven and the user can add custom code through extension points.
  • Providing agility through systematic change processes—By separating generated code from custom code, the model can be used during the whole project life cycle to make changes to the code, while preserving custom code.
  • Enabling traceability to requirements—The DSL model allows others to get insight into the application without diving into the code. Models say more than a thousand words.
  • Freeing developers from routine "boilerplate" tasks—Tasks like setting up projects, writing application classes and generating proxies can all be done automatically by the DSL. The user configures, the DSL generates code. In a lot of scenarios, existing code generators can be integrated into the DSL, too.

We defined a complete approach to building a DSL based on WCF. But what if you have a project in which you want to use other technologies in the same domain? No problem; the model described here was created to add an extra abstraction level. In many situations, you only have to extend the code generation and make minor modifications to the domain model itself. This enables DSL users to adapt quickly without the need to learn new technologies.

There will likely be many more scenarios in which DSLs create more value for money. All of the potential savings make it possible to pay off the investment in the DSL in the form of higher productivity, more control over the development process, and higher quality.

We are convinced of the power of DSLs and are currently working on the next generations of our DSLs, as well as looking to how this fits in a Software Factory—all of this, in order to help our consultants enjoy greater productivity and to improve the solutions we develop for our customers, in terms of productivity, quality, and costs.

Resources

Web Sites

Software Factories

Domain-Specific Language Tools (Visual Studio 2005)

Visual Studio Team System Developer Center

Guidance Automation Toolkit (an extension to Visual Studio 2005)

Product Web Site of CodeSmith (a template-based code generator)

Documentation

DSL Walkthroughs

Visual Studio 2005 SDK Help Documentation

Books

Greenfield, Jack, and Keith Short, with Steve Cook and Stuart Kent. Software Factories: Assembling Applications with Patterns, Models, Frameworks, and Tools. West Sussex, England: John Wiley & Sons, Ltd., 2004.

Collaborators

Many thanks to our collaborators, sponsors, and reviewers:

Edwin Jongsma, Matt Joe, Kyle Huntley, Gerben van Loon, Erik Gunvaldson, and Jack Greenfield.

 

About the authors

Dennis Mulder started his career in 1997, choosing to dedicate himself to Microsoft technology. In August 2004, he started to work for Avanade, a Microsoft and Accenture joint venture. Currently, he is interested in a few areas of the Microsoft platform, specifically service orientation, integration, and software factories. As a consultant based in the Netherlands, Dennis works with enterprise customers to solve their challenges by leveraging the power of the Microsoft platform. Dennis is a frequent speaker at Dutch Microsoft conferences and user groups, and has become an INETA speaker as of early 2006. Dennis is coauthor of a book about Windows Communication Foundation called Pro WCF: Practical Microsoft SOA implementation. The book is published by Apress and is due out in the beginning of 2007. Dennis can be reached at dennism@avanade.com or through his blog at https://www.dennismulder.net.

Antoine Savelkoul is a consultant at Avanade, a joint venture of Accenture and Microsoft. Antoine started developing applications for the Microsoft platform in 2001. With the introduction of Visual Studio 2002, he started to use the .NET platform from the beginning of its existence. Driven by his passion for technology, Antoine has gained experience with a broad range of Microsoft technologies targeted at business solutions, as well as consumer products. The new paradigm shift in software development called Software Factories caught Antoine's attention, and he has been working with the paradigm for almost two years now. He has been involved in the development of DSLs at Avanade for almost a year. Antoine can be reached at antoines@avanade.com or through his blog at https://www.savelkoul.net.

© Microsoft Corporation. All rights reserved.