Section 1: Introducing the Architect

 

Brian Travis
Architag International Corporation

November 2003

Applies to:
    Microsoft® Visual Studio® .NET Enterprise Architect Edition

Summary: Learn how to design and develop distributed applications. First in a six-part project. (42 printed pages)

To see an overview of the entire project, read FoodMovers: Building Distributed Applications using Microsoft Visual Studio .NET.

Contents

Purpose
A Pyramid of Architects
An Architect Designs Buildings, Right?
Project Development Phases
Roles, Tasks, and Tools
FoodMovers Scenario
Design Concepts
Conclusion

Microsoft® Visual Studio® .NET Enterprise Architect Edition (VSEA) introduces the concept of the "enterprise architect." This is a person in an organization who is responsible for designing, building, and deploying enterprise-scale systems.

This is a six-section project focusing on design and development of a distributed application. The company, a fictitious enterprise called FoodMovers Distribution Company, is a distributor of grocery items for small and medium-size grocery stores. I chose a distribution company because of the opportunities for cross-enterprise application design and development. I hope you find helpful information that is applicable to your situation.

Purpose

The purpose of this project is to introduce you to the concepts of architecture in an enterprise-level implementation. If you work in an enterprise, you might already be familiar with these concepts. I will then use the concepts of service-oriented development and reusable objects to build a system with Microsoft Visual Studio .NET Enterprise Architect Edition.

This project is designed for architects and developers alike. I will try to provide plenty of code along with the architectural concepts to let you see how to build service-oriented code that is reusable and robust.

A Pyramid of Architects

Any enterprise that has a large stake in information technologies (IT) is best served by having experts who understand the importance and the need for integrated solutions, and have the expertise to build such systems.

These experts can have many different titles, but I will describe them as a hierarchy of architects. Their job is to align the enterprise's IT needs with its business goals, and they need tools that can help them achieve those goals. I will first describe the architectural hierarchy, and then describe the tools that can be used to allow the various professionals to communicate.

The pyramid in Figure 1 shows the hierarchy of architects in a typical enterprise.


Figure 1. An enterprise has many different levels of architectural experts.

At the top is the strategic architect. This is the person who is responsible for overseeing the overall strategy of the information technologies areas of the enterprise. This is often the Chief Technical Officer, the Chief Information Officer, or someone else in an executive role.

Below the strategic architect is the enterprise architect, who usually oversees many applications and projects. He or she is responsible for cross-platform applications and their architectural decisions.

Two more types of architects exist in the enterprise. One is the project (or solution) architect. The project architect is responsible for a specific application and its architecture.

There are also operational (or deployment) architects. A deployment architect is responsible for defining the operational requirements and executing the deployment of an application from an architectural standpoint.

Of course, this type of hierarchy does not exist in every organization, but I believe that, as enterprises move towards reusable services, an organizational structure such as this will evolve.

Visual Studio .NET Enterprise Architect Edition (VSEA) is designed to be a complete design and development environment for projects ranging in size from single console applications to huge distributed systems. The product has the tools required for architecting the solution, from database table design to deployment strategies, as well as the tools to design and implement the architecture.

VSEA can make the job of each of these architects easier. In this project, you will learn about the various architects and the roles they play in the development of an enterprise-scale system. In order to illustrate the role of the project architect and show the kinds of systems that can be built using VSEA, I will describe and build a working system for a fictitious company.

In this project, I will introduce you to FoodMovers Distribution Company. FoodMovers supplies hundreds of grocery stores and convenience stores by working with suppliers ranging in size from large conglomerates to small manufacturers and produce companies.

This first section discusses the architect-centered development process that is essential for modern, distributed systems. I talk briefly about the evolution of system development, from a process-centric view, to an era of data-centrism, and finally, to the new service-centric view of information management.

You'll learn about the players and their enterprise architectures and see the types of interfaces they will need to do business together. The key to this interoperability is the concept of exposing services, rather than applications or data. These services provide an access point to existing legacy applications behind the firewall, as well as providing appropriate, secure access to external business partners.

Then, you will learn about technologies that we will use, along with the design issues involved in implementing those technologies.

But first, a little background on the evolution of the project architect. If you are new to the concept of the project architect, this will give you some idea where the role came from. If you are currently a project architect, perhaps you will see yourself in this picture.

An Architect Designs Buildings, Right?

Back in the Stone Age, when mainframe computers ruled the earth, any large organization with a computer had teams of newly trained programmers who were given their assignments. They would work on their small part, unaware of the larger picture. The larger picture was the domain of the "Systems Analyst." In a typical organization, this job was usually held by a programmer who had spent plenty of time learning the ropes.

It was the Systems Analyst's job to understand the business of the organization in enough detail so that he or she could design systems that met the needs of the internal departments. The Systems Analyst would then break down the task into small enough pieces that a single programmer could code each piece. For mainframes, a great language was available that provided exactly that level of independent code development. COBOL, the Common Business-Oriented Language, allowed programmers to create programs that got their data by reading from databases located on the mighty mainframe. They crunched what they could, and placed the results back onto those central databases. As long as all programming was done on the organization's mainframe using COBOL, all applications could talk to each other and life was good. That is, if your department's computing needs were on the list of systems to be developed by the managers who controlled the scarce computing resources.

Flash forward thirty years and look back at what has happened. Mainframes are still around, but many have been relegated to small niches where huge-scale transactions are made. The worldwide airline and hotel reservation system, for example, would not be possible without the processing power, fault-tolerance, and stability of mainframe computers. The rest are running legacy applications such as inventory management or process control. Many of these applications have been replaced with small servers, which are much cheaper and easier to maintain.

In the place of mainframes, systems of every size, shape, and complexity have risen, along with specific computer languages that were developed to meet the computing needs of smaller organizations. Finally, departments within an organization could afford their own computer and did not need to rely on the high priests in the air-conditioned room.

The advent of these smaller systems in the late 1980's gave the mainframe people a real headache. At first, they tried to tame the onslaught of smaller, cheaper systems by shoehorning mainframe operating systems and, presumably, mainframe system development discipline, into the smaller boxes. This largely failed, and the computing genie was out of the bottle.

During the following decade or so, many strategic systems were built around these cheaper machines. Single programmers, without the discipline forced on them by COBOL and the scarce resource of a single monolithic computer, were building systems that organizations came to rely on. One case I am personally aware of is a retail store chain that you know. One person, who shall remain unnamed, single-handedly wrote an entire PC-based retail point-of-sale system that was rolled out to 200 stores worldwide. This is not the kind of system that should have been developed by a single person, because it was a strategic part of the company's operation.

But this project, and thousands like it, was developed by a single programmer or a small team to automate a particular area of operation, many times without consideration of the larger organizational goals or concerns for interoperability.

Interoperability was not really a concern until recently, however, because even if it were possible to connect systems, there were no agreed-upon methods for sharing data.

Then came XML.

XML provides a standard syntax for sharing information across departmental and organizational boundaries. With XML, an international standard that everyone agrees on, it is now possible to share data and integrate systems.

Now there's a way, but is there a will? What about all of those programmers who grew up with a computer of their own? How can we instill in them the discipline required to create a whole system that solves the needs of the organization?

We need someone in an organization who understands the needs of disparate departments and how information flowing between them can benefit the organization as a whole. We need someone who can design systems that can be broken up into small enough pieces that they can be developed, but flexible enough that the pieces can work together when they are built.

Does this sound familiar? We used to call this person a "Systems Analyst," because his or her job was to understand the need for systems and how to create them. Like the computers that they work with, our next-generation analyst must also evolve and grow to fit into the current environment.

That person is now called a "project architect." In the building realm, an architect is entrusted with designing a living or working space in which his clients will be happy. America's greatest architect, Frank Lloyd Wright, claimed that he could design a house in which a perfectly happy couple would be divorced in a year. In other words, an architect is one part designer, one part engineer, and one part psychologist. Designing a house is much more than just making it look pretty and keeping out the elements. An architect must understand the needs of the participants in order to be successful.

Like its namesake in the building realm, the project architect must understand not only how a system works and what it takes to make it work technically, but also why it is necessary in the first place. The successful project architect will build systems that can stitch disparate systems together, even if they are built on older, even obsolete technology. The project architect must then be able to describe that vision to decision makers, users, and most importantly, developers who will bring the code to life.

But the world has gotten much more complex since the days of the monolithic mainframe. Different programming languages, disparate platforms, local and worldwide networks, and new security concerns have added up to an environment where developing non-trivial systems has become a difficult process. Fortunately, the tools have gotten better.

Microsoft's Visual Studio .NET Enterprise Architect Edition has the tools that project architects and deployment architects need to be able to design, describe, and deploy systems. Integrated into the product are the tools that allow developers to take the vision of the project architect and turn it into working code.

Let's visit Frank Lloyd Wright again. Once his practice was running and his services were in great demand, Wright did not actually draft his drawings. In many cases, he did not even do the design for particular parts of a building. Like any successful firm, Wright employed smart people to work as a team to get things done.

Wright would conceptualize a building or a community and give his vision to a project architect, who would work with other architects who have a specific understanding of a particular task, structural engineering or plumbing, for example.

If Wright's firm were an IT department, he would be the senior project architect who probably would not even use a computer except to write email and the occasional Word document. He would have a project architect who was responsible for the accounting systems. He would have another project architect who was responsible for production management and control. Each of these project architects would have years of experience in their areas, and teams that were able to produce their designs. However, they would rely on the senior project architect to build a strategy for integrating systems across project-architect boundaries.

If Wright is the head project architect, you can think of the enterprise architect as the civic planner for the town in which the project architect designs and builds his buildings. The enterprise architect must plan for a larger scope than the project architect sees. Just as a civic planner must consider the larger issues of effective land use, commercial and residential zoning, and future growth issues, the enterprise architect in an IT department must think about issues larger than any single application. The enterprise architect must be able to see the larger enterprise picture, and plan for the optimum use of existing systems and the impact of any existing systems on future architectural plans.

For the remainder of this project, I will be focusing on the project architect who has been given the responsibility for this project. I will assume that the project architect is keeping the enterprise architect informed where necessary, but the responsibility for creating this system rests with the project architect.

Project Development Phases

Now that we can see where the project architect comes from, let's talk about a vision for developing systems in an enterprise setting.

There are four main phases of an enterprise project: Architecture, Design, Implementation, and Deployment.

Architecture

The architecture phase involves conceptualizing the system by understanding the nature of the project and the assets that are available to build it. As an analogy, consider our building architect again. Once the needs are assessed, a set of blueprints is drawn. These blueprints become the conceptual manifestation of the design of the building, and provide a basis for the architect to communicate his or her vision to the people involved.

These blueprints can be modified easily, and usually are as people take a look at the design and comment on its value to them.

Design

The design phase includes the design of two areas: infrastructure and application. Infrastructure design includes the identification of the servers, legacy systems, firewalls, and other parts of the organization that need to be developed, as well as the design of the physical boxes and where they will be deployed. Will they be deployed in the DMZ? Will they be deployed in the secure zone? Which applications will be deployed on which boxes, such as testing, staging, and production servers? These decisions are important for network security as well as for performance.

Application design includes the programs that must be written to create the system and integrate the infrastructure. These decisions include identifying application level security, transactions, reliable messaging, database modeling, and business workflows.

The design of the application and services, which involves the presentation layer, the business layer, and the data access layer, is among the tasks of the project architect, who also must design the security, operational management and communications policies.

Implementation

Once the blueprints are completed and signed off by the responsible parties, building construction can be started. In the IT realm, implementation involves the actual coding of the project by developers. This involves first creating database tables and possibly loading sample data. Then comes the main task of programming business logic using an appropriate procedural language such as C# or Microsoft Visual Basic®.

Implementation also includes testing of each component, and testing of all the components running together.

Deployment and Operations

Once databases are designed and populated, code is written and tested, and the system is documented for users, it is time to deploy.

Deployment involves the rollout of the system, including training of the users, testing and ongoing system administration, and tuning of the myriad components and subsystems.

Roles, Tasks, and Tools

Getting a project completed requires many different people. Departmental users, IT professionals, and external business partners are among them. A modern system can involve integrating different legacy systems written in many different programming languages. A system can have many different parts and subprojects. How can all these people from different backgrounds be brought together in a development environment?

In a typical IT department, every person has different knowledge and a different set of skills. These can be combined into a small number of roles.

Each role requires tools for accomplishing its tasks. With these tools, people can work independently, but within the limits of the project's architecture and the agreed-upon technological and business rules.

For developing distributed applications, we need a development environment that glues people's work together with the organization's assets. The best way to do this is to abide by an agreed-upon set of definitions and enforcements. Effective tools provide this functionality.

In the IT department of an enterprise, a successful project requires certain steps and interactions between the participants. I will call this interaction a "project workflow." The project workflow describes the interactions between people's roles, their tasks, and the tools that they use to accomplish these tasks. A generic project workflow is shown in Figure 2.


Figure 2. A modern IT project requires many different individuals fulfilling many different roles. This is called the project workflow.

Project Architect

Let's take a look at a typical project. Someone somewhere in the organization has the need for an IT project. It usually ends up on a list that is maintained by the director of information technology or the enterprise architect. How it gets chosen for implementation varies by organization, but usually entails everything from real need to political manipulation to budgeting availability. However, those factors are beyond the scope of this section. Let's just assume that a project is accepted for implementation and a project architect is assigned. The project architect must understand any related systems that are in place and how they work. This is called use case analysis. This is human-to-human work, and requires skills that are not learned by sitting at a computer. Eventually, the project architect will collect enough notes about the requirements to start the process of building a set of "business workflows."

These business workflows will describe the sequence of tasks required to achieve the objectives of all the parties involved. They should be described at a level of detail that a developer can use to create code.

It is important to note that these business workflows have always been there in the enterprise. They probably have taken the form of "procedures" or "policies," which are sometimes written down and sometimes just in the heads of the people who do them on a day-to-day basis. The project architect's job is to find out what these business workflows really are and describe them in terms of a holistic scenario.

Once the information is collected, the project architect's job is to document the business workflow into a readable standard format so that it can be communicated back to the parties involved. The tool used to accomplish this task is Microsoft Visio®. Creating flow diagrams with Visio provides a clear picture of the system, and provides a basis for talking about the proposed design of the system. Visio has great tools for turning these workflows into code architecture with its interfaces to Visual Studio. Figure 3 shows a typical flowchart created using Visio.


Figure 3. Visio provides a rich interface for developing flow charts.

As part of the architecture, the project architect must define how the existing system pieces and the new system pieces will be built and integrated, what kind of technologies will be used, and what the data flowing between them looks like. This will be the skeleton of the solution before it is handed to the developer.

The application architecture of the system will define the user interfaces such as forms in a Windows application or the design of a mobile application. It will also define, in detail, the design of required databases and the classes and components that will implement the business workflows. The class design, service interfaces and contracts will also be designed using Visio. In addition, transactions, security, and reliable messaging requirements will be a part of the application design and business workflows.

The project architect also provides database design documents using Visio. See Figure 4. Developers will combine these Visio documents with the workflow documents and turn them into SQL tables, stored procedures, and database transactions.


Figure 4. Databases that are designed in Visio can be exported to SQL Server.

The logical data modeling is done using Visio. However, the physical data modeling is done using Microsoft SQL Server™ tools. SQL Server's tools are well suited to physical data modeling, whereas Visio's tools are not targeted at that task.

After the system is designed and agreed upon by the interested parties, it is time to turn it over to the next group, the developers. These documents are a great help for the developers, who will use them to write their code.

Templates and policies

But remember our history lesson above. The modern developer has so many choices of tools, languages, and development environments that there is no guarantee that every developer will interpret the documents and the technology in the same way and work in complete choreography. In short, there is no discipline. We could end up with a system written by many different programmers, each using their favorite programming language, preferred database connectivity methods, and set of tools. The system would work initially, but be difficult to maintain and impossible to troubleshoot.

When you drive down the street and see a speed-limit sign, you might view it as no more than a suggested speed that you can ignore if you feel like it. But, if you know that this street has been watched by police lately, you will take the speed limit sign a little more seriously. The same goes with system development. For a reliable system to be built, you need two things: specification and enforcement. A speed limit sign without the threat of being caught is just a suggestion.

As I mentioned above, the project architect's business workflows done in Visio, along with their descriptions, comprise the specification. However, a developer needs some help in implementing these specifications. The help is contained in a Microsoft technology called enterprise templates. Enterprise templates are a part of VSEA, and are expressed as an XML vocabulary called ETP. The following code block shows a sample ETP document. ETP documents allow a project architect to designate a set of tools and technologies that can be used for various parts of a system. Enterprise templates give the project architect the flexibility to design the system he or she wants, while helping the developer and reducing the project's learning curve.

<?xml version="1.0"?>
<EFPROJECT>
   <GENERAL>
      <BANNER>Microsoft Visual Studio Distributed Application 
         Template File</BANNER>
      <VERSION>1.00</VERSION>
      <REFERENCES>
         <REFERENCE>
            <FILE>BusinessFacadeProjects\BusinessFacadeProjects.etp</FILE>
         </REFERENCE>
         <REFERENCE>
            <FILE>BusinessRulesProjects\BusinessRulesProjects.etp</FILE>
         </REFERENCE>
         <REFERENCE>
            <FILE>WebServiceProjects\WebServiceProjects.etp</FILE>
         </REFERENCE>
         <REFERENCE>
            <FILE>WebUIProjects\WebUIProjects.etp</FILE>
         </REFERENCE>
         <REFERENCE>
            <FILE>DataAccessProjects\DataAccessProjects.etp</FILE>
         </REFERENCE>
         <REFERENCE>
            <FILE>WinUIProjects\WinUIProjects.etp</FILE>
         </REFERENCE>
         <REFERENCE>
            <FILE>SystemFrameworksProjects\SystemFrameworksProjects.etp</FILE>
         </REFERENCE>
      </REFERENCES>
      <VIEWS>
         <PROJECTEXPLORER>
            <FILE>BusinessFacadeProjects\BusinessFacadeProjects.etp</FILE>
            <FILE>BusinessRulesProjects\BusinessRulesProjects.etp</FILE>
            <FILE>DataAccessProjects\DataAccessProjects.etp</FILE>
            <FILE>WebUIProjects\WebUIProjects.etp</FILE>
            <FILE>WinUIProjects\WinUIProjects.etp</FILE>
            <FILE>SystemFrameworksProjects\SystemFrameworksProjects.etp</FILE>
         </PROJECTEXPLORER>
      </VIEWS>
   </GENERAL>
   <GLOBALS>
      <GLOBALENTRY>
         <NAME>TDLFILE</NAME>
         <VALUE>DAP.tdl</VALUE>
      </GLOBALENTRY>
      <GLOBALENTRY>
         <NAME>TDLELEMENTTYPE</NAME>
         <VALUE>DistributedApplicationFull</VALUE>
      </GLOBALENTRY>
   </GLOBALS>
</EFPROJECT>

The project architect describes the project implementation as a collection of projects, modules, classes, and files. This is an important step in distributed computing where there are many layers such as data access, business logic, and many different presentation layers to be implemented by many developers in a cooperative environment. The result of this step is an enterprise template in which decisions for project types are made for the developers before they even open the project. This is a great help and reduces workload and planning time for developers.

What about the enforcement and discipline among developers who implement a distributed computing project in a cooperative environment, where each has a different background and different preferences? The template only provides the programmer with a skeleton of empty project files. There is nothing to prevent the programmer from using, say, Visual Basic, even though the decision has been made for all classes to be written in C#.

"The discipline language"

Policy files can be created that enforce corporate development policies in a way that is tightly integrated into Visual Studio. Policy files are written and stored in an XML vocabulary called Template Design Language (TDL). An excerpt of a TDL file is shown below.

...
<ELEMENT>
   <ID>projEnterpriseTemplateProject</ID>
   <PROTOTYPES>
      <PROTOTYPE>[EF]\Projects\Enterprise Templates Project\Enterprise 
         Templates Project.etp</PROTOTYPE>
   </PROTOTYPES>
</ELEMENT>
<!-- CSharp Language Projects -->
<ELEMENT>
   <ID>projCSharpProject</ID>
   <IDENTIFIERS>
      <IDENTIFIER>
         <TYPE>PROJECT</TYPE>
         <IDENTIFIERDATA>
            <NAME>FileExtension</NAME>
            <VALUE>.csproj</VALUE>
         </IDENTIFIERDATA>
      </IDENTIFIER>
   </IDENTIFIERS>
</ELEMENT>
<ELEMENT>
   <ID>projCSharpWinApp</ID>
   <PROTOTYPES>
      <PROTOTYPE>[VC#]\CSharpProjects\CSharpEXE.vsz</PROTOTYPE>
   </PROTOTYPES>
</ELEMENT>
<ELEMENT>
   <ID>projCSharpConsoleApp</ID>
   <PROTOTYPES>
      <PROTOTYPE>[VC#]\CSharpProjects\CSharpConsole.vsz</PROTOTYPE>
   </PROTOTYPES>
</ELEMENT>
<ELEMENT>
   <ID>projCSharpClassLibrary</ID>
   <PROTOTYPES>
      <PROTOTYPE>[VC#]\CSharpProjects\CSharpDLL.vsz</PROTOTYPE>
   </PROTOTYPES>
</ELEMENT>
...

TDL expresses policy rules for distributed applications. By using the policy file along with enterprise templates, the project architect can design and describe the outline of the project and then associate a policy with the project so that developers are constrained by the design and cannot add file types that fall outside the policy rules. The policy file brings discipline to the development environment. In fact, I think of TDL as "The Discipline Language."

With the policy file, the project architect can limit the projects that can be added to a template, disable toolbox or menu items in a template, assign default values to a property, or limit the range of allowed values for a property. For example, a policy file can require developers to use only certain code, classes, or components in the business logic. It can restrict forms in the user interface to a certain background color, or specify that developers use only OLE DB database controls to access the database. Many other design, technology, and business rules can be embedded in policy files.

The Visio documents, the project descriptions, and the enterprise templates and policies comprise the skeleton of the project. Now the developers' work will put meat, muscle, and skin on that skeleton to make it a viable system.

Developers

A developer, after understanding the nature of the project and his or her individual assignment, will open a new project with the enterprise template assigned and will start developing the solution in each layer.

The developers will code the business logic using programming languages that are provided by VSEA and enforced by the policy section of the TDL file. The classes, modules, and procedures are developed in order to accomplish the functionality as documented in the business workflow documents.

Data access is developed using the tools available in Visual Studio. Our developers will program the tables, stored procedures, and database transactions as required by the database design documents. The SQL Server Desktop Engine gives developers a local, SQL Server-compatible database engine to work against from within the Visual Studio development environment.

For reporting needs, Crystal Reports can be added and programmed.

Users and Operators

The last steps of the project are testing, deployment, and operation of the system. The roles for these steps are the users, who will work with the system on a daily basis, and the operators, who will monitor the system as it runs.

VSEA provides server components, Visual Studio Analyzer, tools for redistributing applications, and Application Center Test (ACT). By using ACT, developers can create stress tests for the user interfaces that simulate hundreds or thousands of simultaneous users. Based on the results of these tests, the speed and responsiveness of the Web applications can be measured and improved where necessary.

Summary

The most successful systems rely on careful analysis of requirements and on disciplined development and deployment. Consider the project architect's namesake in the building realm. An architect who designs buildings knows that moving a window is much easier when the design is in the blueprint stage than after the building has been built and occupied. The same goes for IT system design.

The project architect is a critical part of any modern system design. Microsoft Visual Studio .NET Enterprise Architect Edition has the tools the project architect needs to build distributed applications that work in today's modern IT environment.

FoodMovers Scenario

Now that you have been introduced to the project architect, let's take a look at a scenario that brings together everything I have been talking about. In this section, I will describe our fictional company, and in the next several sections in this project, I will build working code that brings together the technologies to solve the common problems of internal application-to-application and external business-to-business interactions.

Do You Know Where Those Beans Have Been?

When you buy a can of beans from your local Safeway and take it home for dinner, you have completed a chain of events that started in a bean field somewhere in the world and involved billions of dollars' worth of physical and IT infrastructure.

In the case of your bean purchase from Safeway, the chain of events is tightly controlled by the Safeway Corporation. Safeway, like many other large national chains, owns its own distribution network to take products from field to store shelf. Owning a distribution network is an expensive undertaking that can cut costs, but only for the largest chains.

Now let's say that you bought your beans from the local market, Garcia's Food and Lotto, run by Manuel Garcia and his family down on the corner. Mr. Garcia runs a pretty good store, with plenty of fresh fruit and meats and the canned goods necessary to prepare a great dinner. However, with only one store and limited funds, Señor Garcia can't afford to send drivers to several dozen manufacturers' warehouses every day to fill up the store with the goods that his customers want.

That's where FoodMovers Distribution Company comes in.

FoodMovers is a distributor of products for small and medium-size grocery retailers. FoodMovers is a typical distributor, in that it buys products from manufacturers (suppliers), and sells them to its retailer customers (stores). Between buying and selling, FoodMovers stores items in their warehouse. The basic configuration is shown in Figure 5.


Figure 5. FoodMovers Distribution Company sits between suppliers of products and retailers.

On the left side, various suppliers provide their goods to the distributor. This includes large canned goods and dry goods manufacturers like Campbell's Soup, Procter and Gamble, and Dole, as well as smaller, local suppliers of perishable goods, like produce suppliers and fresh meat cooperatives.

On the right side, FoodMovers' clients are small- to medium-sized grocery and convenience stores. These stores place their orders whenever they need products that FoodMovers distributes.

This system is as old as human trade itself and, while it places a party between supplier and buyer (the notorious "middleman"), the system works and provides benefits for all parties.

As a case study in distributed computing and the power of Visual Studio .NET Enterprise Architect tools and resources, you can't beat the challenges a distributor has. FoodMovers must deal with all sizes of customers, from huge multinational corporations to small stores like Garcia's. And it must do so quickly and efficiently because the profit margins are slim and the products perishable.

Players

In our scenario, I will be working with two suppliers and several grocery stores. Our suppliers are two fictional companies: Good Old Soup Company and Hearty Soup Company.

Good Old Soup Company

Good Old Soup has been making canned heaven for nearly a hundred years. They implemented an electronic data interchange (EDI) system two decades ago, when everyone else seemed to be doing it. This system made it possible for the company to grow in an environment in which large grocery chains were starting to integrate their systems. EDI provided a way for distributors and chains to order product electronically without too much human intervention. EDI worked pretty well at the time, because it was the only game around. It operates over a very expensive network called a value-added network (VAN). If FoodMovers wants to play with the big old companies, then it needs to deal with EDI. Fortunately, FoodMovers has, as one of its tools, Microsoft BizTalk® Server, which understands EDI and allows us to accept and process transactions in an EDI environment.

Hearty Soup Company

Hearty Soup Company is a newcomer to the consumer soup market. They make soups aimed at the health-conscious consumer. Because they are a new company, they do not have the burden of the EDI infrastructure that Good Old Soup has in place. They do, however, understand the advantage of working with electronic transactions over cheap networks.

Hearty has developed an XML-based Web services environment where they can access information from their business partners in real time. They can also place orders using this interface.

Stores

As I mentioned before, FoodMovers deals only with small and medium-size stores, because larger store chains have their own integrated distribution networks. These stores range in size from a single "corner store" to local chains with up to five stores.

Owners of the smaller stores are usually running on thin profit margins and cannot afford expensive interface equipment. Therefore, FoodMovers has a Web-based interface (WebUI) that allows stores to access the order entry system from any basic Web browser.

As an alternative to the Web-based system, FoodMovers allows stores to place orders using Pocket PC devices or send orders in batch using a Microsoft Excel spreadsheet.

Databases

Once the project architect knows the company's processes and the transactions related to those processes, he or she can start designing the database.

Inserting, updating, or deleting data in SQL Server is a transactional process. The application invoking the update needs to be aware of the results of this process in order to sustain database integrity. If the result is an error, then the group of updates that has yet to be committed must be rolled back. Since SQL Server supports transactions, the transactions around the inserts, updates, and deletions will be built into SQL Server stored procedures. This will ensure the integrity of the database tables.

All databases in our system will have transactions turned on. This means that the programs we write must check, after each stored procedure call, that the transaction has been committed.
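To make this concrete, here is a minimal sketch in C# of such a call, wrapping a stored procedure in an ADO.NET transaction and rolling back on failure. The connection string, stored procedure name (InsertSupplierOrder), and parameters are hypothetical placeholders, not the actual FoodMovers design.

using System.Data;
using System.Data.SqlClient;

class OrderData
{
   // Call a stored procedure inside a transaction; commit only if it succeeds.
   public static void InsertOrder(int orderId, string upc, int quantity)
   {
      using (SqlConnection conn = new SqlConnection(
         "server=(local);database=FoodMovers;Integrated Security=SSPI"))
      {
         conn.Open();
         SqlTransaction tx = conn.BeginTransaction();
         try
         {
            SqlCommand cmd = new SqlCommand("InsertSupplierOrder", conn, tx);
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@OrderID", SqlDbType.Int).Value = orderId;
            cmd.Parameters.Add("@UPC", SqlDbType.VarChar, 12).Value = upc;
            cmd.Parameters.Add("@Quantity", SqlDbType.Int).Value = quantity;
            cmd.ExecuteNonQuery();

            tx.Commit();   // Only commit if every call succeeded.
         }
         catch (SqlException)
         {
            tx.Rollback(); // Any error rolls back the uncommitted updates.
            throw;
         }
      }
   }
}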

The project architect will design the database tables, the data tiers, SQL operations, SQL stored procedures, and the data access strategies. The data must be secured from malicious access. To assure security and data integrity, there will be a single layer that communicates directly with the data. In the FoodMovers project, the .NET DataSet object will be used for access to the data sources. The DataSet object, along with its companion DataTable, comprises the data access layer.
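As an illustration of this single data access layer, the following sketch fills a DataSet through a SqlDataAdapter and hands the resulting DataTable back to the business layer. The connection string, table, and column names are assumptions made for the example only.

using System.Data;
using System.Data.SqlClient;

class ItemDataAccess
{
   // Return the items for one supplier as a DataTable inside a DataSet.
   public static DataTable GetItemsForSupplier(string supplierId)
   {
      string connString =
         "server=(local);database=FoodMovers;Integrated Security=SSPI";
      string query =
         "SELECT UPC, Description, UnitPrice FROM Items WHERE SupplierID = @SupplierID";

      using (SqlConnection conn = new SqlConnection(connString))
      {
         SqlDataAdapter adapter = new SqlDataAdapter(query, conn);
         adapter.SelectCommand.Parameters.Add(
            "@SupplierID", SqlDbType.VarChar, 10).Value = supplierId;

         DataSet ds = new DataSet("ItemData");
         adapter.Fill(ds, "Items");   // Fill opens and closes the connection.
         return ds.Tables["Items"];
      }
   }
}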

I will discuss the design of our databases, and the tools in VSEA that help with this process, in the next section.

Service-Oriented Architecture

Because of the nature of the FoodMovers project, the organization has determined that a service-oriented architecture is our best approach for implementation. A service-oriented architecture aggregates business logic and data classes into a logical business definition that is called a service.

Contrast this approach with a traditional process-oriented architecture, which uses processes to act on data. A process-oriented architecture requires that the system have intimate knowledge of the process details in order to interface with them. Data is accessed and manipulated by these processes, which can lead to corruption of the data and can compromise security.

A service-oriented architecture, on the other hand, exposes the processes and data together as a homogenized service, which can be utilized where necessary while keeping the data isolated, secure, and centrally managed.

The main advantages of a service-oriented architecture are:

  • A service encapsulates complexity at any level that makes sense.
  • A service can be upgraded or improved without breaking downstream processes.
  • A service can be exposed to internal services or applications as well as external services or applications.
  • Service-oriented architecture uses open standards, such as XML and HTTP so it can communicate with the outside world.
  • Services use a common interface for communication, making management and operation easier.

Using a service-oriented architecture solves many integration problems we will encounter in the development of the FoodMovers system.

The concept of service-oriented architectures has been around for several years. It seems that there is some minor disagreement in the IT community concerning exactly what service-oriented architecture means. Some architects believe that a service-oriented architecture is just as I have described above. Others believe that a true service-oriented architecture requires a larger view of the enterprise architecture. For these professionals, simply creating objects that expose services does not constitute a service-oriented architecture. Rather, a service-oriented architecture requires the encapsulation of entire business processes, regardless of the platform on which they operate. To the keepers of this larger view of service-oriented architectures, our FoodMovers system would be considered just an application that exposes data and business logic as services, and not a true service-oriented architecture.

Whatever your frame of reference is, I hope you can see that there is much similarity between these two views of service-oriented architectures. To us, the difference is mostly scope, and will appear different depending upon your viewpoint.

Our application, FoodMovers, is intentionally simple in comparison to a typical enterprise-scale application. I wanted to make a system that was realistic while making it accessible and able to be described relatively briefly. In real life, there would be many more existing applications, processes, and components that would be considered for integration into the system. Encapsulating existing systems and functionality is a place where service-oriented applications really shine.

For the remainder of this project, I will define a service-oriented architecture as an architecture that exposes data and business logic as services rather than processes or objects.

You'll learn about integration using service-oriented architecture in the third and the fourth sections of this project: Section 3, Developing Service-Oriented Architectures and Section 4, Legacy and Business Partner Integration: Using Service-Oriented Architecture for Integration.

Processes

There are seven distinct business processes that we will be developing in our system. The graphic in Figure 6 shows all processes. In this section, I will describe them briefly, and explain the interfaces required. In later sections, I will explain each process in depth and build a working system.


Figure 6. FoodMovers uses four different internal subsystems (managers) and interfaces to internal and external systems.

Each process flow has been described by the project architect using Microsoft Visio. Visio provides a nice interface for creating these flows, as well as providing a collaborative editing and maintenance environment so that other parties can be involved in the design as it progresses.

Each process is described below; these descriptions will be used by the developer when it comes time to code.

1. Item maintenance

FoodMovers uses a mainframe computer running an inventory tracking application. The reason this is on a mainframe is mostly a matter of past decisions and available software. The grocery industry runs on mainframes. In fact, one of the first large-scale applications for computers in general was designed to track the Universal Product Code (UPC). The UPC system was established by consumer goods manufacturers in the 1970's. This is the ubiquitous barcode that the checkout clerk scans before you pay for groceries. These codes are used everywhere in the grocery environment, from the obvious checkout process to feedback to manufacturers provided by data analysis providers. The Uniform Code Council (UCC) assigns manufacturers blocks of UPC numbers. It is up to each manufacturer to assign a UPC to each of their products, and provide information about that item to the appropriate parties. The UPC is a unique number and, as such, makes a perfect database key to track a particular item.

Software was developed by IBM and other integrators to track items that have a UPC. Since there are potentially hundreds of millions of UPC numbers, the databases that contain all item information are huge. At the time this software was written, the only computers were mainframes. And still, to this day, most software that tracks UPC item information runs on mainframes of some shape or size. The next time you buy groceries, notice the terminal the cashier is using. It probably says "IBM" on it. Somewhere in the store, a mainframe (which probably sits under a desk now) connects all terminals together with a service that updates item information and uploads sales data to aggregators.

Since each manufacturer is responsible for maintaining its block of UPC numbers, it is the responsibility of the manufacturer to inform FoodMovers about any new items it has, or any items that have been discontinued or changed.

FoodMovers has two interfaces that allow this. First, there is an EDI interface: Good Old Soup Company sends FoodMovers a list of "adds," "deletes," and "updates" in the form of an EDI 888 document. Second, FoodMovers has a Web service that accepts maintenance records as an XML document.
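As a rough sketch, an ASMX entry point for the XML side of this interface might look like the following C# Web service. The service name, XML namespace, and document structure are hypothetical; the real Update Manager interface is designed in later sections.

using System.Web.Services;
using System.Xml;

[WebService(Namespace = "http://foodmovers.example/UpdateManager")]
public class UpdateManager : WebService
{
   [WebMethod(Description = "Accepts adds, deletes, and updates for items.")]
   public string SubmitItemMaintenance(XmlNode maintenanceDocument)
   {
      // In the real system, the document is handed to BizTalk Server for
      // mapping and delivery to the mainframe. Here we simply acknowledge it.
      int recordCount = maintenanceDocument.SelectNodes("//Item").Count;
      return "Received " + recordCount + " item maintenance records";
   }
}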

The process is illustrated in Figure 7.

(Functional diagram and process flow graphics)

Figure 7. The Item Maintenance process accepts maintenance information in two different forms in order to update the mainframe database.

Both types of documents are processed the same way internally. The Update Manager is a Web service that accepts item maintenance documents, processes them according to what type they are, then passes them to the mainframe.

Dealing with our mainframe is a difficult task for many reasons. First, the mainframe has specific ways of dealing with data, and it is difficult to change its behavior. The programs on the mainframe have been running for years, and making changes to them involves bringing parts of the system offline while testing and deployment are being done. Plus, all of our COBOL programmers are retired and living in Florida on their Y2K consulting income.

We are using modern programming techniques with data that is possibly quite foreign to the mainframe computer. We will be using XML and other modern standards, while receiving the data over the Internet, possibly encrypted and packaged in a way to make it easier to transport.

We need to create an interface into the mainframe such that the data that comes in can be massaged to make it compatible with what the mainframe expects. This interface must also understand the way the mainframe communicates with the world, which is usually through an arcane set of network protocols. We need to do all of this while convincing the mainframe that the data is coming in the way it has always come in.

Microsoft BizTalk Server was designed for just this task. BizTalk Server has adapters that allow it to connect on the physical as well as the communications protocol layers. BizTalk Server has a built-in workflow execution engine called BizTalk Orchestration that provides a way for it to anticipate and adapt to the responses from the mainframe. Finally, BizTalk Server provides a rich interface to allow us to connect using modern tools, such as COM and .NET Framework.

The Item Maintenance process occurs between external partners and FoodMovers applications. Since it is a document exchange over the public HTTP network infrastructure, the document must be secured. It is sent from the supplier as an encrypted file, with the public key reference attached to the header.

The Update Manager, after it receives the document, sends it to BizTalk Server, which decrypts the document using the key associated with the supplier, determines what kind of file it is, either EDI or XML, and routes it to the appropriate flow. BizTalk Server has a mapping function that allows it to map any format to any other format using XSLT, Extensible Stylesheet Language Transformations. By using the BizTalk Mapper, we will create an XSLT map that transforms EDI to the flat file format required by the mainframe. We will create a second map that converts the XML document received into the same flat file format.

Once the document is transformed, BizTalk Server sends the flat file to the mainframe and invokes its program that loads the file into its live database.

However, does everything always go this smoothly? How can we be sure that the item maintenance document is committed to the mainframe? Neither BizTalk Server nor the Update Manager has a connection with the mainframe throughout the update process. The mainframe probably processes documents in batch once a day. If a problem occurs in the mainframe application, then how will this problem be carried back to the Update Manager? All of these questions can be addressed if we implement the concept of a long-running transaction.

After BizTalk Server sends the flat file to the mainframe, it waits for the confirmation document. This can take up to a day or two. This is defined as a "long-running transaction."

As BizTalk Server gets the confirmation file, it translates it to an XML vocabulary that the applications can understand. This confirmation contains information about the status of the mainframe update. Any errors are listed on an item-by-item basis. This confirmation is sent back to the supplier, who must deal with it.

2. Synchronize item database

Now that the updates have been applied to the mainframe database, it is consistent and has integrity. The advantage of a mainframe is that the software to manage item information is reliable and available. The disadvantage of the mainframe is that it is difficult to program and even more difficult to find programmers. The art of mainframe programming has been largely lost due to the advent of more elegant languages with better programmer tools and cheaper hardware.

We want to get the benefit of the mainframe in its ability to manage the item information changes, but we also want to be able to develop on more mainstream platforms, using tools that are cheaper and scalable. In other words, we want to use Visual Studio and SQL Server.

With the item information locked inside the mainframe-managed tables, it would be difficult to write programs on smaller machines. Every time we wanted to get a piece of data, we would need to connect to the mainframe, using its arcane language, and compete with other applications for the data.

So we want to put our data in tables managed by SQL Server. This will give us the real-time performance we need, without relying on the mainframe interface. The problem with this approach is that if we are managing two different databases simultaneously, it is almost impossible to keep them in sync. Our data is already coming into the mainframes, and we want to keep it that way. How can we get the data to our SQL Server tables?

First, we need to employ a philosophy that the mainframe inventory database is the canonical source. It is managed by the processes that add, delete, and update item information as it comes from the manufacturer. At any time, we can go to that database and get the truth about a particular item. Any other sources of data are not as reliable, and should never be written to.

Second, we need to perform a daily synchronization task. Every night, after all item maintenance has been received from the suppliers, a process is run that copies the entire contents of the mainframe item database to a "staging" database managed by SQL Server. This staging database is a read-only copy of the real item database on the mainframe, but it provides the rest of the system with the data it needs to run the daily business and to support synchronous access, such as displaying the available items in a Web page for order entry.
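A minimal sketch of such a nightly job, assuming the mainframe extract arrives as a comma-delimited flat file, is shown below. The file layout, staging table name, and connection string are illustrative assumptions; because the staging copy is read-only to the rest of the system, a simple truncate-and-reload is enough.

using System.Data;
using System.Data.SqlClient;
using System.IO;

class ItemSynchronizer
{
   // Replace the contents of the ItemStaging table with the nightly extract.
   public static void LoadStagingTable(string extractFile)
   {
      using (SqlConnection conn = new SqlConnection(
         "server=(local);database=FoodMovers;Integrated Security=SSPI"))
      {
         conn.Open();
         new SqlCommand("TRUNCATE TABLE ItemStaging", conn).ExecuteNonQuery();

         SqlCommand insert = new SqlCommand(
            "INSERT INTO ItemStaging (UPC, Description, UnitPrice) " +
            "VALUES (@UPC, @Description, @UnitPrice)", conn);
         insert.Parameters.Add("@UPC", SqlDbType.VarChar, 12);
         insert.Parameters.Add("@Description", SqlDbType.VarChar, 100);
         insert.Parameters.Add("@UnitPrice", SqlDbType.Money);

         using (StreamReader reader = new StreamReader(extractFile))
         {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
               string[] fields = line.Split(',');
               insert.Parameters["@UPC"].Value = fields[0];
               insert.Parameters["@Description"].Value = fields[1];
               insert.Parameters["@UnitPrice"].Value = decimal.Parse(fields[2]);
               insert.ExecuteNonQuery();
            }
         }
      }
   }
}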

This process is illustrated in Figure 8.

(Functional diagram and process flow graphics)

Figure 8. Every night, the contents of the mainframe item database are copied to the SQL Server item database.

It is important to note, from a development standpoint, that the SQL Server-managed data should never be updated. Also, if there is ever a conflict between the items in the SQL Server-managed database and the mainframe-managed database, the mainframe will always win.

I realize that this is not an ideal situation. Ideally, the data and processes that act on them would be managed in the same environment. That means that we would need to do all of our development on the mainframe or re-write the item maintenance functions in SQL Server. Eventually, we may get rid of the mainframe and port everything over to SQL Server, but for now we have BizTalk Server to let us live with the mainframe.

3. Enter Supplier Order

FoodMovers has a staff of buyers who are tasked with assuring that there is enough of each product on the shelves so it will be available when the stores place their orders. Too much inventory of a particular item means money needlessly tied up in non-productive property; it also consumes valuable space and heat, and subjects FoodMovers to inventory tax. Too little of a particular item means lost sales opportunities resulting in lower profits. Getting the inventory just right is one of the most important jobs at FoodMovers.

The subsystem that manages all orders is called the Order Manager. The Order Manager connects to the SupplierOrders database, which is controlled by SQL Server. This is illustrated in Figure 9.

(Functional diagram and process flow graphics)

Figure 9. The Order Manager system enters orders and maintains data integrity of orders for suppliers.

The buyers at FoodMovers have a Windows-based front-end (WinUI) to the Order Manager. This interface provides initial user validation of customer and item data, then sends the order to the Order Manager system. For each item ordered, the Order Manager verifies that the item information is correct, and that there is enough in inventory to fill the request.
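The item check itself can be quite simple. The following sketch verifies that a UPC exists in the staging item table before the order line is accepted; the table and column names are assumptions made for this example.

using System.Data;
using System.Data.SqlClient;

class OrderValidation
{
   // Return true if the UPC is a known item in the staging item table.
   public static bool IsKnownItem(SqlConnection conn, string upc)
   {
      SqlCommand cmd = new SqlCommand(
         "SELECT COUNT(*) FROM ItemStaging WHERE UPC = @UPC", conn);
      cmd.Parameters.Add("@UPC", SqlDbType.VarChar, 12).Value = upc;
      return (int)cmd.ExecuteScalar() > 0;
   }
}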

Once an order has been verified, it is sent to the vendor. In the case of Good Old Soup Company, a buyer will get on the phone and place an order by talking to another human.

Hearty Soup Company is trying to break into the consumer market with its health-conscious line of canned soups. In order to be successful, Hearty must forge deep lasting relationships with its customers, the distributors.

As part of its close relationship with FoodMovers, Hearty Soup Company has suggested that it be responsible for assuring that there is a sufficient quantity of its products on the shelves at FoodMovers so that FoodMovers can sell them. FoodMovers likes this idea, because it eases the burden on its buyers. Of course, this situation is advantageous to Hearty, because it allows them to sell directly into their supply chain without being at the mercy of overworked buyers. This type of partnership involves a lot of trust, but is becoming more common as the world's businesses learn the advantages of a new electronic economy.

To this end, FoodMovers built an external interface to the Order Manager system that allows Hearty to place orders in the system when FoodMovers' stock gets low, based on a query of the inventory database (described in the next process). This is exposed as an external XML Web service.

In addition, Hearty Soup Company has created an XML Web service that allows its customers to place an order directly over the Web. The Order Manager knows which customers have this capability, and makes the appropriate transactions.
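A call to such a partner service would normally go through a proxy class generated with Add Web Reference (or wsdl.exe). The hand-written stand-in below shows the shape of that proxy; the URL, SOAP action, and the PlaceOrder operation are hypothetical names, since Hearty's real contract is defined later in the project.

using System.Web.Services.Protocols;

// Stand-in for the proxy that Add Web Reference would generate from
// Hearty's WSDL.
public class HeartyOrderService : SoapHttpClientProtocol
{
   public HeartyOrderService()
   {
      this.Url = "https://orders.heartysoup.example/OrderService.asmx";
   }

   [SoapDocumentMethod("https://orders.heartysoup.example/PlaceOrder")]
   public string PlaceOrder(string poNumber, string upc, int quantity)
   {
      // The base class builds the SOAP envelope, posts it, and parses the reply.
      object[] results = this.Invoke("PlaceOrder",
         new object[] { poNumber, upc, quantity });
      return (string)results[0];
   }
}

The Order Manager would then create a HeartyOrderService, call PlaceOrder for each low-stock item, and record the confirmation it receives.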

Because this XML Web service is exposed on the Internet, it is physically accessible to anyone. Since we only want our trusted partners to have access, security mechanisms need to be implemented to authenticate users and prevent fraud.

Finally, the Order Manager adds records to the SupplierOrders database, which will be used later when the shipment arrives at the FoodMovers warehouse. Updating the database is a transactional process that maintains integrity across the related tables: either all of the order's rows are written or none of them is.

4. Query Inventory Database

Now that its suppliers have the ability to place orders into FoodMovers' system, how do these suppliers know what inventory is needed? Working with Hearty, FoodMovers created a Web service interface that exposes inventory level information to its trusted partners.

Hearty queries this Web service from time to time and notes when inventory levels fall below the threshold the two companies have agreed upon for each product. This is illustrated in Figure 10.

(Two panels: Functional Diagram and Process Flow)

Figure 10. An external Web service is exposed so suppliers have the ability to interrogate inventory levels directly.

The request takes the form of an XML Web service transaction. In this transaction, a request document is sent to the Inventory Manager. This document contains requests for one or more items, keyed by the UPC number. The Inventory Manager connects to the SQL Suppliers database to verify that the vendor is valid and is cleared to make such requests.

Once it verifies the vendor, the Inventory Manager connects to the Items database and looks up each item in the requested list. As it does so, it builds a return list of quantities and errors, and then sends the list back as a Web service response.

This Web service is considered to be "synchronous." That is, once the request is made, the requesting system will wait until the responding system gives back something, either the information requested or some error response.
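To make this more concrete, here is a minimal sketch of what such a synchronous service interface could look like as an ASP.NET XML Web service. The class name, connection string, and Items table columns are illustrative assumptions, not the project's actual code; the real interfaces are designed in the sections that follow.

using System;
using System.Data;
using System.Data.SqlClient;
using System.Web.Services;

// Hypothetical sketch of the synchronous inventory-query interface.
// Class, column, and connection-string details are assumptions.
public class InventoryService : WebService
{
    [WebMethod]
    public DataSet GetInventoryLevels(string supplierId, string[] upcNumbers)
    {
        // The real service would first verify the supplier against the
        // Suppliers database; that step is omitted here for brevity.
        DataSet results = new DataSet("InventoryLevels");

        using (SqlConnection conn = new SqlConnection(
            "server=(local);database=FoodMovers;Integrated Security=SSPI"))
        {
            foreach (string upc in upcNumbers)
            {
                SqlCommand cmd = new SqlCommand(
                    "SELECT UPC, Description, QuantityOnHand FROM Items WHERE UPC = @UPC",
                    conn);
                cmd.Parameters.Add("@UPC", SqlDbType.VarChar, 12).Value = upc;

                // Fill appends the matching row (or nothing) to the Item table.
                SqlDataAdapter adapter = new SqlDataAdapter(cmd);
                adapter.Fill(results, "Item");
            }
        }
        return results;   // serialized as XML inside the SOAP response
    }
}

The caller blocks on the SOAP request until this DataSet (or a SOAP fault) comes back, which is exactly the synchronous behavior described above.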

5. Receive Supplier Orders

After an order has been placed to a supplier—either by telephone or fax, or through the Web service ordering interface of the supplier—the supplier will deliver the goods to FoodMovers' warehouse. During the day, many shipments are received by warehouse personnel, who are equipped with portable devices that have Pocket PC software and a barcode scanner. As each item is received, it is scanned with the device, which sends a request via a wireless interface that feeds it to the Warehouse Manager. This is illustrated in Figure 11.

(Two panels: Functional Diagram and Process Flow)

Figure 11. The Warehouse Manager integrates the warehouse receiving functions from a wireless device or a Windows terminal.

In addition to the wireless portable interface, there is a manual interface available, which takes the form of a Windows application (WinUI). It also feeds data into the Warehouse Manager. The Windows application allows for multiple items to be entered in a single screen.

Both of these interfaces need to be secured through a password-protection mechanism, either a Windows logon or a browser-based logon.

When an item is received, whether through the compact interface or the Windows client, it is sent to the Warehouse Manager, which performs a sanity check on the data, assuring that each item number is correct. The Warehouse Manager then passes the information to the Inventory Manager, which processes each item by moving it from the pending orders area of the SupplierOrders database into the live inventory area of the Inventory database. At this point, the item is considered to be in active inventory, awaiting shipment to stores.

The implementation that updates the SupplierOrders database and Inventory database needs to use database transactions to ensure data integrity.

Suppliers will invoice FoodMovers after the products are shipped. In the case of Hearty Soup Co., the invoicing and payment will probably happen electronically. This will be an Electronic Funds Transfer (EFT) that will involve a third party, the bank. As a result, the document that orders and confirms the payment will route from FoodMovers to the Hearty Soup Co.'s bank and then to the Hearty Soup Co. This is called Routing. The invoicing and payment is not in our scenario yet.

6. Enter Store Order

When stores need products carried by FoodMovers, they have several different ways to place an order. The Order Manager is the system that manages orders from any source and updates the appropriate databases.

Smaller stores that have a computer can log onto FoodMovers' Web site to place their orders one item at a time. In a typical scenario, a store owner would keep track during the day of what items need to be restocked. At the end of the day, the owner would log onto the interactive ordering site and place his order.

This is a convenient way to enter items, and is much more efficient than ordering by phone or fax. The Web site is optimized for smaller stores and provides an interface that is efficient over dial-up connections.

If a store wants to upgrade from this largely manual method, it can buy a wireless device with a barcode scanner running Pocket PC software. This is essentially the same device the warehouse personnel use to receive incoming orders, but has different software. In a typical scenario, a grocer would simply scan or enter the UPC number of the product and enter the quantity desired. The device would send a message directly to the compact interface at FoodMovers through a cellular data connection built into the device. Instant feedback is given, and the grocer knows that the order has been received. This makes inventory maintenance at the grocery store much easier and faster.

Medium-size store chains probably already have some kind of computer infrastructure that tracks sales throughout the day and keeps track of inventory levels for each of its products. This type of system usually provides a list of items that it needs to order every night. Since this list is already available, it would be redundant for someone to enter the data onto a Web site or use a scanner. For these stores, the FoodMovers order interface allows the upload of a Microsoft Excel spreadsheet containing the order information. FoodMovers IT personnel would work with these chains to teach them how to create a compatible spreadsheet and upload it over their Web interface.
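One plausible way for the order interface to read such an uploaded spreadsheet on the server is the Jet OLE DB provider, sketched below. The file path, sheet name, and column names are assumptions about the agreed spreadsheet format rather than FoodMovers specifics.

using System;
using System.Data;
using System.Data.OleDb;

// Reads an uploaded order spreadsheet through the Jet OLE DB provider.
// The sheet name ("Orders") and columns (UPC, Quantity) are assumptions
// about the format FoodMovers IT personnel agree upon with the chain.
class ExcelOrderReader
{
    static void Main()
    {
        string file = @"C:\Uploads\store1001-order.xls";
        string connString =
            "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + file +
            ";Extended Properties=\"Excel 8.0;HDR=Yes\"";

        DataTable orderLines = new DataTable("OrderLines");
        using (OleDbConnection conn = new OleDbConnection(connString))
        {
            OleDbDataAdapter adapter = new OleDbDataAdapter(
                "SELECT UPC, Quantity FROM [Orders$]", conn);
            adapter.Fill(orderLines);
        }

        // Each row becomes one order line sent to the Order Manager.
        foreach (DataRow row in orderLines.Rows)
            Console.WriteLine("{0} x {1}", row["Quantity"], row["UPC"]);
    }
}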

User logins in the interfaces must be password protected for security reasons. User logins also help to customize the interface for particular store information. For example, the site can be optimized to show Mexican food and products for Garcia Food and Lotto after the login and present the new items for that vertical market. It can also be used to create profiles of frequently-ordered foods from each store.

The process is illustrated in Figure 12.

(Two panels: Functional Diagram and Process Flow)

Figure 12. The Order Manager takes care of orders from stores from a number of different input types.

Regardless of how the order information is generated, it is sent to the Order Manager. The Order Manager first checks the store's credit. Sending merchandise to a customer that is late on its payments is not good business.

Once the credit has been assured, the Order Manager checks each item against the inventory database to make sure it is in stock. If not, an appropriate error is generated. If the item is in stock, an order is entered for the item to be delivered when it is requested.

The database operation involves an "insert" command. It will be implemented with a stored procedure that inserts the order information into the StoreOrders database.
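As a rough illustration, the call from the data access code might look like the sketch below. The stored procedure name and its parameters are hypothetical; the actual database design appears in Section 2.

using System;
using System.Data;
using System.Data.SqlClient;

// Sketch of inserting one store order line through a stored procedure.
// InsertStoreOrderLine and its parameters are invented for illustration.
class StoreOrderInsert
{
    static void InsertLine(string storeId, string upc, int quantity)
    {
        using (SqlConnection conn = new SqlConnection(
            "server=(local);database=FoodMovers;Integrated Security=SSPI"))
        {
            SqlCommand cmd = new SqlCommand("InsertStoreOrderLine", conn);
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@StoreID", SqlDbType.VarChar, 10).Value = storeId;
            cmd.Parameters.Add("@UPC", SqlDbType.VarChar, 12).Value = upc;
            cmd.Parameters.Add("@Quantity", SqlDbType.Int).Value = quantity;

            conn.Open();
            cmd.ExecuteNonQuery();   // the procedure performs the actual INSERT
        }
    }
}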

7. Ship to Stores

Each morning, FoodMovers warehouse personnel must load their trucks for delivery to the stores. The Warehouse Manager system has an interface that prints a shipping manifest for all store orders due that day. This system is illustrated in Figure 13.

(Two panels: Functional Diagram and Process Flow)

Figure 13. The Warehouse Manager provides a function for printing a set of shipping instructions for each truck.

In the morning, all orders from stores have been made through the Order Manager, and the StoreOrders database contains all orders that need to be shipped that day. The Warehouse Manager Web service requests from the Inventory Manager Web service a list of all orders that are ready to ship. The Inventory Manager prepares the list of items, sorted by store. For each item shipped, the Inventory Manager marks the item as shipped in the StoreOrders database and subtracts the quantity from the Inventory database.

This implementation requires database transactions. Stored procedures will be implemented to update the tables.

The Inventory Manager sends the list back to the Warehouse Manager, which prints the report, sorted by which truck each order should go on.

In our scenario, this process is simplified. We are assuming that the item is shipped as soon as the report is printed. In a real-life situation, there would be a scanner to assure that the item got on the truck, then another scanner that tracked the item as it was delivered to the store. Only after it was in the store owners' hands would the item be marked as shipped. Our approximation of this process is good enough to show the data and process flows required.

Design Concepts

In creating a system as described above, there are certain design concepts that the project architect needs to understand in order to accomplish his or her goal. The architecture and the design of the application will form the skeleton of the project that will be handed to the developer teams for implementation.

In the design, the project architect will identify the components, the communication layers, security aspects, operational management, communications restrictions, allowances, publishing policies, and exception management.

The project architect will design the application layering, the communication between these layers, the data formats and protocols, and the interactions between the internal and external applications.

Transactions and Transaction Concepts

You are probably familiar with the concept of transactions in database implementation. However, in a service-oriented architecture, we have a new concept, a process called a "long-running transaction." Let's review a simple database transaction first.

A transaction can be defined as a series of actions that, combined, accomplish some task. A transaction defines a single unit of work although it can be made up of many different actions.

A database transaction works as "all or nothing"; if a transaction is successful, all of the data modifications done in the actions that define a transaction are committed. If an error occurs, then data modifications that have been done by the actions that define a transaction are undone.

For example, when I go to an ATM to get $40, many actions take place. Once the system authenticates me with my super-secret code, the PIN, I have access to my account and select "withdraw $40." The ATM application then checks to see if there is enough money in the account. Then it debits $40 from the account and credits the machine, causing cash to come out of the slot. This transaction is in two parts: 1) debit the account, 2) credit the machine. For a short while between those two actions, the money is in limbo.

What happens if a communications error prevents the credit signal from getting to the machine once the account is debited? In this case, the transaction is "rolled back," which restores the database tables to the state they were in before the transaction started. In other words, the money gets put back in your account. Whether the transaction will be retried or an error raised depends on the exception management policy of the application or service.

If all parts of the transaction are completed successfully, the transaction is "committed," which causes the database tables to be updated with the appropriate values simultaneously.

This is called an "ACID" transaction.

Short-lived (ACID) transactions

An ACID transaction has 4 qualities that give it its name:

  • Atomicity: A transaction must be an atomic unit of work. Either all of its data modifications are performed or none of them is performed.
  • Consistency: When completed, a transaction must leave all data in a consistent state. In a relational database, all rules must be applied to the transaction's modifications to maintain all data integrity. All internal data structures, such as B-tree indexes or doubly linked lists, must be correct at the end of the transaction.
  • Isolation: Modifications made by concurrent transactions must be isolated from the modifications made by any other concurrent transactions. A transaction either sees data in the state it was in before another concurrent transaction modified it, or it sees the data after the second transaction has completed, but it does not see an intermediate state.
  • Durability: After a transaction has completed, its effects are permanently in place in the system. The modifications persist even in the event of a system failure.

SQL Server deals with ACID transactions. Exception handling is done in the data access layer.
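Here is a minimal ADO.NET sketch of the ATM example as an ACID transaction, with both updates committed or rolled back as a unit. The table and column names are invented for illustration only.

using System;
using System.Data;
using System.Data.SqlClient;

// ACID transaction sketch based on the ATM example: debit the account
// and credit the machine as one unit of work. Tables and columns are
// invented for illustration.
class AtmWithdrawal
{
    static void Withdraw(string accountId, string machineId, decimal amount)
    {
        using (SqlConnection conn = new SqlConnection(
            "server=(local);database=Bank;Integrated Security=SSPI"))
        {
            conn.Open();
            SqlTransaction tx = conn.BeginTransaction();
            try
            {
                SqlCommand debit = new SqlCommand(
                    "UPDATE Accounts SET Balance = Balance - @Amount WHERE AccountID = @Acct",
                    conn, tx);
                debit.Parameters.Add("@Amount", SqlDbType.Money).Value = amount;
                debit.Parameters.Add("@Acct", SqlDbType.VarChar, 20).Value = accountId;
                debit.ExecuteNonQuery();

                SqlCommand credit = new SqlCommand(
                    "UPDATE Machines SET CashIssued = CashIssued + @Amount WHERE MachineID = @Machine",
                    conn, tx);
                credit.Parameters.Add("@Amount", SqlDbType.Money).Value = amount;
                credit.Parameters.Add("@Machine", SqlDbType.VarChar, 20).Value = machineId;
                credit.ExecuteNonQuery();

                tx.Commit();      // both updates become permanent together
            }
            catch (Exception)
            {
                tx.Rollback();    // neither update is applied; the money stays put
                throw;
            }
        }
    }
}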

Long-running transactions

ACID transactions usually take place over milliseconds. Keeping a transaction open requires resources. The longer a transaction is kept open, the greater the chance that other transactions will be open at the same time, consuming more resources.

In some cases, it might be desirable to keep a transaction open for longer than a few milliseconds. In order to conserve resources, we will use the concept of a long-running transaction.

A long-running transaction may consist of numerous consecutive ACID transactions, and can take minutes, hours, weeks or even months to complete. You don't want to tie up your computer resources waiting on such a transaction to complete.

Consider, for example, what happens when Hearty Soup Company sends the Item Maintenance document to FoodMovers: it doesn't get a response back right away. This is an asynchronous process, because the mainframe processes all updates in batch at the end of the day and cannot send a receipt back to Hearty Soup Company's service immediately.

After the mainframe update process is complete, the Update Manager sends a confirmation to the Hearty Soup Company. These two services, Hearty Soup Company's Item Maintenance Service and FoodMovers' Update Manager Service, communicate asynchronously to assure that the item data at Hearty Soup Company stays consistent with the data at FoodMovers. The two messages, which may be separated by a day or a weekend, constitute a single transaction.

Since the steps of a long-running transaction are committed as they complete rather than waiting for one another, the database cannot simply be rolled back when an error happens, as it can with an ACID transaction. So what happens if something goes wrong and work needs to be undone? The solution is to use compensating transactions: actions and processes that explicitly reverse the work already done. These compensating steps need to be included in our workflows and applications.
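The sketch below only illustrates the shape of a compensating step; every type and method name in it is invented, and the real workflow is defined in the BizTalk orchestration described in later sections.

using System;

// Illustration only: compensation for a step in a long-running transaction.
// ItemUpdateResult and all method names are invented for this sketch.
class ItemUpdateResult
{
    public bool   Succeeded;
    public string DocumentId;
    public string ItemUpc;
    public string Reason;
}

class ItemMaintenanceWorkflow
{
    public void OnMainframeBatchResult(ItemUpdateResult result)
    {
        if (result.Succeeded)
        {
            SendConfirmationToSupplier(result.DocumentId);
            return;
        }

        // The earlier steps were committed long ago and cannot be rolled
        // back, so we compensate: notify the supplier and explicitly
        // reverse our local changes.
        SendRejectionToSupplier(result.DocumentId, result.Reason);
        RestorePreviousItemVersion(result.ItemUpc);
    }

    void SendConfirmationToSupplier(string documentId) { /* send asynchronous message */ }
    void SendRejectionToSupplier(string documentId, string reason) { /* send asynchronous message */ }
    void RestorePreviousItemVersion(string upc) { /* undo the local item update */ }
}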

FoodMovers Project Layers

The FoodMovers project is composed of five distinct layers.

  • First, Users need to access the functionality of the system. A user can be a human or some process that needs to make use of the services in the system.
  • The Presentation Layer provides a view of the system from the user's standpoint. It communicates with the business logic and processes the output for some user interface.
  • The Business Logic Layer contains the business logic. It accesses the data and applies the business rules required to complete a transaction.
  • The Data Access Layer provides a single point of access to the data tables. It comprises data access classes and data structure classes.
  • Finally, Data Sources contain the data that is accessed through the Data Access Layer.

Services are exposed at the business layer, which provides access to the system either to users (through the presentation layer) or directly to internal and external applications.

The architecture is shown in Figure 14.

Aa302162.foodmovers1_14(en-us,MSDN.10).gif

Figure 14. FoodMovers project layers

Let's discuss each of these layers and the components that make up each one. These components will be developed in the development phase, following the architecture and design decisions the project architect has made.

FoodMovers project layer design

A more detailed project layer design will lead us to identifying the components of the FoodMovers project. The components include applications, classes, assemblies, and user-interface programs.

The FoodMovers project components are sorted into layers as shown in Figure 15.

Aa302162.foodmovers1_15(en-us,MSDN.10).gif

Figure 15. FoodMovers project layer design

Each of these layers is described below.

User interface (UI) components and UI process components

FoodMovers has external users (stores and suppliers) as well as internal users (employees and contractor buyers). FoodMovers business processes require capturing data from internal and external users and returning usable data in human- and machine-readable formats.

User interface components include:

  • FoodMovers buyers use the Order Interface Supplier Orders application to enter supplier orders for selected items. FoodMovers sales personnel use the Order Interface Store Orders application to place orders for stores. The Order Interface is a Windows application.
  • FoodMovers warehouse personnel use the Warehouse Interface Receive Orders application to receive items into inventory as they are delivered to the FoodMovers warehouse. An alternative is the Compact Interface, which allows warehouse personnel to scan the barcodes of incoming items using the Pocket PC. Data gathered from either application runs through the same business logic route before being committed to the FoodMovers warehouse and then to its inventory. Warehouse personnel also use the Warehouse Interface to print a shipping manifest for orders to be loaded on trucks each morning.
  • Store personnel use different interfaces. FoodMovers provides stores with an ASP.NET Web Site that allows store personnel to enter orders directly. The system also allows store personnel to upload an Excel spreadsheet that specifies their order. Stores also have the option of using a Pocket PC device to place an order.
  • Services contained in the service interfaces are exposed to internal and external applications by using an ASP.NET page. This is the Web Service Access Point.

Service portfolios

"Service Portfolio" is a term used in many enterprises. Depending upon the organization, it can mean the high-level view of all services provided in line-of-business applications. In our application, however, I will bring the scope down a bit. Here, I will define a "service portfolio" as the collection of services used by our FoodMovers application. It is a set of interfaces that expose business logic and data as services. The communication involves service interfaces with other programs, which can be other services or client applications. A service portfolio can expose many service interfaces.

Warehouse Manager, Order Manager, Inventory Manager, and Update Manager are the service portfolios in our scenario.

Services can communicate with other services or applications in a synchronous or asynchronous model. They can exchange documents or method calls in a message-based, loosely coupled integration model. I will discuss integration and access models later in Section 4, Legacy and Business Partner Integration: Using Service-Oriented Architecture for Integration.

Business workflows and business components

The business workflows define the sequence of actions and decisions that act on data to accomplish a business service. For example, the BizTalk Orchestration in the Item Maintenance business process defines the interaction between the BizTalk processes and the mainframe.

Some business actions and services are defined as a business component, a single class or application shared by the rest of the system. For example, authenticating a user, whether an internal employee or a store or supplier user, is a central service that all processes employ. The business logic for determining whether a user has the appropriate credentials is exposed as a class in the business logic layer, where all of our service portfolio managers have access to the same code.

Similar central services are found in the definitions surrounding the inventory and order tables. Keeping everything in a set of classes makes maintenance and updating easier, increases security, and cuts down on bugs.
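A simplified sketch of such a shared component appears below. The UserAuthentication class, the UsersAccess stub it calls, and their method names are all hypothetical; the real security design is covered in later sections.

using System;

// Central authentication/authorization component in the business logic
// layer, shared by every service portfolio manager. All names here are
// hypothetical.
public class UserAuthentication
{
    private UsersAccess usersAccess = new UsersAccess();

    // Authentication: is this caller really the user it claims to be?
    public bool Authenticate(string userId, string password)
    {
        string storedHash = usersAccess.GetPasswordHash(userId);
        return storedHash != null && storedHash == Hash(password);
    }

    // Authorization: may the authenticated user act in this role?
    public bool IsInRole(string userId, string role)
    {
        return usersAccess.UserHasRole(userId, role);
    }

    private string Hash(string password)
    {
        // Placeholder only; production code would apply a real hash and
        // never store passwords in clear text.
        return password;
    }
}

// Minimal stub standing in for the real data access class.
public class UsersAccess
{
    public string GetPasswordHash(string userId) { return "secret"; }
    public bool   UserHasRole(string userId, string role) { return true; }
}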

The last item in the business logic layer is the utility process, Synchronize Item DB. This is a stand-alone console application that is invoked each night by the system scheduler as a cron-style job.

Data access

All of the applications, classes, and services in our system need data. They need access to be able to read from the database, insert new data, and delete or update the existing data. For reasons you will learn in Section 2, Templates, Policies, Database, and Service Interface Design, keeping data access separate from the business logic and other layers is important for scalability, security, and maintainability.

The data layer consists of three distinct sets of classes:

  • Data access classes: In order to access the SQL Server database, we need a common method that can be maintained easily. The data access classes provide this. There is a single data access class for each table in the database. These classes invoke stored procedures to select, insert, update, and delete records. (A sketch of one such class follows this list.)
  • Data structure classes: Once the data is accessed with the data access classes, it is placed in data structure classes. These classes usually correspond with the data tables, but sometimes they are defined as relations combining more than one table. These classes can be auto-generated by Visual Studio from XML Schemas.
  • Service manager: When we act as a client accessing an external Web service, the data is requested through the service manager. The service manager gets requests from the business logic layer and formulates a request message, which is sent to the external service. The service manager also takes care of fielding the response and reporting it to the calling logic.
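The sketch below shows what one such data access class might look like, as mentioned in the first bullet above. The stored procedure name, connection string, and table are assumptions made for illustration.

using System;
using System.Data;
using System.Data.SqlClient;

// Sketch of a data access class for a single table. GetItemsByCategory
// and the column layout are hypothetical.
public class ItemsAccess
{
    private string connString =
        "server=(local);database=FoodMovers;Integrated Security=SSPI";

    public DataTable GetItemsByCategory(string categoryId)
    {
        SqlConnection conn = new SqlConnection(connString);
        SqlCommand cmd = new SqlCommand("GetItemsByCategory", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add("@CategoryID", SqlDbType.VarChar, 10).Value = categoryId;

        // The adapter opens and closes the connection as needed.
        SqlDataAdapter adapter = new SqlDataAdapter(cmd);
        DataTable items = new DataTable("Items");
        adapter.Fill(items);
        return items;   // handed up to the business logic layer
    }
}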

Data sources

The data sources include all of the data we will be working with. The Microsoft .NET Framework has a data layer that consists of two main pieces: ADO.NET and XML. The philosophy is that data is data no matter where it is managed or accessed. So it doesn't matter if the data comes from a SQL Server engine in the next room or from a Web service halfway around the world.

Our internal data is managed with SQL Server, but we also could get data from other sources. Examples of this would be credit checks or electronic funds transfers. These external data sources are available to us through the service manager.

Security, operational management and communications policies

In order to complete the design of our system, we need to consider security, operations, and communications issues as a part of each component in our system. I see these as building blocks that extend the project components from the user-interface to data-access components. The building blocks are illustrated in Figure 16.

Aa302162.foodmovers1_16(en-us,MSDN.10).gif

Figure 16. Security, operational management and communication policies can be implemented as building blocks.

Security policy

Security considerations include authorization, authentication, secure communication, auditing and profile management strategies.

Authentication is securely identifying the entity that is trying to access the services. Authorization is giving permissions and rights to accomplish tasks. It provides an environment where users assume roles for accessing services and applications.

Ensuring security at the communication layer is another important measure. For example, using SSL (secure sockets layer) or VPN (virtual private network) for opening secure channels is one way to establish a secure communication environment. Requiring message signatures and employing encryption mechanisms can be part of security as well.

Every business needs to track user and business activity in order to anticipate and respond to customer needs. Recording that activity for daily business operations, and protecting the record from malicious attacks, unauthorized logins, and unauthorized changes, is the process of auditing.

Profile management can be used to monitor suspicious activity, enforce password change policies and manage profiles of who can access, read, or write data.

Operational management policy

Once a system is deployed, it must be monitored. Designing a system involves consideration of the exception management, application monitoring, business monitoring, metadata definition, system configuration, and service location strategies.

An exception management policy must consider all possible problems that are encountered in the daily operation of the system. An exception management policy is more than just trapping run-time errors. An exception management policy could be stated as "there is no such thing as a user error," or "errors only happen once before they are dealt with." All exception management will evolve from that policy.

Technically, this means that anything that could raise an exception should report the exception to some manager rather than just aborting the application and losing the data. This might involve retrying the process, exposing dialog boxes with error detail, employing a try/catch mechanism to throw the exception to a higher level, or using the Windows event manager.

Generally, an exception management policy should strive to save in-process data, but short of that, should at least rollback any database transactions that are currently in effect at the time of an exception. Another goal of an enterprise-wide exception management policy is to provide the programmer with enough information that the system can be fixed to avoid similar exceptions in the future. This can be done with trace logs.
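In code, that policy might look something like the sketch below: roll back the open transaction, record the detail in a trace log and the Windows event log, and rethrow so a higher layer can apply its own policy. The "FoodMovers" event source name is an assumption.

using System;
using System.Data.SqlClient;
using System.Diagnostics;

// Sketch of the exception management policy for a data update. The
// "FoodMovers" event source is an assumption; it must exist (or be
// created by an administrator) before it can be written to.
class ExceptionPolicyExample
{
    static void UpdateWithPolicy(SqlConnection openConnection)
    {
        SqlTransaction tx = openConnection.BeginTransaction();
        try
        {
            // ... commands enlisted in tx run here ...
            tx.Commit();
        }
        catch (Exception ex)
        {
            tx.Rollback();                                        // protect data integrity
            Trace.WriteLine("UpdateWithPolicy failed: " + ex);    // detail for the programmer

            // Report the problem instead of silently aborting.
            EventLog.WriteEntry("FoodMovers", ex.Message, EventLogEntryType.Error);

            throw;   // let a higher layer decide whether to retry or alert the user
        }
    }
}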

In daily operations, components should be monitored against scalability, performance, and turnaround expectations. Monitoring also includes checking whether the business processes are running within their expected parameters. In a data-rich environment such as FoodMovers, monitoring includes validating that external and internal services are sending messages that conform to the agreed schemas.

Metadata can be used in many different places. Metadata is data that describes the application's own data and behavior. For example, metadata can be used to provide localization of an application for different languages. Metadata can also be used to simplify error handling, making an application more flexible under changing run-time conditions.

All of these topics are discussed in Section 6, Going Live: Instrumentation, Testing, and Deployment.

Communication policy

The decision to use a tightly coupled communication model versus a loosely coupled communication model is a part of the communication design. The traditional tightly coupled communication model is fast, but not scalable. Loosely coupled message-based communication models are scalable but not as fast because they require metadata overhead. You'll learn about these modes with integration models in Section 4, Legacy and Business Partner Integration: Using Service-Oriented Architecture for Integration.

In the FoodMovers project, we will be using loosely coupled, message-based communication. This involves XML Web services, which use SOAP messages between the Managers and the presentation layer, and MSMQ messages between the Update Manager and BizTalk Server.

A message bus is used when establishing communication between the tiers.
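For example, the hand-off from the Update Manager to BizTalk Server could look like the sketch below, which drops a document on a private MSMQ queue and returns immediately. The queue path and message body are assumptions.

using System;
using System.Messaging;

// Loosely coupled hand-off: drop an item-maintenance document on a queue
// and let BizTalk Server pick it up on its own schedule. The queue path
// is an assumption.
class QueueSender
{
    static void Main()
    {
        string queuePath = @".\private$\foodmovers_itemmaintenance";

        if (!MessageQueue.Exists(queuePath))
            MessageQueue.Create(queuePath);

        using (MessageQueue queue = new MessageQueue(queuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new Type[] { typeof(string) });
            queue.Send("<ItemMaintenance>...</ItemMaintenance>", "Item update from Hearty");
        }
        // The sender does not wait for a response; confirmation arrives later.
    }
}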

Communication and data format passing through layers

It is important to design the communication between the layers and the format of the data and the protocol that is used in the communication.

Figure 17 shows the communication and data format between project layers and external or internal users.

Aa302162.foodmovers1_17(en-us,MSDN.10).gif

Figure 17. Data passing through the layers in different formats

Starting from the bottom, data in the Data Sources layer communicates with the Data Access Layer. The inter-layer communication depends upon the type of the data source. Internal data that is managed by SQL Server exposes its data using a DataSet object. If the data comes from external sources using XML Web services, the data is exposed as an XML payload of a SOAP document.

Once the data access objects have retrieved the data, the data structure classes are filled. These take the form of DataTable objects, which are passed to the Business Logic components. The typed DataTable classes are generated by Visual Studio from an XML Schema that represents the SQL database tables.

The managers in the Business Logic Layer process the DataTable objects and expose their data to the Presentation Layer as SOAP messages encoded as XML.

The business components in the business layer communicate with each other using a message-based communication model. The service interfaces in FoodMovers are exposed as XML Web services; they communicate with each other by sending SOAP messages in a loosely coupled manner. The service interfaces also communicate with BizTalk Server Orchestration using messages. In this case the underlying protocol is MSMQ, a message queue, which keeps this integration as loosely coupled as the rest of the system.

In the case of Web page user-interface, the data is presented in HTML format, optimized for human eyeballs.

This model makes it easy to understand how data moves from one layer to the next. In general, a request is made by moving down through the layers. Responses are sent back up through the layers.

A typical example in the FoodMovers scenario is getting a list of all product categories. An application (WinUI or Web page) asks for a list of categories. The ASP.NET server in the presentation layer responds to the request by issuing a service request to the Order Manager as a SOAP Web service. Classes in the Business Logic layer receive this request as an XML document, invoke whatever business logic is necessary, and pass the request along to the internal data sources layer. This usually takes the form of a SQL stored procedure, for efficiency and scalability.

Now the data starts its journey back up through the stack.

The Data Access Layer retrieves the raw data generated by the stored procedure as a DataSet object and loads it into a DataTable object. This object is passed up to the Business Logic Layer, where it is processed according to the business requirements.

The Presentation Layer has had a SOAP connection open waiting for a response. The response comes back as a SOAP package that contains the data. The .NET Framework has tools for serializing the DataTable into an XML document and then wrapping it with a SOAP response package, which is sent to the Presentation Layer.

The Presentation Layer deserializes the package and processes it according to what the user needs.

Finally, the user sees the data.
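To illustrate the top of that journey, here is a sketch of a presentation-layer page calling the Order Manager through a Web service proxy and binding the result to a grid. OrderManagerProxy and GetCategories are assumed names; in the real project the proxy class would be generated by Visual Studio's Add Web Reference feature, so the stub below merely stands in for it.

using System;
using System.Data;

// Presentation-layer view of the round trip: call the service through a
// proxy and bind the returned DataTable to a grid. All names are assumed.
public class CategoryPage : System.Web.UI.Page
{
    protected System.Web.UI.WebControls.DataGrid CategoryGrid;

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);

        OrderManagerProxy orderManager = new OrderManagerProxy();
        DataSet categories = orderManager.GetCategories();   // SOAP request/response

        CategoryGrid.DataSource = categories.Tables[0];      // deserialized DataTable
        CategoryGrid.DataBind();
    }
}

// Stub standing in for the Visual Studio-generated proxy class.
public class OrderManagerProxy
{
    public DataSet GetCategories()
    {
        DataSet ds = new DataSet();
        ds.Tables.Add(new DataTable("Categories"));
        return ds;
    }
}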

Conclusion

In this section, you were introduced to the various architects in an enterprise. First was the enterprise architect, an IT professional tasked with the design, development, and deployment of modern distributed applications. We also met the project architect, who is tasked with creating a system that solves a particular set of requirements for the enterprise.

You also were introduced to an organization that has problems interconnecting its internal systems as well as interfacing with external business partners. Next, you learned about each of the subsystems and the business workflows associated with each. Some important issues we've seen throughout are security in the applications, reliable messaging, and transaction handling.

In the following sections, I will cover each of these systems and write code to make them all work. We will also build this entire system one piece at a time to show the power of Visual Studio .NET. The most important part of the entire process will be the tasks that the project architect goes through to design and track the system as it is developed and deployed. The following sections will cover:

Section 2, Templates, Policies, Database, and Service Interface Design

The tools included in the Microsoft .NET Framework and Visual Studio Enterprise Architect Edition provide rich support for data of any type. Data in the .NET Framework is handled with a single data layer, which is part of the common language runtime namespace. This data layer treats all data the same, whether that data comes in the form of SQL Server data tables, XML documents, or external Web services. To the .NET Framework, data is data.

Enterprise Templates and Policy files can also be developed, which give our programmers information for building their applications and assuring that all developers work with the same organizational development policies.

In this section, I describe the process of creating database definitions and enterprise template files. I also talk about our data access strategy and show how we can implement the data layers I described in this first section. Finally, I talk about the architecture of the deployed system, including firewalls, farms, and clustered servers.

Section 3, Developing Service-Oriented Architectures

Getting legacy applications to work together is one of the most time-consuming and expensive areas for the modern IT department. In the past 40 years, many different systems have been created that do not play nicely with each other. Getting these systems to talk is critical if an organization is to expose its internal business processes to internal departments as well as trusted external partners.

In this section, I will discuss the concept of service-oriented architectures (SOA), which provides a scalable, reliable, secure interface for external as well as internal systems.

Then we will start to develop each of the four internal service interfaces, the Inventory Manager, Warehouse Manager, Order Manager, and Update Manager. Each of these applications can be accessed by internal systems or through an external interface to our business partners.

Section 4, Legacy and Business Partner Integration: Using Service-Oriented Architecture for Integration

Old systems and new systems need to live together and, most importantly, communicate important business data with one another. However, programs do not always support and understand each other's data formats, communications protocols, and languages. In addition, some programs provide humans with a view of the system; these user interfaces are designed to be consumed by humans, not by other programs. Furthermore, programs that must communicate often live in different organizations. How are we to integrate all of these systems?

In this section, I will discuss routes to accessing information services and the methods used to access them. Then we will develop the EDI and XML interfaces with our suppliers Good Old Soup Company and Hearty Soup Company, and the order interface for the stores.

Section 5, Extensions: Building Blocks for Extra Functionality

By now, we have created a system using the tools in Visual Studio .NET Enterprise Architect Edition. But we have just a basic system. What about security? What about attachments? What about updates? What about administration? We could develop these ourselves, but it would be nice if there was an alternative to custom development of these pretty standard pieces.

What we need is a concept of interchangeable parts for software development. This has been tried again and again with varying success. The C programming language came with a standard library of functions, such as printf and sscanf, that most C programmers gladly used rather than writing their own. Later, the Microsoft Foundation Classes (MFC) for C++ development were made available to programmers working in an object-oriented Windows environment. Who wants to write a dialog box function if there is one available that works and does mostly what is needed?

In this section, I talk about the Web-service version of interchangeable parts. They take the form of standard extensions that are becoming available in the Web services universe. These extensions are part of Microsoft's Web Services Enhancements for Microsoft .NET (WSE). WSE extensions take the form of building blocks that can be integrated into a Web service quickly and easily. We will add attachments and security to our system to show how the building-block approach works.

Section 6, Going Live: Instrumentation, Testing, and Deployment

Once the architecture is designed and the code framework is created using Visual Studio, it is time to describe our plan for deployment and administration. In addition, there are several areas of implementation that need to be addressed before a robust, reliable, and secure architecture is deployed.

First, we need to develop a plan for "instrumentation." By placing "sensors" in our application, we can use instruments to provide a dashboard of our deployed system. Then we need to exercise the system in two areas, test and staging, before finally deploying the system in a production environment.

In this section, I detail a plan for exception and event management, and introduce the concept of "exception mining," which provides a method for wading through the information stream coming from the application to find events that need attention.

I hope you have enjoyed this section. I look forward to the next sections in this project, as we make FoodMovers come to life as a twenty-first century company, using the most modern IT tools on the planet!

About the author

Brian Travis is Chief Technical Officer and Founder of Architag International Corporation, a consulting and training company based in Englewood, Colorado. Brian is an expert in real-world XML implementations. Since founding Architag in 1993, he has created intelligent content management systems and e-business solutions for Architag clients around the world. Brian is also a noted instructor and popular lecturer in XML and related standards. In his role as principal instructor for Architag University, he has been teaching clients about XML in the U.S., Europe, Africa, and Asia.

Brian has lectured at seminars around the world and has written about XML, Web services, and related technologies. His most recent book, Web Services Implementation Guide, is a guide for IT architects and developers who need to integrate internal systems and external business partners. The book provides the basis for understanding the goals of Web services for the enterprise architect, project architect, deployment architect, developer, and manager. It outlines the steps an organization must take in order to align itself with the new world of Web services.