Section 2: Templates, Policies, Database, and Service Interface Design


Brian Travis
Architag International Corporation

November 2003

Applies to:
    Microsoft® Visual Studio® .NET Enterprise Architect Edition

Summary: This is the second section in a project dedicated to enterprise-scale design and development of a distributed application using Microsoft Visual Studio .NET Enterprise Architect Edition. In this section, you learn about the process of creating templates, policies, databases, and service interfaces. (36 printed pages)

To see an overview of the entire project, read FoodMovers: Building Distributed Applications using Microsoft Visual Studio .NET.

Contents

The Project Team
Application Architecture
Database Architecture and Design
Designing Components
Designing a Service Interface
Physical Deployment and Operational Requirements
Conclusion

In Section 1 of this project, you were introduced to the various architects in a modern enterprise. The enterprise architect is responsible for setting a vision for enterprise-wide IT solutions. You also met the project architect, the person who is responsible for designing and deploying enterprise-scale applications. The project architect understands the nature of the organization's business, and has the tools at hand, both personal (asking questions, listening, organizing, describing) and technical (Microsoft® Visio®, Microsoft Word, Microsoft Visual Studio®).

Section 1 discussed the design process, where the project architect determines the requirements for a given system, then documents them for review by all of the interested parties. Visio is the preferred tool for this set of tasks.

Once the project is approved, the project architect creates a template in Visual Studio .NET Enterprise Architect Edition, which provides a skeleton for the project. As part of the template development process, the project architect also creates a policy file, which codifies the development policies for the project. Finally, the project architect defines the data structures and creates the SQL database for the development process to begin.

In this section, you will learn about the process of creating templates, policies, databases, and service interfaces.

The Project Team

As you recall from Section 1, the FoodMovers Distribution Company sits between suppliers of grocery items and the small- to medium-size stores that use their products. FoodMovers is a typical distributor, or "middleman," providing a service to both sides.

Our project contains seven processes that utilize four "manager" applications. Each of these manager applications will be created as a Web service, which can be accessed internally by legacy or new enterprise systems, or externally, by our suppliers and stores.

In order to get started, the project architect must structure the project in a way that individual developers can create code and test it. In our case, the development team consists of three programmers and the project architect.

The overall system, with areas of responsibility for each team, is shown in Figure 1.

Figure 1. FoodMovers processes showing areas of responsibility for each team

  • Project Architect. The project architect, Pete, is the technical manager for the overall project. As a manager, he is responsible for interfacing between the programmers and the rest of the organization. Pete is not a developer, but, as a former database administrator (DBA), he understands database issues. Pete will be responsible for leading the team in developing everything in the database, including creating the tables and writing the stored procedures that the rest of the team will use to access the data.
  • Warehouse Team. The warehouse team has developed several applications to manage the warehouse. Their task in this project is to build the Warehouse Interface project, which includes the shipping manifest and the Pocket PC interface for receiving shipments.
  • Store Team. The store team probably has the most diverse set of tasks. They are responsible for the OrderManager service interface, which must handle orders from stores as well as suppliers. They must also build human interfaces for the Web site and rich clients for internal use, plus the external Web service interface for supplier orders.
  • Mainframe Integration Group. The mainframe integration group was formed when it became apparent that systems were moving off of the mainframe into smaller, cheaper systems. The need for mainframe power did not go away, but the need to expose that power to other IT solutions became critical. For this reason, FoodMovers has been using Microsoft BizTalk® Server since it was released, and has used the product to integrate the mainframe with internal systems. The mainframe integration group must work closely with the project architect in order to assure a smooth development process.

Every member of each team will work on their own part of the project, testing each individual component. When the time comes, they will do their part in the overall system testing and debugging. Finally, they will be responsible for the deployment phase of their respective subsystems.

Application Architecture

Section 1 discussed how the project architect group designed the project by interviewing interested parties in the organization and its trading partners. Pete created the overall application architecture and outlined the business procedures using Visio.

Application architecture includes the structure of the project, the database design, and the policies and strategies for the implementation stage, such as data access, coding strategies, and the overall look and feel of the project's interfaces. This section will discuss application architecture in three parts:

  1. Structuring of the project, which includes creating the enterprise templates and policies and deploying them to each developer's environment
  2. Database architecture design, which includes designing and creating tables, fields, and views
  3. Component design, which includes the design of each service and interface component

Structuring the Project

So far, the project architect, Pete, has prepared the business workflows of FoodMovers for the business processes that will be implemented. Pete used Visio to create the business workflows and show them to people in the organization for comments. The business workflows provide guidance for the developers as they implement each part of the system.

As implementation progresses, developers will write code that turns the business workflow into executing programs. Pete must provide guidance to the developers to assure that their code is maintainable and scalable. That guidance comes in the form of "enterprise templates" and "policies."

Pete first scribbled out the project structure on a piece of paper. The structure of the project is the bridge between his analysis of the enterprise requirements for the system and the implementation stages. However, putting the structure of the project on a piece of paper will not provide enough help and discipline for the developers.

Pete will prepare a special project, called a Visual Studio Enterprise Template. The enterprise template describes the overall structure of the system in skeleton form. Associated with the enterprise template is an XML file that describes the corporate policies concerning development. The policy file, working with Visual Studio, will enforce these policies.

If we continue our building analogy, then preparing an enterprise template and an associated policy file with it is like framing the house with lumber to form the skeleton. The next step is the implementation, where developers will finish the walls, hang sheetrock, install windows and doors, and paint the whole thing.

In contrast to having developers read specifications on paper, the advantage of using enterprise templates is that they are created and exist in the same environment where development will happen. This is an important factor, because the structure work done by the project architect carries forward directly into the implementation phase.

Pete's notes for the overall structure of the FoodMovers project and policy considerations are shown in Figure 2, which shows what the FoodMovers application architecture looks like. The extension for an enterprise template is "etp." The C# project extension is "csproj," and the extension for a C# class is "cs."

Figure 2. Pete's template and policy design worksheet

The FoodMovers enterprise template is made up of C# projects that will form a set of C# class libraries.

  • BusinessLogic consists of classes that contain the business rules and checks for FoodMovers business services and processes:

    • UserLogic
    • OrderLogic
    • InventoryLogic

    In addition, there are proxy classes created from the service interfaces that expose Web services to internal and external processes. The proxy information is described below.

  • DataAccess has classes that implement data access strategy and architecture. This includes one class for each table in our database:

    • Categories
    • Inventory
    • Items
    • Stores
    • StoreOrders
    • Suppliers
    • SupplierOrders
    • Users

    These classes implement the code to connect to and query their respective tables, as well as perform inserts, updates, and deletes on rows in those tables. Having a data access class library makes a project much more robust and easier to maintain than incorporating data access and other functionality in a single layer. The DataAccess classes create DataTable objects that are passed between components.

  • Common is a project that defines classes, structures, configuration variables, and constants used by all projects. The bulk of this project is the data classes. Visual Studio has the ability to define data as XML Schema (XSD) documents. We will define all of our database tables in terms of XSD and let Visual Studio auto-generate the data structure classes.

    There are eight classes that correspond to the ten tables in the database. These classes define the data structures that will be used by the DataAccess project classes:

    • CategoryData.xsd
    • InventoryData.xsd
    • ItemData.xsd
    • StoreData.xsd
    • StoreOrderData.xsd (includes StoreOrders and StoreOrderItems tables)
    • SupplierData.xsd
    • SupplierOrderData.xsd (includes SupplierOrders and SupplierOrderItems tables)
    • UserData.xsd

    There is also a class, Configuration.cs, which contains other public structures and constants, such as the connection string for the database, login policies, and global variables.

    Think of each database as a water faucet. Querying our database is like turning on the faucet. When that happens, we need something to catch the water so that we can use it. The classes in the DataAccess project provide the functionality to turn on the faucet and regulate its flow. The classes in Common/Data are like the cups used to capture the water. Each named faucet in the DataAccess project has a named cup in the Common/Data class library. We access properties of the cups to get the field data.

In our architecture, you should consider business processes to be services. It doesn't matter whether they are exposed to internal legacy systems, internal Microsoft® Windows® applications or console applications, or externally to outside partners. They are all defined as services.

For interoperability and flexible implementation, we will implement these business processes as Web services, which encapsulate all business logic and data access classes. Any kind of application can then access these Web services in a consistent manner, whatever the user experience or connection method.
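
To make this concrete, here is a minimal sketch of what one of these services might look like as an ASP.NET (.asmx) Web service. The namespace URI and the QueryInventory signature are illustrative only; the actual FoodMovers interfaces are developed in Section 3.

using System.Web.Services;

[WebService(Namespace = "http://foodmovers.com/services/")]
public class InventoryManager : WebService
{
   [WebMethod(Description = "Returns the quantity on hand for a UPC.")]
   public int QueryInventory(string UPC)
   {
      // The real service would call the BusinessLogic and DataAccess
      // classes described in this section.
      return 0;
   }
}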

An enterprise template can inherit other enterprise templates. Visual Studio .NET Enterprise Architect Edition ships with several enterprise templates that we are using:

  • WebServiceProjects is the template that will be used to build the four managers:

    • InventoryManager
    • OrderManager
    • UpdateManager
    • WarehouseManager

    These are the C# Web service projects that implement the QueryInventory, SupplierOrder, ItemMaintenance, and ReceiveOrders business processes, respectively. The business services are collected in Web service projects. The user interfaces, such as mobile applications, Web applications, and Windows applications, are implemented to access these Web services.

    These projects create Web service applications that are accessible through Web Services Description Language (WSDL) files. At design time, an application that is calling these services adds a "Web Reference," which retrieves the WSDL and exposes the interface information in Visual Studio.

    The Microsoft .NET Framework has a tool, WSDL.EXE, that reads a WSDL file and creates a C# class that acts as a proxy to the Web service. I prefer to interface with the service this way, instead of using a Web Reference, because it provides a traditional class, rather than a special kind of reference. Any programmer who has experience accessing classes can use these without the need to learn a new interface.

    The proxy classes are generated with a batch program that calls WSDL.EXE once for each of the four services; a sketch of such a batch file appears after this list. The resulting proxy class is placed in the BusinessLogic/Proxy directory.

  • MobileApplications is a template that has the StoresMobile and WarehouseMobile projects. These are smart device application projects that target the .NET Compact Framework. The StoresMobile project provides the interface and accesses the OrderManager Web service. The WarehouseMobile project provides the mobile interface that accesses the WarehouseManager Web service.

  • UtilityProjects is the template for utility applications, such as console applications, that implement supporting functionality for the other classes. The only process so far is the application that implements the SyncItemDB process. This is a console application project that will be called by a scheduler to run unattended at a particular time every night.

  • WebUIProjects is the template for Web applications that serve small stores. The Stores C# project implements the StoreOrder process. It can be viewed using a Web browser and implements an ASP.NET user interface (WebUI) that accesses the OrderManager Web service.

  • WinUIProjects is the template used to create WinUI clients that access Web services. The OrderInterface project is the user interface for the OrderManager Web service. ShippingManifest is a Windows application that implements the ShipToStores process and prints the shipping manifest. The WarehouseInterface is a Windows application that implements the internal ReceiveOrders process.
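
As mentioned under WebServiceProjects above, the proxy classes are regenerated with a batch program that calls WSDL.EXE once per service. A sketch of such a batch file is shown below; the server URL and output paths are illustrative, and the /language, /namespace, and /out switches control the generated proxy class.

rem MakeProxies.bat (hypothetical name): one WSDL.EXE call per service.
wsdl /language:CS /namespace:FoodMovers.BusinessLogic.Proxy ^
   /out:BusinessLogic\Proxy\OrderManager.cs ^
   http://localhost/OrderManager/OrderManager.asmx?WSDL
rem ...repeat for InventoryManager, UpdateManager, and WarehouseManager.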

The external partners, such as Hearty Soup Co., have Web services that access OrderManager and UpdateManager Web services. But because they are XML Web services exposed using open international standards, it doesn't matter if Hearty Soup is using the IBM Web Services Toolkit, Sun's J2EE API, or any other applications that support Web services. As long as they adhere to the standards we have adopted (XML, SOAP, and WSDL), then they can connect to our services.

Web services are built around the idea that systems talk to each other by accessing "services," rather than accessing applications or data directly. The beauty of this approach is that it is flexible enough that a user interface can be developed just as easily whether the consumer is a local WinUI client or an external partner. In fact, all WinUI and WebUI interfaces in the FoodMovers system access these Web services to deliver a user experience. You'll learn about this in depth in Section 3, Developing Service-Oriented Architectures.

Deploying the enterprise template

Pete, our project architect, creates a Visual Studio project using the design he sketched out. The Solution Explorer now looks like the one shown in Figure 3.

Figure 3. FoodMovers Enterprise Template application architecture

At this point, this view only specifies the FoodMovers overall application structure. In order to add functionality, Pete must add the project files he has determined are required. This is shown in Figure 4.

Figure 4. FoodMovers Enterprise Template with projects and templates added

Once this is done, Pete has just an application structure—a layout with empty programs. He is now ready to convert this into a template and publish it to the developers' implementation environments. There are two ways of accomplishing this, both of which involve some precise steps:

  1. The template, along with the policy file, can be copied to each developer's Program Files\Microsoft Visual Studio.NET\EnterpriseTemplates directory. These steps are explained in Walkthrough: Creating a New Enterprise Template.
  2. The files can be placed on a central file server and shared from there. You still need to place the policy file and add a registry key on each developer's machine. The steps for this approach are explained in Enterprise Templates: Building an Application Construction Kit in Visual Studio .NET 2003.

In most cases, the project architect will create a setup file that implements these steps automatically for each developer.

After these steps, the FoodMovers enterprise template becomes a built-in template when the developer wants to add a new project. This is shown in the "Add New Project" dialog box in Figure 5.

Figure 5. The FoodMovers enterprise template is available as a template in the Add New Project dialog box.

The developers can now create a project with this template, just as they can create any other Visual Studio project.

The discipline language

But wait just a second! If Pete deployed this template to his developers right now, they would be able to add any type of project or class to their implementation. They could add, for example, an ASP.NET program that uses Microsoft Visual Basic®, or a console application that uses C++ or J#. They could include code written in VBScript imported from Visual Studio 6.0.

This would result in a project that reflects each individual programmer's desires and skills, but that is not necessarily the best for the organization.

In order to create a consistent, maintainable system, and to narrow the decision-making process for his developers, Pete creates a "policy file." A policy file is an XML document that enforces the enterprise's policy for creating applications.

Like enterprise templates, Visual Studio.NET Enterprise Architect Edition includes several policy files. Pete can use these as working examples to create his own.

Policy files are written using an XML vocabulary called TDL, or "Template Description Language." Since this policy file is the place where we instill a level of discipline on our programmers, I like to think of TDL as "The Discipline Language."

TDL is a raw XML document that you must edit using an XML editor such as XRay or the one built into Visual Studio. As of this writing, there are no dedicated tools for creating and maintaining this file, but the complete documentation can be found in the Template Description Language Reference.

To illustrate a small example of what TDL can do, consider Pete's desire to create a policy for the WebServiceProjects template. He wants to allow developers to add only C# Web service projects under this template.

The snippet of the TDL file (showing policy restrictions) that concerns the WebServiceProjects template is shown below.

<ELEMENT>
   <ID>etpWebServiceProjects</ID>
   <IDENTIFIERS>
      <IDENTIFIER>
         <TYPE>PROJECT</TYPE>
         <IDENTIFIERDATA>
            <NAME>GLOBAL:TDLELEMENTTYPE</NAME>
            <VALUE>etpWebServiceProjects</VALUE>
         </IDENTIFIERDATA>
      </IDENTIFIER>
   </IDENTIFIERS>
   <PROTOTYPES>
      <PROTOTYPE>[EF]\Projects\FoodMovers\WebServiceProjects\WebServiceProjects.etp</PROTOTYPE>
   </PROTOTYPES>
   <ELEMENTSET>
      <DEFAULTACTION>EXCLUDE</DEFAULTACTION>
      <ORDER>INCLUDEEXCLUDE</ORDER>
      <INCLUDE>projInventoryManager</INCLUDE>
      <INCLUDE>projOrderManager</INCLUDE>
      <INCLUDE>projUpdateManager</INCLUDE>
      <INCLUDE>projWarehouseManager</INCLUDE>
      <INCLUDE>projCSharpWebService</INCLUDE>
      <INCLUDE>projCSharpProject</INCLUDE>
   </ELEMENTSET>
</ELEMENT>

The ID element specifies the identifier of this node of the template structure. Other policy rules will refer to this template as etpWebServiceProjects. The PROTOTYPE element provides the relative location of this ELEMENT in the directory structure; [EF] refers to the EnterpriseFrameworks directory.

The TYPE element contains a unique value that can be used as a reference key, allowing the IDENTIFIER to be defined once and then referred to by this value from elsewhere.

The IDENTIFIERDATA element specifies a name/value pair that refers to the specific project identified as an ELEMENT elsewhere in the policy file. This identifier is available globally in the policy file.

DEFAULTACTION and ORDER specify the criteria necessary to restrict users from adding project types as child nodes of WebServiceProjects. If the DEFAULTACTION element is set to "INCLUDE," then this node can include any elements and projects that are defined in the template. However, Pete has opted to set the value to "EXCLUDE," so that only the project types defined in the ELEMENTSET will be allowed in the node.

The WebServiceProjects template in Pete's enterprise template has an InventoryManager project that is identified in the policy file with the id projInventoryManager. Other managers are named similarly. All of these projects can be included in the WebServiceProjects template. These are templates that we created.

Notice, also, that projCSharpWebService and projCSharpProject are included in the ELEMENTSET element. These are projects that are defined in most enterprise templates, and are also defined in the policy file, as shown below.

...
<ELEMENT>
   <ID>projCSharpProject</ID>
   <IDENTIFIERS>
      <IDENTIFIER>
         <TYPE>PROJECT</TYPE>
         <IDENTIFIERDATA>
            <NAME>FileExtension</NAME>
            <VALUE>.csproj</VALUE>
         </IDENTIFIERDATA>
      </IDENTIFIER>
   </IDENTIFIERS>
</ELEMENT>
...
<ELEMENT>
   <ID>projCSharpWebService</ID>
   <PROTOTYPES>
      <PROTOTYPE>[VC#]\CSharpProjects\CSharpWebService.vsz</PROTOTYPE>
   </PROTOTYPES>
</ELEMENT>

Notice that projCSharpWebService contains a PROTOTYPE of [VC#]\CSharpProjects\CSharpWebService.vsz. This is a Visual Studio wizard file that starts a wizard. To give you an idea of what the wizard file looks like, its contents are listed below.

VSWIZARD 7.0
Wizard=VsWizard.VsWizardEngine.7.1
Param="WIZARD_NAME = CSharpWebServiceWiz"
Param="WIZARD_UI = FALSE"
Param="PROJECT_TYPE = CSPROJ"

When a C# Web service project is selected, this wizard is invoked, helping the programmer select an appropriate Web Reference and name the project.

The TDL file needs to reside in the programmer's policy directory, which is inside the Visual Studio.NET program file structure.

Let's look at the results of this policy snippet in practice. Say a member of the Warehouse Team is working on the WebServiceProjects template, implementing the projects. When she adds a new project under the WebServiceProjects template, the policy file allows her to add only Web service projects. The available projects that she can add are shown in Figure 6. The policy kicks in, and as a result, she can add only ASP.NET C# Web services.

Figure 6. Available projects to add for WebServiceProjects

Summary

After analysis and paper design of the system, Pete will design the application architecture as a normal Visual Studio.NET Enterprise Architect Edition project. Then he will create a policy file that restricts developers by encoding corporate policies in a machine-readable file.

After he has the template and policy files, Pete must install everything on each developer's machine so they can instantiate the template to create their system.

Database Architecture and Design

Besides creating the enterprise template and policy files, the project architect is usually also responsible for creating the database architecture and design. The design he came up with is shown in Figure 7.

Figure 7. Database table design for FoodMovers system

Pete uses Visio to design the database structures. The first thing they tell you in Visio school: "Visio is not a drawing tool." While Visio is great for drawing, its main purpose is modeling. Microsoft bought Visio in 1999 because the product had proven to be a great tool for creating business process flows and database designs. Since then, Microsoft has focused on updating the product's look and feel so it is familiar to users of Microsoft's products. Microsoft has also integrated the Visio code into other products, including BizTalk Server.

While Pete could have used the Enterprise Manager in Microsoft® SQL Server™ to create the database tables and views, he used Visio instead because of its ability to document the data design as it creates the data structures. Visio can also generate the SQL database tables by connecting to a SQL Server. As I have mentioned before, Visio tools are for logical data modeling, and SQL Server tools are for physical data modeling. However, Visio converts the logical data model into a physical model that makes a good starting point, which we can then refine using the SQL Server tools.

When Pete designed his database and tables, he defined all of the columns, primary keys, indexes, views, and the relationships between the tables by defining the foreign keys. Visio for project architects provides tools to define all of these, and makes database design embarrassingly easy. Of course, Pete doesn't tell anyone how easy it is.

Figure 8 shows the database design in Visio. The "Shapes" window helps to draw the database design, and the "Code" window allows Pete to include SQL scripts, such as stored procedures and functions, in the design. The "Database Properties" window allows Pete to add the column and data type, trigger, key, and index information related to the tables.

Figure 8. The database design in the Visio Architect Tool

The arrows show relationships, starting from the child table and pointing to the parent table. PK indicates a primary key and FK indicates foreign keys.

Pete then uses the Database...Generate function to create the database and all tables, fields, and views using an ODBC connection. Using ODBC means Visio supports not only SQL Server but also Microsoft® Access and any other database that supports an ODBC connection.

The Visio Database...Update and Database...Generate functions create a SQL script with commands that translate the database design, then run it against the database server.

Using Visio to design and generate databases allows the database architecture and design to be kept in one environment, while the actual data can be in different and distributed sources.

Figure 9 shows the generated database tables and views in Visual Studio Server Explorer.

Figure 9. SQL Server Query Analyzer shows the tables and views created.

The project architect can optionally enter some sample data for the implementation and tests that will be performed later on.

Stored procedures

The project architect can create stored procedures, functions, or any SQL script using the "Code" menu option in the database tab. Any SQL script can be written in the body of the global code box. However, Visual Studio.NET Enterprise Architect Edition provides a more manageable tool for creating stored procedures and functions. Besides, I believe creating stored procedures and functions is part of implementation.

So we will make creating stored procedures the developers' work, writing them as we implement each business process.

Figure 10 shows the Visual Studio.NET Enterprise Architect Edition's support for SQL Server.

Figure 10. Visual Studio.NET Enterprise Architect Edition support for SQL Server for developers

When the developer chooses Server Explorer from the View menu, the SQL Server can be navigated, showing the database tables, fields, indexes, stored procedures, and anything else that is managed by SQL Server.

You'll learn about the stored procedures in later sections of this project.

Data access strategy

The FoodMovers project is a distributed application, and there are different data resources involved. Some of the data is kept in mainframe files maintained by COBOL applications, while the new applications use SQL Server. One of the questions we must answer is: which data access strategy will FoodMovers implement?

There are many things to consider in finding the right data strategy. Some of these are:

  • Data access methods: Do we access database tables directly, or through functions or stored procedures? Keeping business logic in modules makes the system easier to update, change, and manage.
  • Minimum data resources: The DataAccess project classes access data through stored procedures and views. Each of these classes creates its objects and disposes of them after use.
  • Data interoperability: This is important when data resides in different formats, such as SQL Server, Oracle, flat files, or DB2, and must be accessed from various applications and projects.
  • Synchronous and asynchronous processes: The data that resides in the mainframe usually cannot be accessed synchronously. As a result, FoodMovers keeps a read-only copy of the database in the SQL Server for synchronous access that can be presented in Web pages, Web services, and any user interfaces in a synchronous manner.
  • Stateless connections: Using stateless connections ensures scalability of the project. The distributed application model usually requires a stateless protocol. For example, in the FoodMovers application structure, the DataAccess project classes access stored procedures, tables, and views, which themselves are called by a higher-level class or an application. Data is stored in DataTable objects, which allows SQL Server to disconnect immediately.
  • Transaction Support: Most of our business data requires transaction support. Objects that perform write operations are the most likely candidates for transactions. Transactions can be implemented in the stored procedures and functions. All programs must deal with transaction messages coming back from the stored procedures.
  • Securing the database connection: In our architecture, database access is performed by a handful of classes. Sensitive information, such as usernames and passwords, can be embedded in classes, but a better way is to keep all such sensitive information in a single place, encrypted. This is the purpose of our Configuration.cs class. The best security in a database application is accomplished by using stored procedures and functions. Having classes call a stored procedure or a function is much more secure than having a user issue a SELECT or INSERT statement from the presentation layer.

Mainframe integration

FoodMovers applications are built in a Microsoft environment. However, they require important business data from the mainframe. How is this resolved? The connection from the Microsoft environment to mainframe flat files is maintained by BizTalk Server. You'll see how this integration is done in Section 4, Legacy and Business Partner Integration: Using Service-Oriented Architecture for Integration.

For now, it is important to note that our data strategy isn't affected by how the mainframe stores its data because mainframe integration will be handled by BizTalk Server.

The FoodMovers data strategy has only one relational database, SQL Server, which makes data integration easier. If we had to deal with Oracle, Sybase, or other ODBC-enabled databases, then our data strategy would need to embrace those relational databases, too.

Distributed application considerations

An economist would call a central data store a "scarce resource." That is, there is less of it available than demand usually requires. Another example of a scarce resource is an astronaut. When NASA, the U.S. Space Agency, sends humans into the far beyond, there are many people on the ground who would have that astronaut pay attention to their requests. However, if the poor astronauts were to be bombarded with requests coming from many different voices, they would be overwhelmed.

For this reason, there is a single person (CapCom, for Capsule Communicator) at Mission Control who is tasked with communicating with the astronauts. Every request from the ground is funneled through that person, who takes care to make sure the request is worthy of the astronauts' attention, while keeping in mind the stresses they are under.

In our distributed application architecture, many applications need to get the attention of the common data stores. Each data store (usually a table in SQL Server) is accessed with a single class that is optimized for getting data in and out of that store. This allows us to create a global philosophy in accessing our data, and makes maintenance easier. In order to minimize contention and maximize throughput, it is important to create and dispose of data objects carefully. Get in, get the data, and get out.

Our system uses classes to access the data, which, like Mission Control, provides a single point of contact for the data sources. This provides several advantages:

  • Code can be optimized for efficiency or speed.
  • Requests for data can be prioritized in case of a conflict.
  • Future code management is minimized because implementation is modularized.

The data access classes are shown in Figure 11.

Figure 11. The FoodMovers project handles data access with classes that modularize the project for easier management and a more scalable system

The WebServices projects (the manager applications) that encapsulate and describe FoodMovers' business services access the BusinessLogic classes. The BusinessLogic classes access the DataAccess classes in order to connect to the SQL Server database. This is distributed data access: a manageable and scalable data access strategy.

What kind of connection?

At some point, Pete needs to consider what kind of data connection is best for the FoodMovers system. If our system had databases from different vendors, one solution would be to use OLE DB to access them. OLE DB has drivers for many different vendors' databases; there are data providers for SQL Server, Oracle, DB2, Microsoft Access, and many others. The OLE DB model is shown in Figure 12.

Figure 12. OLE DB data provider stack

An OLE DB data source is accessed through an OLE DB Provider written for that particular brand of database. The Provider interfaces with an OLE DB Service Component, which provides a common interface to the database-specific driver. The .NET Framework has an interface to this Service Component to allow programs to access the database.

Needless to say, there is a lot of translation going on in this model. The nice thing about the OLE DB stack is that you can integrate many different brands of database with a single interface to your program. The downside of this approach is that it requires many layers, and performance suffers as a result.

Since we have decided to use only SQL Server, our decision becomes a bit easier. The .NET Framework defines a special SQL Server .NET Data Provider, which makes a direct connection to SQL Server. This is illustrated in Figure 13.

Figure 13. The SQL Server .NET Data Provider is faster than the OLE DB .NET Data Provider, which has many layers of access.

The SQL Server .NET Data Provider is used for connecting to the database, executing commands, and retrieving results. The results can either be read directly (with the SqlDataReader class) or placed in an ADO.NET DataSet object (with the SqlDataAdapter class) for read/write access.
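
Below is a minimal sketch of the two access styles. The table and column names are illustrative, and Configuration.strConn is the connection string constant shown later in this section.

// Forward-only, connected read with SqlDataReader:
using (SqlConnection conn = new SqlConnection(Configuration.strConn))
{
   SqlCommand cmd = new SqlCommand(
      "SELECT UPC, Quantity FROM Inventory", conn);
   conn.Open();
   SqlDataReader reader = cmd.ExecuteReader();
   while (reader.Read())
      Console.WriteLine("{0}: {1}", reader["UPC"], reader["Quantity"]);
}

// Disconnected read/write with SqlDataAdapter and DataSet:
SqlDataAdapter adapter = new SqlDataAdapter(
   "SELECT UPC, Quantity FROM Inventory", Configuration.strConn);
DataSet ds = new DataSet();
adapter.Fill(ds, "Inventory");   // opens and closes the connection for us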

Pete chose the ADO.NET DataSet object for the following reasons:

  • ADO.NET has disconnected access to data. ADO.NET realizes that the database is a scarce resource, so it makes a quick connection to the database, gets the results and places them in an in-memory structure, then releases the connection. This allows the database to be free for the next access.
  • ADO.NET provides a federated view of all data. All data in the ADO.NET layer is held in an XML-based in-memory structure. It doesn't matter where the data comes from, whether from a SQL database on the machine next door or a Web service half a world away. Data is data, and it is all expressed in a uniform way.
  • ADO.NET is built on the .NET Framework. The data access technology is uniform. All classes are native, managed code running in the .NET Framework common language runtime environment.

ADO.NET architecture

ADO.NET includes a .NET Data Provider for connecting to a database, executing commands, and retrieving results. It consists of two main parts:

  • .NET Data Provider
  • DataSet classes

The ADO.NET architecture is shown in Figure 14.

Figure 14. ADO.NET data access architecture

ADO.NET works by creating an in-memory structure on the machine running the program. This allows the program to access the data while disconnected from the database. It also allows the programmer to specify a structure that is optimized for the program, rather than working with it in the form that it appears in the database.

Creating an ADO.NET infrastructure requires a few steps:

  1. First, the connection must be made to the database.
  2. Next, the structure of the DataTable object must be created.
  3. Then, the relationship between the SQL data tables and the DataTable object must be specified. This is done with a map.
  4. Finally, the DataTable object is loaded by executing a query or a stored procedure on the database.
  5. After that, the stage is set for accessing the database from our program.

That covers reading the data. What if we need to make changes to the SQL Server data tables? To do this, we just change the value of the nodes in the DataTable object and issue an update to the SqlDataAdapter object. It takes care of updating the database by issuing pre-specified SQL commands or stored procedures that perform INSERT, UPDATE, or DELETE commands.
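
Here is a minimal sketch of that update path. It assumes the adapter has been given INSERT, UPDATE, and DELETE commands; SqlCommandBuilder derives them here for brevity, although the FoodMovers classes will use stored procedures instead.

SqlDataAdapter adapter = new SqlDataAdapter(
   "SELECT InventoryID, UPC, Quantity FROM Inventory",
   Configuration.strConn);
SqlCommandBuilder builder =
   new SqlCommandBuilder(adapter);               // derives the commands

DataSet ds = new DataSet();
adapter.Fill(ds, "Inventory");                   // disconnected copy

ds.Tables["Inventory"].Rows[0]["Quantity"] = 42; // change the node value
adapter.Update(ds, "Inventory");                 // issues the UPDATE for us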

The .NET Data Provider components SqlCommand and SqlConnection provide read/write access to data. The SqlDataAdapter object provides the bridge between the .NET Data Provider and the DataSet object by creating a map, then issuing the "Fill" command. This populates the DataSet to match the data in the data source.

Given the appropriate connections and maps, the DataSet object can combine data from multiple sources, thus providing uniform data processing. Data can also come from OLE DB or XML.

From strategy to code in DataAccess

The steps above would be cumbersome for each programmer to implement, so we will create a set of classes that do the heavy lifting. These are the classes mentioned earlier in the DataAccess and Common projects.

1. Connect to the database

The Common project has classes that define the data structures that FoodMovers will use in mapping the data sources to DataSet data. In the code fragment below, the connection string Configuration.strConn is defined:

public const string strConn =
   "Data Source=FOODMOVERS-MAIN;" +
   "Integrated Security=SSPI;" +
   "Initial Catalog=FoodMovers;";

2. Create DataTable object

Let's take, for example, one of the tables in the database and see how the classes access the data.

The dtInventory structure is defined in the class Common/DataFolder/InventoryData. This is a public class definition that holds the data in the format that the project requires. InventoryData is the DataSet, which can be made up of one or more DataTables that are defined in the same class.

The code below creates the dtInventory structure, which holds the data from the Inventory table, and then builds the table by adding columns to this definition structure. dtInventory is our DataTable object:

DataTable dtInventory = new DataTable("Inventory");
DataColumn myDataColumn;

myDataColumn = 
   new DataColumn("UPC", typeof(System.String));
dtInventory.Columns.Add(myDataColumn);
myDataColumn = 
   new DataColumn("Quantity", typeof(System.Int32));
dtInventory.Columns.Add(myDataColumn);
myDataColumn = 
   new DataColumn("ShelfAddress", typeof(System.String));
dtInventory.Columns.Add(myDataColumn);
...

Multiple DataTable objects form a DataTableCollection, which makes up a DataSet. The DataTables are defined with columns, rows, and constraints.

Remember my discussion of faucets and cups? The dtInventory structure is the cup, in the form of a DataTable object. These classes can be used to access data that is queried from the database, but they can also be used to update the data in the database as described above.

Creating these classes can be cumbersome, particularly if the table comes from several different sources. Fortunately, Visual Studio provides a method for creating DataTable classes using XSD schemas. By adding an XML schema item to our project, then dragging the table from the server view to the XSD layout screen, we create an XSD schema, as shown in Figure 15.

Figure 15. Creating an XML schema directly from the database

From this schema, we can auto-generate a DataSet class that contains all of the DataTable objects we need. This class is then referenced wherever data structure is needed. If we need to change the structure, we just change the schema, and the DataSet class is re-generated.
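
The same generation can also be done outside the IDE with the .NET Framework SDK's XSD.EXE tool. The command below is a sketch, with the namespace chosen to match our Common project:

xsd InventoryData.xsd /dataset /language:CS /namespace:FoodMovers.Common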

3. Create map

The initialization of the DataSet takes place in the Inventory class of the DataAccess project. This is shown in the code sample below.

public Inventory()
{
   dsCommand = new SqlDataAdapter();
   dsCommand.SelectCommand = new SqlCommand();
   dsCommand.SelectCommand.Connection = new 
      SqlConnection(FoodMovers.Common.Configuration.strConn);

   DataTableMapping InventoryMap = 
      dsCommand.TableMappings.Add("Inventory", "Inventory");
   InventoryMap.ColumnMappings.Add("InventoryID", "ID");
   InventoryMap.ColumnMappings.Add("UPC", "UPC");
   InventoryMap.ColumnMappings.Add("Quantity", "Quantity");
   InventoryMap.ColumnMappings.Add("ShelfAddress",
      "ShelfAddress");
}

First, a DataSet command is instantiated as a SQL data adapter object. A connection is established to the database using the connection string described above.

The data will be accessed by stored procedures, as we will see shortly. The record set that comes back from the stored procedure has data with field names. We need to show how these fields are mapped, by name, to fields in our DataTable object.

Once the connection is established, we need to indicate how the results that come back from a stored procedure will be mapped to the fields we added in the DataTable object. Four column mappings are added to the ColumnMappings collection: InventoryID in the SQL database maps to ID in our DataTable object, while UPC, Quantity, and ShelfAddress map to columns with the same names. These mappings belong to the table mapping InventoryMap, which is used when we fill the DataTable object in the next step.

It is important to note that the mapping definition allows the SqlDataAdapter to create the table and its columns directly in the DataSet object.

As a result, we now have a map from the actual Inventory table in the SQL database to our dtInventory DataTable. You can think of dtInventory as the structure of the cup that holds our data, which is created as a result of running the map.

4. Load DataTable object

Now that all of the DataTable creation and mapping is defined, developers can fill the object for easy access. Because of our class definitions, it is a simple matter of invoking a method such as GetInventory, which runs a stored procedure with the appropriate parameters and fills the object.

The code below shows how to access a stored procedure using DataAdapter and DataSet.

public InventoryData GetInventory(string UPC)
{
   if (dsCommand == null ) 
      throw new
         System.ObjectDisposedException(GetType().FullName);

   InventoryData data = new InventoryData();
   SqlCommand command = dsCommand.SelectCommand;
   command.CommandText = "GetInventory";
   command.CommandType = CommandType.StoredProcedure;
   command.Parameters.Clear();
   SqlParameter param = 
      new SqlParameter("@UPC", SqlDbType.NVarChar, 255);
   param.Value = UPC;
   command.Parameters.Add(param);
   dsCommand.MissingMappingAction = 
      MissingMappingAction.Error;
   dsCommand.MissingSchemaAction = 
      MissingSchemaAction.Error;
   dsCommand.Fill(data,"Inventory");
   return data;
}

Notice that the Fill method of the dsCommand object takes the table name, "Inventory," which identifies the mapping that we created earlier.

The InventoryData class will now be filled with the results from the stored procedure GetInventory.

We will create similar code for each data table in our database.

Once the DataTable objects are defined, we need to write other data-oriented methods that are exposed publicly to other programs. Each of these methods fills a DataTable object, and then returns the appropriate information from the object.

5. Access the data from a program

Now, when a programmer wants access to the data, he or she just needs to invoke this method:

Inventory accInventory = new Inventory();
InventoryData datInventory = accInventory.GetInventory(strUPC);
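
The returned object can then be read like any other DataSet. For example (column names follow the Inventory mapping above):

foreach (DataRow row in datInventory.Tables["Inventory"].Rows)
{
   Console.WriteLine("UPC {0}: {1} on shelf {2}",
      row["UPC"], row["Quantity"], row["ShelfAddress"]);
}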

Designing Components

After designing the database, Pete must design the components of the system. This includes the methods, inputs and outputs, and functionality of each component.

Pete will specify the component design as a UML document. UML, the Unified Modeling Language, is a software industry standard used to design software systems and the components that make up those systems. The requirement is to produce the "blueprints" for the development phase.

UML diagrams are used to design all aspects of software systems, from database design to the interaction of components.

Visio supports architects in designing software systems by providing different UML diagram types and tools for these diagrams. Some of these UML diagrams are:

  • Static structure diagram: Defines attributes, operations, and other properties of software components, and associates components, such as classes and packages, with each other by defining the relationships between them.
  • Collaboration diagram: Defines the relationships between the components' roles and associates these roles by creating relationships between the users and the user interfaces.
  • Sequence diagram: Defines the sequence of the components and the order in which they will be invoked. It looks like a workflow diagram; however, it defines only how the components will be invoked, rather than a business workflow that embraces user interaction as well as the components.
  • Component diagram: Defines a high-level view, specifying packages, components, and nodes, along with their relationships.
  • State chart diagram: Defines the system's states and transitions between these states as events happen.
  • Deployment diagram: Defines the physical high-level picture of the server and client applications and the communications between them.
  • Activity diagram: Defines the action states, the sequence of the actions, and which package and program these actions act on.
  • Use-case diagram: Defines the high-level interaction between the users and the system, indicating how they will use the system.

The Visio UML static structure diagram also provides a code structure for the developers to start with. The Model Explorer view of the Order Manager Service is shown in Figure 16.

Figure 16. Order Manager Service Static Structure UML Diagram

The Order Manager Service and its components and operations are defined using the UML static structure diagram. Everything about this class, including its name, its operations, the parameter types those operations require, the target language of the code, and where it will be implemented, is designed using Visio.

Visio can also generate reports to further document the system.

At the end of the UML design task, Visio generates code for the components defined in the UML, using whatever language is preferred.

Figure 17 shows the dialog box of the UML code generation.

Figure 17. Generate Code for the Warehouse Manager

This will create a project with all of the classes as C# programs.

Designing a Service Interface

Service interface designs are required in order to identify whether each service is exposed internally or externally, and what kind of extensions are needed for each type of interface. For example, a single service, such as Order Manager, can be accessible both internally and externally. Each interface therefore requires a different security and authentication mechanism and a different commitment. These commitments are defined using a Service Level Agreement (SLA).

An SLA includes the following items:

  • Business parameters passed into and out of a service, including any schemas that define these parameters
  • Examples of documents that are fed into a service as parameters or returned from a service as a response
  • Access methods (synchronous or asynchronous) used to invoke the service
  • Any extensions that are necessary to access the service, such as the communication protocol that will be used or the security mechanisms required

Some services have multiple interfaces published for the same service. In this case, the authentication, service interfaces, communication channels, and transactional definitions of these interfaces will usually differ. Each of these interfaces and extension requirements must be designed before development begins.

A graphical example of the service level agreement for the order manager service is shown in Figure 18.

Figure 18. Service Level Agreement for Order Manager service

Here are some questions for designing services and SLAs:

  • Will the services offer synchronous or asynchronous access?
  • Are they consumed internally or externally? If internal, will .NET Remoting be used?
  • What kind of authentication and security mechanisms will be implemented?
  • Will they be exposed over protocols other than HTTP, such as message queues?
  • When the services change, how will the interfaces be affected? This matters for the loosely coupled architecture: the interface design must ensure that interfaces are not affected when service internals change.
  • How are the error and exception handling mechanisms affected? Are there transactions defined that involve services?
  • Are the errors and exceptions at the protocol layer carried to the applications? What kind of code is needed to accomplish this? For example, which errors that happen in the HTTP layer need to be carried to the application for error and exception handling?

Physical Deployment and Operational Requirements

Until now, I have discussed the logical architecture and design of the FoodMovers project. Another important facet of the architecture and design is the physical deployment and operational requirements. The physical deployment is mainly determined by how the logical project tiers interact with each other and what kind of security is required for each component, but it can also take into consideration aspects like load balancing and clustering.

For example, the machines that host business logic components will be deployed behind two firewalls for security. The machines that face user requests, the components that use IIS, will be deployed in the DMZ, the area between the first and the second firewall.

Figure 19 shows how the physical deployment will take place.

Figure 19. FoodMovers Project Physical Deployment

The components that operate statelessly (the IIS components) will be load-balanced to ensure that traffic is divided equally, that the machines stay up, and that waiting time for users is minimized.

The components that are aware of state, such as SQL Server and BizTalk Server, will be clustered to ensure that at any given time there is a server available and state is tracked. If there is a problem with the first machine, the second machine takes over. Because state is kept in disk arrays that both machines can access, it is not lost; the transaction picks up where it left off.

The SQL Servers in the data source are clustered for performance and fault tolerance.

The thin clients can access the Web farm through a firewall by using HTTP. SSL can also be applied when a secure connection is needed. The Web farm hosts ASP.NET pages and other UI and UI process components. Because the business layer communicates with the presentation layer by using SOAP messages, SOAP messages only require port 80 to travel through the firewall.

The rich clients are deployed on the intranet, but we have placed firewall protection internally for extra security. FoodMovers business logic is exposed internally and externally using services; therefore, the rich clients get access to data sources through the services. Rich clients can access the services' functionality by using SOAP messages.

The interaction between external services and clients is purely via SOAP messages. The application farm that hosts XML Web services, message queue listeners, and other service interfaces is exposed through the Web farm. External clients and services calling FoodMovers services can reside outside of the firewall. Traffic will be controlled, and only the needed ports will be opened.

What about internal service-to-service integration? This occurs in the business layer, where there are no firewalls or other security protections.

You'll learn more about deployment in Section 6, Going Live: Instrumentation, Testing, and Deployment.

Conclusion

In this section, you were introduced to the FoodMovers project management team. Each team member is responsible for part of the project, but is also responsible for assuring that his or her part integrates with the other parts.

This integration is made easier by the fact that the data access is managed by easy-to-use classes, and that all business logic is composed of reusable components called Web services. These Web services will be used to connect internal systems like the WinUI clients, as well as external systems, like the suppliers and the stores.

Pete, the project architect, is also heavily involved in the implementation phase of this project. He is responsible for architecting the stored procedures that the rest of the team will write and use to access the data tables. This is a pretty common position for the project architect, who probably has training in database design and execution, but might not have any procedural coding experience. Being in charge of the data also allows Pete to stay close to the project to assure that it completes successfully.

In the following sections, you'll learn about each of these systems and see sample code to make them all work. (To see an overview of the entire project, read FoodMovers: Building Distributed Applications using Microsoft Visual Studio .NET.)

These sections include:

Section 3, Developing Service-Oriented Architectures

Getting legacy applications to work together is one of the most time-consuming and expensive areas for the modern IT department. In the past 40 years, many different systems have been created that do not play nicely with each other. Getting these systems to talk is critical if an organization is to expose its internal business processes to internal departments as well as trusted external partners.

In this section, you'll learn about the concept of service-oriented architectures (SOA), which provides a scalable, reliable, secure interface for external as well as internal systems.

Then we will start to develop each of the four internal service interfaces, the Inventory Manager, Warehouse Manager, Order Manager, and Update Manager. Each of these applications can be accessed by internal systems or through an external interface to our business partners.

Section 4, Legacy and Business Partner Integration: Using Service-Oriented Architecture for Integration

Old systems and new systems need to live together and, most importantly, communicate important business data with one another. However, programs do not always support and understand each other's data formats, communications protocols, and languages. In addition, some programs provide humans with a view of the system. These user interfaces are designed to be consumed by humans, not by other programs. Furthermore, programs that must communicate may live in different organizations. How are we to integrate all of these systems?

In this section, you'll learn about routes to accessing information services and the methods used to access them. Then we will develop the EDI and XML interfaces with our suppliers Good Old Soup Company and Hearty Soup Company, and the order interface for the stores.

Section 5, Extensions: Building Blocks for Extra Functionality

By now, we have created a system using the tools in Visual Studio .NET Enterprise Architect Edition. But we have just a basic system. What about security? What about attachments? What about updates? What about administration? We could develop these ourselves, but it would be nice if there were an alternative to custom development of these pretty standard pieces.

What we need is a concept of interchangeable parts for software development. This has been tried again and again with varying success. The C programming language came with a standard library (stdlib) of functions, such as printf and sscanf, that most C programmers gladly used rather than writing their own. Later, the Microsoft Foundation Classes (MFC) for C++ development were made available to programmers working in an object-oriented Windows environment. Who wants to write a dialog box function if one is available that works and does mostly what is needed?

In this section, you'll learn about the Web service version of interchangeable parts. They take the form of standard extensions that are becoming available in the Web services universe. These extensions are part of Microsoft's Web Services Enhancements for Microsoft .NET (WSE). WSE extensions take the form of building blocks that can be integrated into a Web service quickly and easily. We will add attachments and security to our system to show how the building-block approach works.

Section 6, Going Live: Instrumentation, Testing, and Deployment

Once the architecture is designed and the code framework is created using Visual Studio, it is time to describe our plan for deployment and administration. In addition, there are several areas of implementation that need to be addressed before a robust, reliable, and secure architecture is deployed.

First, we need to develop a plan for "instrumentation." By placing "sensors" in our application, we can use instruments to provide a dashboard of our deployed system. Then we need to exercise the system in two areas, test and staging, before finally deploying the system in a production environment.

In this section, you'll see a plan for exception and event management, and be introduced to the concept of "exception mining," which provides a method for wading through the information stream coming from the application to find events that need attention.

About the author

Brian Travis is Chief Technical Officer and Founder of Architag International Corporation, a consulting and training company based in Englewood, Colorado. Brian is an expert in real-world XML implementations. Since founding Architag in 1993, he has created intelligent content management systems and e-business solutions for Architag clients around the world. Brian is also a noted instructor and popular lecturer in XML and related standards. In his role as principal instructor for Architag University, he has been teaching clients about XML in the U.S., Europe, Africa, and Asia.

Brian has lectured at seminars around the world and has written about XML, Web services, and related technologies. His most recent book, Web Services Implementation Guide, is a guide for IT architects and developers who need to integrate internal systems and external business partners. The book provides the basis for understanding the goals of Web services for the enterprise architect, project architect, deployment architect, developer, and manager. It outlines the steps an organization must take in order to align itself with the new world of Web services.