
Implementing System-Quality Attributes

 

Gabriel Morgan
Microsoft Corporation

March 2007

Summary: Build high-quality software, leverage industry practices, and plan to build quality into your solution; but be sure to prioritize carefully. (9 printed pages)

Contents

Introduction
Planning for Implementation of System Quality
Review
Critical-Thinking Questions
Sources

Introduction

When I was a teenager, my friend's father gave him the spare family station wagon as his first car. It was an old, decrepit Ford LTD station wagon, not exactly the kind of car that anyone our age would flaunt to attract the sort of attention that we were after. But it was the only car that either of us had, so it was just fine—that is, after we had fixed it up a bit, of course. We decided to give it the aesthetics of a roadster by raising the rear wheels, replacing the carburetor with a set of after-market dual carburetors, swapping the aged seats for racing bucket seats with five-point harnesses, and giving it a new coat of paint.

During that experience, I remember leafing through loads of catalogs full of parts from various after-market manufacturers, all of which matched the published Ford LTD station-wagon specifications for the 351 Cleveland V8 engine, the interior seat mounts, and the rear suspension brackets needed for the car's conversion. We chose what we could afford and installed the parts ourselves, following simple installation instructions. We didn't have to rebuild anything, force-fit any of the parts, or fabricate any custom mounts to reuse that old station wagon for our new purposes. At the time, the experience meant nothing to me other than a bit of fun over a couple of weeks during our summer break.

Now, let me tell you a very different story surrounding a very similar event. I once delivered a software solution for a customer who was a market leader in the online retail industry, focused primarily on selling computer goods. The solution consisted of a set of systems that automated the business processes from order acceptance through fulfillment. The customer was thrilled the day that the solution went into production. To him, it meant that he could reduce costs by automating many of the manual activities in these processes and direct his focus toward empowering his marketing group.

Wahoo! Drinks for everyone—until a couple of years later.

The customer's industry became increasingly competitive. To survive, he clearly needed to diversify into several lines of retail goods, outsource billing services to lower costs, lower his prices by comparing multiple suppliers' offerings, and commit to firm delivery dates by partnering with multiple shipping companies. In doing so, he hoped to win new customers and improve customer satisfaction.

But the systems that I had developed were not built to support this type of business change easily. Nor were they designed to be compatible with the flurry of IT packages that hit the market, offering specialized components for system integration, customer-care management, faster supplier enablement, marketing-campaign management, and so on. I ultimately had to redesign parts of the architecture, which took considerably more time than it would have taken had I designed for change in the first place. In the end, the systems required a major overhaul to meet the needs of the business.

Let's go back to my story of converting my friend's station wagon (repurposing it for the needs of two teenagers) versus my experience converting my customer's online retail systems (to support multistore channels and multi-supplier relationships, and to provide more sophisticated functionality). The two experiences were conceptually similar scenarios, but far different in execution. A few years ago, I began to think about this and ask myself a few questions. Would it have been possible for the original online retail system to be more resilient to changes in business processes, as well as changes in IT technologies? Is it possible to design systems to be more resilient to change and to reduce the odds of a major overhaul? I believe that it is.

I am not suggesting that we software architects are solely responsible for reaching the level of efficiency that the automobile industry enjoys today, as that takes experience; industrialization; collaboration; a healthy, competitive market of parts suppliers; and adherence to industry standards and quality control (over which we in the software industry salivate). What I am suggesting is that we consider learning from the design concepts that make the automobile industry so successful, and see how far we get.

I propose that solution architects, in addition to delivering working systems to customers, are responsible for ensuring system quality with the express intention of building systems to be resilient to change. In this article, I will describe an approach for building systems that are designed to sustain changes in business processes, as well as in technology platforms. Hopefully, solution architects can leverage this discussion and reap the long-term benefits of improving their customers' return on investment (ROI).

Planning for Implementation of System Quality

When you implement system quality, it's best to start with a plan. Software quality, by definition, is the degree to which software possesses a desired combination of attributes [IEEE 1992]. Therefore, to improve system quality, we must focus our attention on software-quality attributes. Ultimately, there are only a few system-quality attribute primitives to which all system qualities can map. In my experience, I have grown quite fond of the system-quality attributes that are listed in Table 1, which include agility, flexibility, performance, reusability, security, and others. Note that these by no means represent the definitive set of system-quality attributes, nor are the definitions the only acceptable ones to consider. I use the definitions below and have had relatively good success with them. Be sure to come up with your own set, so that you can clearly communicate to stakeholders how system quality will affect your architecture decisions.

Table 1. System-quality attributes

Agility: The ability of a system to both be flexible and undergo change rapidly. [MIT ESD 2001]
Flexibility: The ease with which a system or component can be modified for use in applications or environments other than those for which it was specifically designed. [Barbacci, 1995]
Interoperability: The ability of two or more systems or components to exchange information and to use the information that has been exchanged. [IEEE 1990]
Maintainability:
  • The aptitude of a system to undergo repair and evolution. [Barbacci, 2003]
  • (1) The ease with which a software system or component can be modified to correct faults, improve performance or other attributes, or adapt to a changed environment. (2) The ease with which a hardware system or component can be retained in, or restored to, a state in which it can perform its required functions. [IEEE 1990]
Performance: The responsiveness of the system—that is, the time required to respond to stimuli (events) or the number of events processed in some interval of time. Performance qualities are often expressed as the number of transactions per unit time, or as the amount of time that it takes to complete a transaction with the system. [Bass, 1998]
Reliability: The ability of the system to keep operating over time. Reliability is usually measured by mean time to failure. [Bass, 1998]
Reusability: The degree to which a software module or other work product can be used in more than one computer program or software system. This typically takes the form of reusing software that is an encapsulated unit of functionality. [IEEE 1990]
Scalability: The ability to maintain or improve performance while system demand increases.
Security: A measure of the system's ability to resist unauthorized attempts at usage and denial of service, while still providing its services to legitimate users. Security is categorized in terms of the types of threats that might be made to the system. [Bass, 1998]
Supportability: The ease with which a software system can be operationally maintained.
Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met. [IEEE 1990]
Usability:
  • The measure of a user's ability to utilize a system effectively. [Clements, 2002]
  • The ease with which a user can learn to operate, prepare inputs for, and interpret outputs of a system or component. [IEEE 1990]
  • A measure of how well users can take advantage of some system functionality. Usability is different from utility, which is a measure of whether that functionality does what is needed. [Barbacci, 2003]

Back to my story of the station-wagon conversion. I was pleased with how little effort it took to modify and extend the Ford LTD station wagon. Its designers anticipated that owners would want to replace parts with after-market components. So, the car was designed to be maintained—in my case, upgraded—and to be compatible with parts from other car-part vendors.

A few system qualities contributed to this positive experience, such as maintainability and testability, but one stands out: system flexibility. The Ford LTD designers specifically designed for flexibility—maybe not in those exact terms, but I do think that they considered making the automobile compatible with parts from other automobile-part manufacturers. Of course, the primary driver for optimizing flexibility was probably not the benefit of the after-market car-part manufacturers, but rather to give Ford the benefit of choosing from several bidding car-part manufacturers—thus allowing it to choose the right balance of quality and cost for its production model. As a consequence, car-part manufacturers benefited from after-market sales by providing upgraded replacement parts to the same flexible design specification.

A smart person once stated, "You can't manage what you can't measure." Therefore, you must think about how system quality will be measured, so that you can plan how to monitor it throughout the software life cycle. You will have to spell out exactly what you expect the system designers to build, so that the system can be optimized for quality. When you plan how to implement system quality, a good approach to leverage is the IEEE Software Quality Metrics Methodology [IEEE 1992]. Basically, this methodology suggests the following steps to define and monitor system-quality metrics:

  1. Establish software-quality requirements.
  2. Identify software-quality metrics.
  3. Implement software-quality metrics.
  4. Analyze the results of these metrics.
  5. Validate the metrics.
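To make these steps a bit more concrete, here is a minimal Java sketch of what steps 2 and 4 might look like in practice: a metric is recorded with a target taken from a quality requirement, and measured results are later analyzed against it. The class, names, and thresholds are all invented for illustration; this is a sketch of the idea, not a prescribed implementation.

    // Hypothetical sketch only: one simple way to record a software-quality
    // metric (step 2) so that measured results can be analyzed later (step 4).
    public final class QualityMetric {
        private final String attribute;       // e.g., "Flexibility"
        private final String name;            // e.g., "Days to re-point Find Customer"
        private final double target;          // the value that the requirement demands
        private final double worstAcceptable; // beyond this, raise a project risk

        public QualityMetric(String attribute, String name,
                             double target, double worstAcceptable) {
            this.attribute = attribute;
            this.name = name;
            this.target = target;
            this.worstAcceptable = worstAcceptable;
        }

        // Analyze a measured result against the plan (lower is better here).
        public String analyze(double measured) {
            String label = "[" + attribute + "] " + name;
            if (measured <= target) return label + ": meets target";
            if (measured <= worstAcceptable) return label + ": acceptable, but watch closely";
            return label + ": out of bounds; consider corrective action";
        }
    }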

Prioritizing System-Quality Attributes

The IEEE Software Quality Metrics Methodology [IEEE 1992] is more of a framework than a strict methodology. You must add the details yourself to make it work.

First, to speed up the process, prioritize the system-quality attributes before spending time identifying software-quality metrics. What is the point of identifying metrics for low-priority system-quality attributes that you are not going to monitor?

Second, do not follow steps 4 and 5 as a strictly serial process, because it is good practice to monitor for the existence of well-engineered system-quality requirements as part of the overall solution requirements. As Karl Wiegers notes, "software-quality attributes, or quality factors, are part of the system's nonfunctional (also called non-behavioral) requirements." [Wiegers, 2003] The assumption here is that including system-quality requirements in the solution's project requirements improves the chances of a high-quality solution. For this reason, do not assume that you must complete each step before moving to the next, or you might miss the boat on an opportunity to improve system quality. For example, you might find yourself well into the design phase of your project by the time that you complete steps 1 through 3. By the time you monitor and analyze the metrics for the system-quality requirements, you might find that it is too late to go back and inject missing system-quality requirements into the project, which leaves you at greater risk of developing a poor-quality solution.

At this point, you should have a set of defined system-quality attributes upon which to draw, so the next step is to prioritize them for your particular solution. You should also think about how you will go about monitoring for each attribute throughout the software-development life cycle, with the aim of determining if system quality is being implemented.

Ideally, you would optimize for all quality attributes, but this is nearly impossible, because any given system has trade-off points [Clements, 2002] that prevent it. A trade-off point is a property that affects one or more attributes. Essentially, changing one quality attribute often forces a change in another, either positively or negatively. This matters because knowing the prioritized system-quality attributes and their trade-off points aids the decision-making process during design activities.

Table 2, adapted from the book Software Requirements, Second Edition [Wiegers, 2003], describes an example set of system-quality attributes and their respective trade-off points: for each attribute, optimizing it tends to help some attributes and hurt others.

Table 2. System-quality trade-off points

  • Availability: helps reliability and robustness.
  • Efficiency: hurts flexibility, interoperability, maintainability, portability, reliability, robustness, testability, and usability.
  • Flexibility: helps maintainability, portability, reliability, and robustness; hurts efficiency and integrity.
  • Integrity: hurts efficiency, interoperability, reusability, testability, and usability.
  • Interoperability: helps flexibility and portability; hurts efficiency and integrity.
  • Maintainability: helps availability, flexibility, reliability, and testability; hurts efficiency.
  • Portability: helps flexibility, interoperability, reusability, and testability; hurts efficiency, maintainability, and usability.
  • Reliability: helps availability, flexibility, maintainability, robustness, testability, and usability; hurts efficiency.
  • Reusability: helps flexibility and testability; hurts efficiency, integrity, and reliability.
  • Robustness: helps availability, reliability, and usability; hurts efficiency.
  • Testability: helps availability, flexibility, maintainability, reliability, and usability; hurts efficiency.
  • Usability: helps robustness; hurts efficiency and testability.

Here is a quick example to illustrate trade-off points. Say that the business needs a system service for processing banking transactions, and the requirements note that it must be fast. Assuming that the system designer has not read this article, the designer immediately begins to optimize the service for performance. In this hypothetical case, the designer might opt to build a high-performance application by building a system that does the following (a simplified code sketch appears after the list):

  • Captures system requests directly from a UDP communications port.
  • Uses proprietary message communication semantics.
  • Accepts system requests and processes application logic for system requests in a single process space.
  • Embeds application logic in procedural code.
  • Persists data in local memory space for quick put/get operations.
  • Sends responses to system calls in real time to a high-performance receiving application, preferably as close to the hardware level as possible using a proprietary binary communication transport.
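To make the trade-off discussion that follows concrete, here is a deliberately simplified Java sketch of the first few bullets: requests arrive on a raw UDP port, the message format is proprietary, and all logic and state live in one process. The port number and message layout are invented for illustration.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.nio.charset.StandardCharsets;
    import java.util.HashMap;
    import java.util.Map;

    // Deliberately simplified sketch of the "fast at all costs" design: raw UDP,
    // a proprietary text format, and logic plus storage in a single process.
    public final class FastTransactionService {
        // In-memory only: all balances are lost on a process fault or restart.
        private final Map<String, Long> balances = new HashMap<>();

        public static void main(String[] args) throws Exception {
            new FastTransactionService().run();
        }

        private void run() throws Exception {
            try (DatagramSocket socket = new DatagramSocket(9999)) {
                byte[] buffer = new byte[512];
                while (true) {
                    DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                    socket.receive(packet); // UDP: packets can be silently dropped
                    String request = new String(packet.getData(), 0, packet.getLength(),
                                                StandardCharsets.US_ASCII);
                    // Proprietary semantics, "account|amount" (opaque to other systems);
                    // no validation at all, which is part of the fragility.
                    String[] parts = request.split("\\|");
                    balances.merge(parts[0], Long.parseLong(parts[1].trim()), Long::sum);
                }
            }
        }
    }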

A system such as this would have a number of trade-off points. The three that stick out for me are the following:

  • Interoperability—It will be difficult to interoperate, because of the unreliable UDP transport and the proprietary message formats that are not based on industry standards.
  • Flexibility—The application's decomposition does not lend itself to change or reuse for other purposes, because of the proprietary protocols, the tightly coupled messaging and application logic, and the proprietary memory store. The application also lacks encapsulated boundaries that would allow other components to be plugged into the system.
  • Reliability—UDP delivery is unreliable, so packets can be lost. In-memory data is not persisted to disk and can be lost if the memory is released through a process fault, a reboot, or other means.

It might be that these sacrifices are intentional and accepted in order to achieve a high level of performance—but maybe not. The business and IT owners might actually want the system to be flexible enough to withstand changes to the business processes that it supports, as well as the technology changes that are inevitable. The point is that trade-offs are made whenever system-quality attributes are optimized; if systems are designed with this understanding up front, there will be fewer surprises down the road.

Several processes describe how to prioritize system-quality attributes to derive system architecture, such as the Software Engineering Institute's (SEI) Attribute-Driven Design method [Bachmann, 2000]. In addition, there are some fairly well-documented trade-off points from SEI's Architecture Trade-Off Analysis Method (ATAM). ATAM is a sophisticated method for determining trade-off points by way of attribute characterizations that use the Stimuli-Architectural Decision-Responses construct [Kazman, 2000].

ATAM is focused on the evaluation of software, and the method includes techniques for identifying and prioritizing system-quality attributes. There are several other techniques and methods in the industry for identifying system-quality trade-off points. Which of these practices to use, and how heavily to lean on each, depends on factors such as budget, resource competency, time to deliver, and the size and complexity of the solution.

I often tend just to adopt key concepts from these, infer a set of likely important system-quality attributes from the solution requirements, and collaborate with the key stakeholders to decide on three high-priority system-quality attributes on which to focus. So, my advice to you is to understand the available approaches and weigh them for each solution, in order to determine what is best to include in your plan.
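As one lightweight illustration of that stakeholder collaboration, here is a Java sketch that tallies weighted stakeholder scores and keeps the top three attributes. The scoring scale, the weights, and the cutoff of three are assumptions made for illustration (the cutoff mirrors the three high-priority attributes described above); this is not part of any published method.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical sketch: pick the three highest-priority quality attributes
    // from weighted stakeholder scores (say, 1-5 per attribute, per stakeholder).
    public final class AttributePrioritizer {
        public static List<String> topThree(
                Map<String, Map<String, Integer>> scoresByStakeholder,
                Map<String, Double> stakeholderWeight) {
            Map<String, Double> totals = new HashMap<>();
            scoresByStakeholder.forEach((stakeholder, scores) ->
                scores.forEach((attribute, score) ->
                    totals.merge(attribute,
                                 score * stakeholderWeight.getOrDefault(stakeholder, 1.0),
                                 Double::sum)));
            return totals.entrySet().stream()
                    .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
                    .limit(3)                 // focus on the top three attributes
                    .map(Map.Entry::getKey)
                    .toList();
        }
    }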

Monitoring System-Quality Attributes

Up to this point, we have defined a plan. We are now ready to define the metrics of system quality and monitor and track them throughout the software-development life cycle. The purpose of using metrics is to reduce subjectivity during monitoring activities and provide quantitative data for analysis.

So, what metrics should a solution architect inject into the solution to address system quality better? Any good system architecture has quality requirements, so this is a good place to start. To help explain this, consider flexibility, which is a specific system-quality attribute that I personally consider very important in delivering agility to the business and IT stakeholders. The following are some flexibility metrics to consider, as an example of what to monitor for.

Quality-of-Service (QoS) System Requirements

The solution architect is responsible for the integrity of a solution, and poorly defined system requirements can lead to confusion downstream and result in a poor-quality solution. To mitigate this, ensure that the system requirements include QoS requirements, especially ones that correlate to the prioritized system-quality attributes. Also, accompanying QoS requirements with use cases and quality scenarios [Bass Kazman, 1999] will further improve communication of the requirements to the project team and reduce misinterpretation.

For the purposes of the flexibility example, here are two QoS examples (a code sketch illustrating the first appears after the list):

  • "With no more than 1 hour of labor, a business user who has at least 6 months of experience in the field shall be able to modify core business processes automated by the system, without requiring a change in the system's source code—which, when done, will place the changes into a queue to be tested." This requirement demonstrates the ability for the system to withstand changes in the business process.
  • "A software engineer can change the Find Customer function in the solution to point from the current SAP CRM module to the Microsoft CRM–provided Customer Search service interface (including the effort required to research, design, code, unit test, and document) and to release the code changes to the testing environment within 1 day." This requirement demonstrates the ability for the system to withstand changes to IT systems.

System Patterns that Improve Quality

Solution architects should also identify the system patterns to be used in optimizing for the prioritized set of system-quality attributes. In my example of system flexibility, patterns that improve it include façade [Gamma, 1995], adapter [Gamma, 1995], service layer [Fowler, 2003], and so on.
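As a hedged illustration, here is how the adapter pattern might serve the earlier Find Customer QoS requirement. The CRM client class is an invented stand-in, not a real vendor API; the point is that callers depend only on the CustomerSearch interface, so re-pointing the system to a different CRM means writing one new adapter rather than changing the callers.

    // Sketch of the adapter pattern [Gamma, 1995] applied to the Find Customer
    // example. SapCrmClient is a made-up stand-in for a vendor SDK class.
    public interface CustomerSearch {
        String findCustomer(String customerId);
    }

    // Stand-in for a vendor client; a real one would come from the CRM's SDK.
    final class SapCrmClient {
        String lookupCustomerRecord(String id) { return "customer:" + id; }
    }

    // Adapter that makes the vendor client fit the system's own interface.
    // Swapping CRMs means writing another adapter, not touching the callers.
    final class SapCustomerSearchAdapter implements CustomerSearch {
        private final SapCrmClient sap = new SapCrmClient();

        public String findCustomer(String customerId) {
            return sap.lookupCustomerRecord(customerId);
        }
    }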

System Antipatterns that Degrade Quality

Solution architects should provide additional guidance on what system designers are to avoid, and one method to do this is to identify the antipatterns that negatively affect system quality. In my example of system flexibility, antipatterns such as shared database [Hohpe Woolf, 2004] and data replication [Fuller Morgan, 2006] degrade system flexibility.

The point is that it's up to the solution architect to identify the metrics that will be used to measure system quality. Unfortunately, no one has fully defined all such measures on a scientific basis, so this can be a challenge. Aim to define as much as you can; leverage existing metric definitions from industry practices as a starting point; and build on them from your own experience.

As soon as the system-quality metrics are defined, embed them into the solution artifacts and monitor for their adoption throughout the software life cycle. During the build and testing phases, review the system and find deviations from what you defined. Any deviations should be considered for correction.
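Continuing the hypothetical QualityMetric sketch from earlier, a build-phase review might record a measured result and flag any deviation for correction:

    // Hypothetical usage: measure a flexibility metric during a review and
    // compare it against the planned target from the QoS requirement.
    public final class BuildPhaseReview {
        public static void main(String[] args) {
            QualityMetric crmSwap = new QualityMetric(
                    "Flexibility",
                    "Days to re-point Find Customer to another CRM",
                    1.0,   // target taken from the QoS requirement
                    3.0);  // beyond this, raise a project risk
            System.out.println(crmSwap.analyze(2.0)); // acceptable, but watch closely
        }
    }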

By implementing software quality into a solution as described in this article, you will be better positioned to give the business owner a quality solution that is more likely to withstand changes in the business and technology—thereby maximizing the ROI.

Review

Let's go through a quick summary of the three key points in this article.

  1. First, build high-quality software.
  2. Second, leverage the industry practices that guide solution architects to build high-quality software systems.
  3. Third, build a plan for implementing system quality into your solution, and avoid optimizing for all quality attributes (as this is nearly impossible to do). Instead, prioritize the quality attributes, and focus your attention on the top three. Ideally, if you succeed, you will have improved the chances that your software will last, and you might also reap the long-term benefits of improving your customer's ROI.

Critical-Thinking Questions

  • What are the most significant system-quality attributes that contribute to agility, and how would you weight them?
  • Besides QoS requirements, design patterns, and antipatterns, what other measurable metrics could you monitor to track system quality?
  • What other methods—beyond giving guidance to system designers—could you employ to improve system quality? That is, are there methods that other team members could adopt as their responsibility to improve system quality?

Sources

  • [Bachmann, 2000] Bachmann, Felix, Len Bass, Gary Chastek, Patrick Donohoe, and Fabio Peruzzi. "The Architecture-Based Design Method." Technical Report CMU/SEI-2000-TR-001 ADA375851. 2000. Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA.
  • [Barbacci, 1995] Barbacci, Mario, Mark Klein, Thomas Longstaff, and Charles Weinstock. "Quality Attributes." Technical Report CMU/SEI-95-TR-021 ESC-TR-95-021. 1995. Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA.
  • [Barbacci, 2003] Barbacci, Mario. "Software-Quality Attributes and Architecture Trade-Offs." 2003. Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA.
  • [Bass, 1998] Bass, Len, Paul Clements, and Rick Kazman. Software Architecture in Practice. Reading, MA: Addison-Wesley Publishing Co., 1998.
  • [Bass Kazman, 1999] Bass, Len, Paul Clements, and Rick Kazman. "Architecture-Based Development." 1999. Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA.
  • [Clements, 2002] Clements, Paul, Rick Kazman, and Mark Klein. Evaluating Software Architectures: Methods and Case Studies. Boston, MA: Addison-Wesley Professional, 2002.
  • [Fowler, 2003] Fowler, Martin. Patterns of Enterprise Application Architecture. Boston, MA: Addison-Wesley Professional, 2003.
  • [Fuller Morgan, 2006] Fuller, Tom, and Shawn Morgan. "Data Replication as an Enterprise SOA Antipattern." 2006. The Architecture Journal, Issue 8. Microsoft Corporation, Redmond, WA.
  • [Gamma, 1995] Gamma, Erich, Richard Helm, Ralph Johnson, and John Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Boston, MA: Addison-Wesley Professional, 1995.
  • [Hohpe Woolf, 2004] Hohpe, Gregor, and Bobby Woolf. Enterprise Integration Patterns. Boston, MA: Addison-Wesley Professional, 2004.
  • [IEEE 1990] Institute of Electrical and Electronics Engineers. Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries (610-1990). New York, NY: IEEE Press, 1990.
  • [IEEE 1992] "IEEE Standard for a Software-Quality Metrics Methodology." IEEE Std 1061-1992. 1992. Institute of Electrical and Electronics Engineers, New York, NY.
  • [Kazman, 2000] Kazman, Rick, Mark Klein, and Paul Clements. "ATAM: Method for Architecture Evaluation." Technical Report CMU/SEI-2000-TR-004 ADA382629. 2000. Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA.
  • [MIT ESD 2001] Allen, Tom, Joel Moses, Dan Hastings, Seth Lloyd, John Little, Don McGowan, Chris Magee, Fred Moavenzadeh, Debbie Nightingale, Dan Roos, and Dan Whitney. "ESD Terms and Definitions (Version 12)." ESD-WP-2002-01. 2001. Massachusetts Institute of Technology, Engineering Systems Division, Cambridge, MA.
  • [Wiegers, 2003] Wiegers, Karl E. Software Requirements, Second Edition. Redmond, WA: Microsoft Press, 2003.

 

About the author

Gabriel Morgan has over 10 years of experience developing software and is currently an enterprise application architect on Microsoft's IT Enterprise Architecture team. He has extensive experience in the design, development, and implementation of distributed software systems, and in solving enterprise-application problems such as flexibility, performance, and scalability. Gabriel has a patent-pending process and tool for performing system-quality reviews, called the Microsoft System Review. His current interests lie in deriving system architecture from business strategy.

This article was published in Skyscrapr, an online resource provided by Microsoft. To learn more about architecture and the architectural perspective, please visit skyscrapr.net.