Chapter 7 – Building Secure Assemblies

 

Retired Content

This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.


Improving Web Application Security: Threats and Countermeasures

J.D. Meier, Alex Mackman, Michael Dunner, Srinath Vasireddy, Ray Escamilla and Anandha Murukan
Microsoft Corporation

Published: June 2003

Last Revised: January 2006

Applies to:

  • .NET Framework version 1.1

See the "patterns & practices Security Guidance for Applications Index" for links to additional security resources.

See the Landing Page for the starting point and a complete overview of Improving Web Application Security: Threats and Countermeasures.

Summary: This chapter shows you how to improve the security design and implementation of your assemblies. This includes evaluating deployment considerations, following solid object-oriented programming practices, tamperproofing your code, ensuring that internal system-level information is not revealed to the caller, and restricting who can call your code. The chapter also shows you how to improve the security of your resource access code, including code that performs file I/O, accesses the registry and event log, calls unmanaged code, and accesses networks and databases.

Contents

In This Chapter
Overview
How to Use This Chapter
Threats and Countermeasures
Privileged Code
Assembly Design Considerations
Class Design Considerations
Strong Names
Authorization
Exception Management
File I/O
Event Log
Registry
Data Access
Unmanaged Code
Delegates
Serialization
Threading
Reflection
Obfuscation
Cryptography
Summary
Additional Resources

In This Chapter

  • Improving the security of your assemblies with simple, proven coding techniques.
  • Reducing the attack surface through well-designed interfaces and solid object oriented programming techniques.
  • Using strong names and tamperproofing your assemblies.
  • Reducing the risks associated with calling unmanaged code.
  • Writing secure resource access code including file I/O, registry, event log, database, and network access.

Overview

Assemblies are the building blocks of .NET Framework applications and are the unit of deployment, version control, and reuse. They are also the unit of trust for code access security (all the code in an assembly is equally trusted). This chapter shows you how to improve the security design and implementation of your assemblies. This includes evaluating deployment considerations, following solid object-oriented programming practices, tamperproofing your code, ensuring that internal system level information is not revealed to the caller, and restricting who can call your code.

Managed code, the .NET Framework, and the common language runtime eliminate several important security related vulnerabilities often found in unmanaged code. Type-safe code verification is a good example: it makes it virtually impossible for buffer overflows to occur in managed code, which all but eliminates the threat of stack-based code injection. However, if you call unmanaged code, buffer overflows can still occur. In addition, you must consider many other issues when you write managed code.

How to Use This Chapter

The following are recommendations on how to use this chapter:

  • Use this chapter in conjunction with Chapter 8, "Code Access Security in Practice." Chapter 8 shows you how to use code access security features to further improve the security of your assemblies.
  • Use the corresponding checklist. For a checklist that summarizes the best practices and recommendations for both chapters, see "Checklist: Security Review for Managed Code" in the Checklists section of this guide.

Threats and Countermeasures

Understanding threats and the common types of attack helps you to identify appropriate countermeasures and allows you to build more secure and robust assemblies. The main threats are:

  • Unauthorized access or privilege elevation, or both
  • Code injection
  • Information disclosure
  • Tampering

Figure 7.1 illustrates these top threats.


Figure 7.1

Assembly-level threats

Unauthorized Access or Privilege Elevation, or both

The risk with unauthorized access, which can lead to privilege elevation, is that an unauthorized user or unauthorized code can call your assembly and execute privileged operations and access restricted resources.

Vulnerabilities

Vulnerabilities that can lead to unauthorized access and privilege elevation include:

  • Weak or missing role-based authorization
  • Internal types and type members are inadvertently exposed
  • Insecure use of code access security assertions and link demands
  • Non-sealed and unrestricted base classes, which allow any code to derive from them

Attacks

Common attacks include:

  • A luring attack where malicious code accesses your assembly through a trusted intermediary assembly to bypass authorization mechanisms
  • An attack where malicious code bypasses access controls by directly calling classes that do not form part of the assembly's public API

Countermeasures

Countermeasures that you can use to prevent unauthorized access and privilege elevation include:

  • Use role-based authorization to provide access controls on all public classes and class members.
  • Restrict type and member visibility to limit which code is publicly accessible.
  • Sandbox privileged code and ensure that calling code is authorized with the appropriate permission demands.
  • Seal non-base classes or restrict inheritance with code access security.

Code Injection

With code injection, an attacker executes arbitrary code using your assembly's process level security context. The risk is increased if your assembly calls unmanaged code and if your assembly runs under a privileged account.

Vulnerabilities

Vulnerabilities that can lead to code injection include:

  • Poor input validation, particularly where your assembly calls into unmanaged code
  • Accepting delegates from partially trusted code
  • Over-privileged process accounts

Attacks

Common code injection attacks include:

  • Buffer overflows
  • Invoking a delegate from an untrusted source

Countermeasures

Countermeasures that you can use to prevent code injection include:

  • Validate input parameters.
  • Validate data passed to unmanaged APIs.
  • Do not accept delegates from untrusted sources.
  • Use strongly typed delegates and deny permissions before calling the delegate.
  • To further reduce risk, run assemblies using least privileged accounts.

Information Disclosure

Assemblies can suffer from information disclosure if they leak sensitive data such as exception details and clear text secrets to legitimate and malicious users alike. It is also easier to reverse engineer an assembly's Microsoft Intermediate Language (MSIL) into source code than it is with binary machine code. This presents a threat to intellectual property.

Vulnerabilities

Vulnerabilities that can lead to information disclosure include:

  • Weak or no formal exception handling
  • Hard-coded secrets in code

Attacks

Common attacks include:

  • Attempting to cause errors by passing malformed input to the assembly
  • Using ILDASM on an assembly to steal secrets

Countermeasures

Countermeasures that you can use to prevent information disclosure include:

  • Solid input validation
  • Structured exception handling and returning generic errors to the client
  • Not storing secrets in code
  • Obfuscation tools to foil decompilers and protect intellectual property

Tampering

The risk with tampering is that your assembly is modified by altering the MSIL instructions in the binary DLL or EXE assembly file.

Vulnerabilities

The primary vulnerability that exposes your assembly to tampering is the lack of a strong name signature.

Attacks

Common attacks include:

  • Direct manipulation of MSIL instructions
  • Reverse engineering MSIL instructions

Countermeasures

To counter the tampering threat, use a strong name to sign the assembly with a private key. When a signed assembly is loaded, the common language runtime detects if the assembly has been modified in any way and will not load the assembly if it has been altered.

Privileged Code

When you design and build secure assemblies, you must be able to identify privileged code. This has important implications for code access security. Privileged code is managed code that accesses secured resources or performs other security sensitive operations, such as calling unmanaged code, using serialization, or using reflection. It is referred to as privileged code because it must be granted permission by code access security policy to function. Non-privileged code only requires the permission to execute.

Privileged Resources

The types of resources for which your code requires code access security permissions include the file system, databases, registry, event log, Web services, sockets, DNS databases, directory services, and environment variables.

Privileged Operations

Other privileged operations for which your code requires code access security permissions include calling unmanaged code, using serialization, using reflection, creating and controlling application domains, creating Principal objects, and manipulating security policy.

For more information about the specific types of code access security permissions required for accessing resources and performing privileged operations, see "Privileged Code" in Chapter 8, "Code Access Security in Practice."

Assembly Design Considerations

One of the most significant issues to consider at design time is the trust level of your assembly's target environment, which affects the code access security permissions granted to your code and to the code that calls your code. This is determined by code access security policy defined by the administrator, and it affects the types of resources your code is allowed to access and other privileged operations it can perform.

When designing your assembly, you should:

  • Identify privileged code
  • Identify the trust level of your target environment
  • Sandbox highly privileged code
  • Design your public interface

Identify Privileged Code

Identify code that accesses secured resources or performs security sensitive operations. This type of code requires specific code access security permissions to function.

Identify Privileged Resources

Identify the types of resources your assembly needs to access; this allows you to identify any potential problems that are likely to occur if the environment your assembly ultimately runs in does not grant the relevant code access security permissions. In this case, you are forced either to update code access security policy for your application (if the administrator allows this) or to sandbox your privileged code. For more information about sandboxing, see Chapter 9, "Using Code Access Security with ASP.NET."

Identify Privileged Operations

Also identify any privileged operations that your assembly needs to perform, again so that you know which code access permissions your code requires at runtime.

Identify the Trust Level of Your Target Environment

The target environment that your assembly is installed in is important because code access security policy may constrain what your assembly is allowed to do. If, for example, your assembly depends on the use of OLE DB, it will fail in anything less than a full trust environment.

Note   In .NET 2.0, the System.Data.OleDb managed provider no longer requires full trust. It only requires the OleDbPermission. This allows developers of partial trust environments to access non-SQL databases. For ASP.NET applications, this permission is not granted by medium trust policy, although you can create custom ASP.NET trust-level policy files to allow partial trust ASP.NET applications to use OLE DB data sources. For more information on creating custom trust-level policy, see "How To: Use Code Access Security in ASP.NET 2.0."

Full Trust Environments

Full trust means that code has an unrestricted set of code access security permissions, which allows the code to access all resource types and perform privileged operations, subject to operating system security. A full trust environment is the default environment for a Web application and supporting assemblies installed on a Web server, although this can be altered by configuring the <trust> element of the application.
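
For example, the following Web.config fragment is a minimal sketch that lowers an ASP.NET application from the default full trust to one of the standard partial trust levels.

<configuration>
  <system.web>
    <trust level="Medium" originUrl=""/>
  </system.web>
</configuration>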

Partial Trust Environment

A partial trust environment is anything less than full trust. The .NET Framework has several predefined trust levels that you can use directly or customize to meet your specific security requirements. The trust level may also be diminished by the origin of the code. For example, code on a network share is trusted less than code on the local computer and as a result is limited in its ability to perform privileged operations.

Supporting Partial Trust Callers

The risk of a security compromise increases significantly if your assembly supports partial trust callers (that is, code that you do not fully trust). Code access security has additional safeguards to help mitigate the risk. For additional guidelines that apply to assemblies that support partial trust callers, see Chapter 8, "Code Access Security in Practice." Without additional programming, your code supports partial trust callers in the following two situations:

  • Your assembly does not have a strong name.
  • Your assembly has a strong name and includes the AllowPartiallyTrustedCallersAttribute (APTCA) assembly level attribute.

Why Worry About the Target Environment?

The trust environment that your assembly runs in is important for the following reasons:

  • A partial trust assembly can only gain access to a restricted set of resources and perform a restricted set of operations, depending upon which code access security permissions it is granted by code access security policy.
  • A partial trust assembly cannot call a strong named assembly unless it includes AllowPartiallyTrustedCallersAttribute.
  • Other partial trust assemblies may not be able to call your assembly because they do not have the necessary permissions. The permissions that a calling assembly must have to be able to call your assembly are determined by:
    • The types of resources your assembly accesses
    • The types of privileged operation your assembly performs

Sandbox Highly Privileged Code

To avoid granting powerful permissions to a whole application just to satisfy the needs of a few methods that perform privileged operations, sandbox privileged code and put it in a separate assembly. This allows an administrator to configure code access security policy to grant the extended permissions to the code in the specific assembly and not to the whole application.

For example, if your application needs to call unmanaged code, sandbox the unmanaged calls in a wrapper assembly, so that an administrator can grant the unmanaged code permission (SecurityPermission with the UnmanagedCode flag) to the wrapper assembly and not to the whole application.

Note   Sandboxing entails using a separate assembly and asserting security permissions to prevent full stack walks.

For more information about sandboxing unmanaged API calls, see "Unmanaged Code" in Chapter 8, "Code Access Security in Practice."
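
The following sketch illustrates the sandboxing pattern; the wrapper class and method names are hypothetical. The assertion stops the stack walk for the unmanaged code demand at the wrapper assembly, so only the wrapper, and not every caller, needs the permission.

using System.Security.Permissions;

public sealed class NativeMethodsWrapper
{
  // The assert prevents the unmanaged code demand from propagating
  // to callers; the wrapper assembly itself must still be granted
  // the permission by policy.
  [SecurityPermission(SecurityAction.Assert, UnmanagedCode=true)]
  public static void DoPrivilegedWork()
  {
    // P/Invoke calls go here.
  }
}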

Design Your Public Interface

Think carefully about which types and members form part of your assembly's public interface. Limit the assembly's attack surface by minimizing the number of entry points and using a well designed, minimal public interface.

Class Design Considerations

In addition to using a well defined and minimal public interface, you can further reduce your assembly's attack surface by designing secure classes. Secure classes conform to solid object oriented design principles, prevent inheritance where it is not required, and limit which users and which code can call them. The following recommendations help you design secure classes:

  • Restrict class and member visibility
  • Seal non base classes
  • Restrict which users can call your code
  • Expose fields using properties

Restrict Class and Member Visibility

Use the public access modifier only for types and members that form part of the assembly's public interface. This immediately reduces the attack surface because only public types are accessible by code outside the assembly. All other types and members should be as restricted as possible. Use the private access modifier wherever possible. Use protected only if the member should be accessible to derived classes and use internal only if the member should be accessible to other classes in the same assembly.

Note   C# also allows you to combine protected and internal to create a protected internal member, which limits access to the current assembly or derived types.
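
The following sketch, with illustrative type and member names, shows how the access modifiers map to these guidelines.

public class AccountManager              // public: part of the assembly's public interface
{
  private string connectionString;       // private: accessible only within this class

  internal void FlushCache() {}          // internal: accessible only within this assembly

  protected virtual void OnUpdate() {}   // protected: accessible to derived classes

  protected internal void Audit() {}     // protected internal: derived types or same assembly

  public void UpdateAccount() {}         // public: a deliberate entry point
}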

Seal Non-Base Classes Where Appropriate

By default types should be unsealed. You should seal non-base classes to prevent inheritance if either of the following conditions is true:

  • The class contains security secrets, such as passwords, that are accessible through protected APIs.
  • The class contains many virtual members that cannot be sealed and the type is not really designed for third-party extensibility.

You prevent inheritance by using the sealed keyword as shown in the following code sample.

public sealed class NobodyDerivesFromMe
{}

Note   In Microsoft Visual Basic .NET, you can use the NotInheritable keyword at the class level, or NotOverridable at the method level.

For base classes, you can restrict which other code is allowed to derive from your class by using code access security inheritance demands. For more information, see "Authorizing Code" in Chapter 8, "Code Access Security in Practice."

Restrict Which Users Can Call Your Code

Annotate classes and methods with declarative principal permission demands to control which users can call your classes and class members. In the following example, only members of the specified Windows group can access the Orders class. A class level attribute like this applies to all class members. Declarative principal permission demands can also be used on individual methods. Method level attributes override class level attributes.

[PrincipalPermission(SecurityAction.Demand,
                     Role=@"DomainName\WindowsGroup")]
public sealed class Orders
{
}

Also, you can strong name your assembly to ensure that partially trusted callers cannot call into it. See "Strong Names" below for more information.

Expose Fields Using Properties

Make all fields private. To make a field value accessible to external types, use a read-only or a read/write property. Properties allow you to add additional constraints, such as input validation or permission demands, as shown in the following code sample.

public sealed class MyClass
{
  private string field; // field is private
  // Only members of the specified group are able to
  // access this public property
  [PrincipalPermission(SecurityAction.Demand,
          Role=@"DomainName\WindowsGroup")]
  public string Field
  {
    get {
        return field;
    }
  }
}

Strong Names

An assembly strong name consists of a text name, a version number, optionally a culture, a public key (which often represents your development organization), and a digital signature. You can see the various components of the strong name by looking into Machine.config and seeing how a strong named assembly is referenced.

The following example shows how the System.Web assembly is referenced in Machine.config. In this example, the assembly attribute shows the text name, version, culture and public key token, which is a shortened form of the public key.

<add assembly="System.Web, Version=1.0.5000.0, Culture=neutral,
               PublicKeyToken=b03f5f7f11d50a3a" />

Whether or not you should strong name an assembly depends on the way in which you intend it to be used. The main reasons for wanting to add a strong name to an assembly include:

  • You want to ensure that partially trusted code is not able to call your assembly.

    The common language runtime prevents partially trusted code from calling a strong named assembly, by adding link demands for the FullTrust permission set. You can override this behavior by using AllowPartiallyTrustedCallersAttribute (APTCA) although you should do so with caution.

    For more information about APTCA, see "APTCA" in Chapter 8, "Code Access Security in Practice."

  • The assembly is designed to be shared among multiple applications.

    In this case, the assembly should be installed in the global assembly cache. This requires a strong name. The global assembly cache supports side-by-side versioning which allows different applications to bind to different versions of the same assembly.

  • You want to use the strong name as security evidence.

    The public key portion of the strong name gives cryptographically strong evidence for code access security. You can use the strong name to uniquely identify the assembly when you configure code access security policy to grant the assembly specific code access permissions. Other forms of cryptographically strong evidence include the Authenticode signature (if you have used X.509 certificates to sign the assembly) and an assembly's hash.

    Note   Authenticode evidence is not loaded by the ASP.NET host, which means you cannot use it to establish security policy for ASP.NET Web applications.

    For more information about evidence types and code access security, see Chapter 8, "Code Access Security in Practice."

Security Benefits of Strong Names

Strong names provide a number of security advantages in addition to versioning benefits:

  • Strong named assemblies are signed with a digital signature. This protects the assembly from modification. Any tampering causes the verification process that occurs at assembly load time to fail. An exception is generated and the assembly is not loaded.

  • Strong named assemblies cannot be called by partially trusted code, unless you specifically add AllowPartiallyTrustedCallersAttribute (APTCA).

    Note   If you do use APTCA, make sure you read Chapter 8, "Code Access Security in Practice," for additional guidelines to further improve the security of your assemblies.

  • Strong names provide cryptographically strong evidence for code access security policy evaluation. This allows administrators to grant permissions to specific assemblies. It also allows developers to use a StrongNameIdentityPermission to restrict which code can call a public member or derive from a non-sealed class.

    Note   In .NET 2.0, StrongNameIdentityPermission only works for partial trust callers. Any demand, including a link demand, will always succeed for full trust callers regardless of the strong name of the calling code.

Using Strong Names

The .NET Framework includes the Sn.exe utility to help you strong name assemblies. You do not need an X.509 certificate to add a strong name to an assembly.

To strong name an assembly

  1. Generate the key file in the assembly's project directory by using the following command.

    sn.exe -k keypair.snk
    
  2. Add an AssemblyKeyFile attribute to Assemblyinfo.cs to reference the generated key file, as shown in the following code sample.

    // The keypair file is usually placed in the project directory
    [assembly: AssemblyKeyFile(@"..\..\keypair.snk")]
    

    Note   In .NET 2.0, strong naming and key pair generation is also available through the Signing pane in the Project Designer of the Microsoft Visual Studio 2005 project. For more information, see "How to: Sign an Assembly (Visual Studio)."

Delay Signing

It is good security practice to delay sign your assemblies during application development. This results in the public key being placed in the assembly, which means that it is available as evidence to code access security policy, but the assembly is not signed, and as a result is not yet tamper proof. From a security perspective, delay signing has two main advantages:

  • The private key used to sign the assembly and create its digital signature is held securely in a central location. The key is only accessible by a few trusted personnel. As a result, the chance of the private key being compromised is significantly reduced.
  • A single public key, which can be used to represent the development organization or publisher of the software, is used by all members of the development team, instead of each developer using his or her own public/private key pair, typically generated with the Sn.exe utility.

To create a public key file for delay signing

This procedure is performed by the signing authority to create a public key file that developers can use to delay sign their assemblies.

  1. Create a key pair for your organization.

    sn.exe -k keypair.snk
    
  2. Extract the public key from the key pair file.

    sn -p keypair.snk publickey.snk
    
  3. Secure Keypair.snk, which contains both the private and public keys. For example, put it on a floppy or CD and physically secure it.

  4. Make Publickey.snk available to all developers. For example, put it on a network share.

To delay sign an assembly

This procedure is performed by developers.

Note   In .NET 2.0, delay signing is also available through the Signing pane in the Project Designer of the Visual Studio 2005 project. For more information, see "How to: Delay Sign an Assembly (Visual Studio)."

  1. Add an assembly level attribute to reference the key file that contains only the public key.

    // The keypair file is usually placed in the project directory
    [assembly: AssemblyKeyFile(@"..\..\publickey.snk")]
    
  2. Add the following attribute to indicate delay signing.

    [assembly: AssemblyDelaySign(true)]
    
  3. Because a delay-signed assembly does not yet have a signature, it will fail strong name verification at load time. To work around this, use the following commands on development and test computers.

    • To disable verification for a specific assembly, use the following command.

      sn -Vr assembly.dll
      
    • To disable verification for all assemblies with a particular public key, use the following command.

      sn -Vr *,publickeytoken
      
    • To extract the public key and key token (a truncated hash of the public key), use the following command.

      sn -Tp assembly.dll
      

      Note   Use a capital -T switch.

  4. To fully complete the signing process and create a digital signature to make the assembly tamper proof, execute the following command. This requires the private key and as a result the operation is normally performed as part of the formal build/release process.

    sn -R assembly.dll keypair.snk
    

ASP.NET and Strong Names

At the time of this writing, it is not possible to use a strong name for an ASP.NET Web page assembly because of the way it is dynamically compiled. Even if you use a code-behind file to create a precompiled assembly that contains your page class implementation code, ASP.NET dynamically creates and compiles a class that contains your page's visual elements. This class derives from your page class, which again means that you cannot use strong names.

Note   You can strong name any other assembly that is called by your Web page code, for example an assembly that contains resource access, data access or business logic code, although the assembly must be placed in the global assembly cache.

Note   In .NET 2.0, ASP.NET Web applications can be precompiled by using Visual Studio 2005 or the Aspnet_compiler.exe command line utility. Therefore, it is now possible to strong name ASP.NET applications. For more information, see "How to: Sign Assemblies for Precompiled Web Sites."

Global Assembly Cache Requirements

Any strong named assembly called by an ASP.NET Web application configured for partial trust should be installed in the global assembly cache. This is because the ASP.NET host loads all strong-named assemblies as domain-neutral.

The code of a domain-neutral assembly is shared by all application domains in the ASP.NET process. This creates problems if a single strong named assembly is used by multiple Web applications and each application grants it varying permissions or if the permission grant varies between application domain restarts. In this situation, you may see the following error message: "Assembly <assembly>.dll security permission grant set is incompatible between appdomains."

To avoid this error, you must place strong named assemblies in the global assembly cache and not in the application's private \bin directory.

Authenticode vs. Strong Names

Authenticode and strong names provide two different ways to digitally sign an assembly. Authenticode enables you to sign an assembly using an X.509 certificate. To do so, you use the Signcode.exe utility, which adds the public key part of a full X.509 certificate to the assembly. This ensures trust through certificate chains and certificate authorities. With Authenticode (unlike strong names), the implementation of publisher trust is complex and involves network communication during the verification of publisher identity.

Authenticode signatures and strong names were developed to solve separate problems and you should not confuse them. Specifically:

  • A strong name uniquely identifies an assembly.

  • An Authenticode signature uniquely identifies a code publisher.

    Authenticode signatures should be used for mobile code, such as controls and executables downloaded via Internet Explorer, to provide publisher trust and integrity.

You can configure code access security (CAS) policy using both strong names and Authenticode signatures in order to grant permissions to specific assemblies. However, the Publisher evidence object, obtained from an Authenticode signature, is only created by the Internet Explorer host and not by the ASP.NET host. Therefore, on the server side, you cannot use an Authenticode signature to identify a specific assembly (through a code group). Use strong names instead.

For more information about CAS, CAS policy and code groups, see Chapter 8, "Code Access Security in Practice."

Table 7.1 compares the features of strong names and Authenticode signatures.

Table 7.1   A Comparison of Strong Names and Authenticode Signatures

Feature                                                  Strong Name        Authenticode
Unique identification of assembly                        Yes                No
Unique identification of publisher                       Not necessarily*   Yes
The public key of the publisher can be revoked           No                 Yes
Versioning                                               Yes                No
Namespace and type name uniqueness                       Yes                No
Integrity (checks assembly has not been tampered with)   Yes                Yes
Evidence used as input to CAS policy                     Yes                IE host — Yes; ASP.NET host — No
User input required for trust decision                   No                 Yes (pop-up dialog box)

* Not necessarily: depends on the assembly developer using a public key to represent the publisher.

Authorization

There are two types of authorization that you can use in your assemblies to control access to classes and class members:

  • Role-based authorization to authorize access based on user identity and role membership. When you use role-based authorization in assemblies that are part of an ASP.NET Web application or Web service, you authorize the identity that is represented by an IPrincipal object attached to the current Web request and available through Thread.CurrentPrincipal and HttpContext.Current.User. This identity is either the authenticated end user identity or the anonymous Internet user identity. For more information about using principal-based authorization in Web applications, see "Authorization" in Chapter 10, "Building Secure ASP.NET Pages and Controls." A minimal sketch of an imperative role check appears after this list.
  • Code access security to authorize calling code, based on evidence, such as an assembly's strong name or location. For more information, see the "Authorization" section in Chapter 8, "Code Access Security in Practice."
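
The following fragment is a minimal sketch of an imperative role check against the current principal; the Windows group name is illustrative.

using System.Security.Principal;
using System.Threading;

IPrincipal principal = Thread.CurrentPrincipal;
// Authorize only members of the illustrative Windows group.
if (principal != null && principal.IsInRole(@"DomainName\WindowsGroup"))
{
  // The caller is authorized; perform the restricted operation.
}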

Exception Management

Do not reveal implementation details about your application in exception messages returned to the client. This information can help malicious users plan attacks on your application. To provide proper exception management:

  • Use structured exception handling.
  • Do not log sensitive data.
  • Do not reveal system or sensitive application information.
  • Consider exception filter issues.
  • Consider an exception management framework.

Use Structured Exception Handling

Microsoft Visual C# and Microsoft Visual Basic .NET provide structured exception handling constructs. C# provides the try / catch and finally construct. Protect code by placing it inside try blocks and implement catch blocks to log and process exceptions. Also use the finally construct to ensure that critical system resources such as connections are closed irrespective of whether an exception condition occurs.

try
{
   // Code that could throw an exception
}
catch (SomeExceptionType ex)
{
   // Code to handle the exception and log details to aid
   // problem diagnosis
}
finally
{
   // This code is always run, regardless of whether or not
   // an exception occurred. Place clean up code in finally
   // blocks to ensure that resources are closed and/or released.
}

Use structured exception handling instead of returning error codes from methods because it is easy to forget to check a return code and as a result fail to an insecure mode.

Do Not Log Sensitive Data

The rich exception details included in Exception objects are valuable to developers and attackers alike. Log details on the server by writing them to the event log to aid problem diagnosis. Avoid logging sensitive or private data such as user passwords. Also make sure that exception details are not allowed to propagate beyond the application boundary to the client as described in the next topic.

Do Not Reveal Sensitive System or Application Information

Do not reveal too much information to the caller. Exception details can include operating system and .NET Framework version numbers, method names, computer names, SQL command statements, connection strings, and other details that are very useful to attackers. Log detailed error messages at the server and return generic error messages to the end user.

In the context of an ASP.NET Web application or Web service, this can be done with the appropriate configuration of the <customErrors> element. For more information, see Chapter 10, "Building Secure ASP.NET Pages and Controls."
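
For example, library code can log the rich exception details on the server and surface only a generic error, as in the following sketch. The event source name is illustrative and must be registered before use.

using System;
using System.Diagnostics;

try
{
   // Code that could throw an exception
}
catch (Exception ex)
{
   // Log full details on the server to aid diagnosis...
   EventLog.WriteEntry("MyAppSource", ex.ToString(), EventLogEntryType.Error);
   // ...but return only a generic error to the caller.
   throw new ApplicationException("An error occurred processing your request.");
}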

Consider Exception Filter Issues

If your code uses exception filters, your code is potentially vulnerable to security issues because code in a filter higher up the call stack can run before code in a finally block. Make sure you do not rely on state changes in the finally block because the state change will not occur before the exception filter executes. For example, consider the following code:

// Place this code into a C# class library project
public class SomeClass
{
  public void SomeMethod()
  {
    try
    {
      // (1) Generate an exception
      Console.WriteLine("1> About to encounter an exception condition");
      // Simulate an exception
      throw new Exception("Some Exception");
    }
    // (3) The finally block
    finally
    {
      Console.WriteLine("3> Finally");
    }
  }
}

// Place this code into a Visual Basic.NET console application project and
// reference the above class library code
Sub Main()
    Dim c As New SomeClass
    Try
        c.SomeMethod()
    Catch ex As Exception When Filter()
        ' (4) The exception is handled
        Console.WriteLine("4> Main: Catch ex as Exception")
    End Try
End Sub

' (2) The exception filter
Public Function Filter() As Boolean
    ' Malicious code could do something here if you are relying on a state
    ' change in the Finally block in SomeClass in order to provide security
    Console.WriteLine("2> Filter")
    Return True ' Indicate that the exception is handled
End Function

In the above example, Visual Basic .NET is used to call the C# class library code because Visual Basic .NET supports exception filters, unlike C#.

If you create two projects and then run the code, the output produced is shown below:

1> About to encounter an exception condition
2> Filter
3> Finally
4> Main: Catch ex as Exception

From this output, you can see that the exception filter executes before the code in the finally block. If your code sets state that affects a security decision in the finally block, malicious code that calls your code could add an exception filter to exploit this vulnerability.

A solution to this issue is to use a catch block in your SomeMethod method to handle exceptions and to prevent exceptions from propagating. Preventing the exception from propagating from the catch block ensures that exception filter code higher in the call stack does not execute.
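
For example, SomeMethod can be rewritten as shown in the following sketch so that the exception never propagates and no filter higher in the call stack runs.

public void SomeMethod()
{
  try
  {
    Console.WriteLine("1> About to encounter an exception condition");
    throw new Exception("Some Exception");
  }
  catch (Exception ex)
  {
    // Handle the exception here. Because it does not propagate,
    // exception filters in calling code never execute.
    Console.WriteLine("Handled: " + ex.Message);
  }
  finally
  {
    Console.WriteLine("3> Finally");
  }
}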

Consider an Exception Management Framework

A formalized exception management system can help improve system supportability and maintainability and ensure that you detect, log, and process exceptions in a consistent manner.

For information about how to create an exception management framework and about best practice exception management for .NET applications, see "Exception Management in .NET" in the MSDN Library at https://msdn.microsoft.com/en-us/library/ms954599.aspx.

File I/O

Canonicalization issues are a major concern for code that accesses the file system. If you have the choice, do not base security decisions on input file names because of the many ways that a single file name can be represented. If your code needs to access a file using a user-supplied file name, take steps to ensure your assembly cannot be used by a malicious user to gain access to or overwrite sensitive data.

The following recommendations help you improve the security of your file I/O:

  • Avoid untrusted input for file names.
  • Do not trust environment variables.
  • Validate input file names.
  • Constrain file I/O within your application's context.

Avoid Untrusted Input for File Names

Avoid writing code that accepts file or path input from the caller and instead use fixed file names and locations when reading and writing data. This ensures your code cannot be coerced into accessing arbitrary files.

Do Not Trust Environment Variables

Try to use absolute file paths where you can. Do not trust environment variables to construct file paths because you cannot guarantee the value of the environment variable.

Validate Input File Names

If you do need to receive input file names from the caller, make sure that the filename is strictly formed so that you can determine whether it is valid. Specifically, there are two aspects to validating input file paths. You need to:

  • Check for valid file system names.
  • Check for a valid location, as defined by your application's context. For example, are they within the directory hierarchy of your application?

To validate the path and file name, use the System.IO.Path.GetFullPath method as shown in the following code sample. This method also canonicalizes the supplied file name.

using System.IO;

public static string ReadFile(string filename)
{
  // Obtain a canonicalized and valid filename
  string name = Path.GetFullPath(filename);
  // Now open the file
}

As part of the canonicalization process, GetFullPath performs the following checks:

  • It checks that the file name does not contain any invalid characters, as defined by Path.InvalidPathChars.
  • It checks that the file name represents a file and not another device type, such as a physical drive, a named pipe, a mailslot, or a DOS device such as LPT1, COM1, or AUX.
  • It checks that the combined path and file name is not too long.
  • It removes redundant characters such as trailing dots.
  • It rejects file names that use the \\?\ format.

Constrain File I/O Within Your Application's Context

After you know you have a valid file system file name, you often need to check that it is valid in your application's context. For example, you may need to check that it is within the directory hierarchy of your application and to make sure your code cannot access arbitrary files on the file system. For more information about how to use code access security to constrain file I/O, see "File I/O" in Chapter 8, "Code Access Security in Practice."
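
The following sketch combines canonicalization with a location check; the application data directory shown is an assumption, so substitute your own root.

using System;
using System.IO;

public static string ResolveInApplicationDirectory(string filename)
{
  // Assumed application data directory.
  string appDir = Path.GetFullPath(@"C:\MyApp\Data");
  // Canonicalize the combined path to collapse any ..\ sequences.
  string fullPath = Path.GetFullPath(Path.Combine(appDir, filename));
  // Reject names that resolve outside the application directory.
  if (!fullPath.ToLower().StartsWith(appDir.ToLower()))
    throw new ArgumentException("Invalid file name: " + filename);
  return fullPath;
}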

Event Log

When you write event-logging code, consider the threats of tampering and information disclosure. For example, can an attacker retrieve sensitive data by accessing the event logs? Can an attacker cover tracks by deleting the logs or erasing particular records?

Direct access to the event logs using system administration tools such as the Event Viewer is restricted by Windows security. Your main concern should be to ensure that the event logging code you write cannot be used by a malicious user for unauthorized access to the event log.

To prevent the disclosure of sensitive data, do not log it in the first place. For example, do not log account credentials. Also, your code cannot be exploited to read existing records or to delete event logs if all it does is write new records using EventLog.WriteEntry. The main threat to address in this instance is a malicious caller calling your code a million or so times to force a log file cycle that overwrites previous log entries to cover tracks. The best way to approach this problem is to use an out-of-band mechanism, for example, by using Windows instrumentation to alert operators as soon as the event log approaches its threshold.
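
For example, constrain your event logging code to writing new records, as in the following sketch; the event source name is illustrative and must be registered beforehand.

using System.Diagnostics;

public static void LogAuditEvent(string message)
{
  // Writes a new record only; this code cannot be used to read
  // or delete existing event log entries.
  EventLog.WriteEntry("MyAppSource", message, EventLogEntryType.Information);
}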

Finally, you can use code access security and the EventLogPermission to put specific constraints on what your code can do when it accesses the event log. For example, if you write code that only needs to read records from the event log, it should be constrained with an EventLogPermission that only supports browse access. For more information about how to constrain event logging code, see "Event Log" in Chapter 8, "Code Access Security in Practice."

Registry

The registry can provide a secure location for storing sensitive application configuration data, such as encrypted database connection strings. You can store configuration data under the single, local machine key (HKEY_LOCAL_MACHINE) or under the current user key (HKEY_CURRENT_USER). Either way, make sure you encrypt the data using DPAPI and store the encrypted data, not the clear text.

HKEY_LOCAL_MACHINE

If you store configuration data under HKEY_LOCAL_MACHINE, remember that any process on the local computer can potentially access the data. To restrict access, apply a restrictive access control list (ACL) to the specific registry key to limit access to administrators and your specific process or thread token. Using HKEY_LOCAL_MACHINE does, however, make configuration data easier to store at installation time and to maintain later on.

HKEY_CURRENT_USER

If your security requirements dictate an even less accessible storage solution, use a key under HKEY_CURRENT_USER. This approach means that you do not have to explicitly configure ACLs because access to the current user key is automatically restricted based on process identity.

HKEY_CURRENT_USER allows more restrictive access because a process can only access the current user key if the user profile associated with the current thread or process token is loaded.

The .NET Framework loads the user profile for the ASPNET account on Windows 2000. On Windows Server 2003, the profile for this account is only loaded if the ASP.NET process model is used. It is not loaded explicitly by Internet Information Services (IIS) 6 if the IIS 6 process model is used on Windows Server 2003.

Reading from the Registry

The following code fragment shows how to read an encrypted database connection string from under the HKEY_CURRENT_USER key using the Microsoft.Win32.Registry class.

using Microsoft.Win32;
public static string GetEncryptedConnectionString()
{
  return (string)Registry.
                 CurrentUser.
                 OpenSubKey(@"SOFTWARE\YourApp").
                 GetValue("connectionString");
}

For more information about how to use the code access security RegistryPermission to constrain registry access code, for example, to limit it to specific keys, see "Registry" in Chapter 8, "Code Access Security in Practice."

Data Access

Two of the most important factors to consider when your code accesses a database are how to manage database connection strings securely and how to construct SQL statements and validate input to prevent SQL injection attacks. Also, when you write data access code, consider the permission requirements of your chosen ADO.NET data provider. For detailed information about these and other data access issues, see Chapter 14, "Building Secure Data Access."

For information about how to use SqlClientPermission to constrain data access to SQL Server using the ADO.NET SQL Server data provider, see "Data Access" in Chapter 8, "Code Access Security in Practice."

Unmanaged Code

If you have existing COM components or Win32 DLLs that you want to reuse, use the Platform Invocation Services (P/Invoke) or COM Interop layers. When you call unmanaged code, it is vital that your managed code validates each input parameter passed to the unmanaged API to guard against potential buffer overflows. Also, be careful when handling output parameters passed back from the unmanaged API.

You should isolate calls to unmanaged code in a separate wrapper assembly. This allows you to sandbox the highly privileged code and to isolate the code access security permission requirements to a specific assembly. For more details about sandboxing and about additional code access security related guidelines that you should apply when calling unmanaged code, see "Unmanaged Code" in Chapter 8, "Code Access Security in Practice." The following recommendations help improve the security of your unmanaged API calls, without using explicit code access security coding techniques:

  • Validate input and output string parameters.
  • Validate array bounds.
  • Check file path lengths.
  • Compile unmanaged code with the /GS switch.
  • Inspect unmanaged code for "dangerous" APIs.

Validate Input and Output String Parameters

String parameters passed to unmanaged APIs are a prime source of buffer overflows. Check the length of any input string inside your wrapper code to ensure it does not exceed the limit defined by the unmanaged API. If the unmanaged API accepts a character pointer you may not know the maximum permitted string length, unless you have access to the unmanaged source. For example, the following is a common vulnerability.

void SomeFunction( char *pszInput )
{
  char szBuffer[10];
  // Look out, no length checks. Input is copied straight into the buffer
  // Check length or use strncpy
  strcpy(szBuffer, pszInput);
  . . .
}

If you cannot examine the unmanaged code because you do not own it, make sure that you rigorously test the API by passing in deliberately long input strings.

If your code uses a StringBuilder to receive a string passed from an unmanaged API, make sure that it can hold the longest string that the unmanaged API can hand back.
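
The following sketch shows a managed wrapper that validates string length before making the call; the DLL name, entry point, and buffer limit are assumptions that correspond to the vulnerable C function shown above.

using System;
using System.Runtime.InteropServices;

public sealed class NativeWrapper
{
  // Assumed limit of the unmanaged buffer (10 bytes including the null terminator).
  private const int MaxInputLength = 9;

  // Hypothetical unmanaged function.
  [DllImport("some.dll", CharSet=CharSet.Ansi)]
  private static extern void SomeFunction(string pszInput);

  public static void CallSomeFunction(string input)
  {
    // Reject input that would overflow the unmanaged buffer.
    if (input == null || input.Length > MaxInputLength)
      throw new ArgumentException("Input exceeds the length supported by the unmanaged API.");
    SomeFunction(input);
  }
}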

Validate Array Bounds

If you pass input to an unmanaged API using an array, check that the managed wrapper verifies that the capacity of the array is not exceeded.

Check File Path Lengths

If the unmanaged API accepts a file name and path, check that it does not exceed 260 characters. This limit is defined by the Win32 MAX_PATH constant. It is very common for unmanaged code to allocate buffers of this length to manipulate file paths.

Note   Directory names and registry keys can only be a maximum of 248 characters long.

Compile Unmanaged Code With the /GS Switch

If you own the unmanaged code, compile it using the /GS switch to enable the compiler's security checks, which detect some stack-based buffer overruns at run time. For more information about the /GS switch, see Microsoft Knowledge Base article 325483, "WebCast: Compiler Security Checks: The -GS compiler switch."

Inspect Unmanaged Code for Dangerous APIs

If you have access to the source code for the unmanaged code that you are calling, you should subject it to a thorough code review, paying particular attention to parameter handling to ensure that buffer overflows are not possible and that it does not use potentially dangerous APIs. For more information, see Chapter 21, "Code Review."

Delegates

Delegates are the managed equivalent of type safe function pointers and are used by the .NET Framework to support events. The delegate object maintains a reference to a method, which is called when the delegate is invoked. Events allow multiple methods to be registered as event handlers. When the event occurs, all event handlers are called.

Do Not Accept Delegates from Untrusted Sources

If your assembly exposes a delegate or an event, be aware that any code can associate a method with the delegate and you have no advance knowledge of what the code does. The safest policy is not to accept delegates from untrusted callers. If your assembly is strong named and does not include the AllowPartiallyTrustedCallersAttribute, only Full Trust callers can pass you a delegate.

If your assembly supports partial trust callers, consider the additional threat of being passed a delegate by malicious code. For risk mitigation techniques to address this threat, see the "Delegates" section in Chapter 8, "Code Access Security in Practice."

Serialization

You may need to add serialization support to a class if you need to be able to marshal it by value across a .NET remoting boundary (that is, across application domains, processes, or computers) or if you want to be able to persist the object state to create a flat data stream, perhaps for storage on the file system.

By default, classes cannot be serialized. A class can be serialized if it is marked with the SerializableAttribute or if it implements ISerializable. If you use serialization:

  • Do not serialize sensitive data.
  • Validate serialized data streams.

Do Not Serialize Sensitive Data

Ideally, if your class contains sensitive data, do not support serialization. If you must be able to serialize your class and it contains sensitive data, avoid serializing the fields that contain the sensitive data. To do this, either implement ISerializable to control the serialization behavior or decorate fields that contain sensitive data with the [NonSerialized] attribute. By default, all private and public fields are serialized.

The following example shows how to use the [NonSerialized] attribute to ensure a specific field that contains sensitive data cannot be serialized.

[Serializable]
public class Employee {
  // OK for name to be serialized
  private string name;
  // Prevent salary being serialized
  [NonSerialized] private double annualSalary;
  . . .
}

Alternatively, implement the ISerializable interface and explicitly control the serialization process. If you must serialize the sensitive item or items of data, consider encrypting the data first. The code that de-serializes your object must have access to the decryption key.
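
The following sketch, based on the Employee class above, shows the ISerializable approach; the salary field is deliberately never written to the serialization stream.

using System;
using System.Runtime.Serialization;

[Serializable]
public class Employee : ISerializable
{
  private string name;
  private double annualSalary;   // sensitive; never serialized

  public Employee(string name, double annualSalary)
  {
    this.name = name;
    this.annualSalary = annualSalary;
  }

  // Deserialization constructor; the salary is not restored from the stream.
  protected Employee(SerializationInfo info, StreamingContext context)
  {
    name = info.GetString("name");
  }

  public void GetObjectData(SerializationInfo info, StreamingContext context)
  {
    info.AddValue("name", name);
    // annualSalary is deliberately omitted.
  }
}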

Validate Serialized Data Streams

When you create an object instance from a serialized data stream, do not assume the stream contains valid data. To avoid potentially damaging data being injected into the object, validate each field as it is reconstituted as shown in the following code sample.

public void DeserializationMethod(SerializationInfo info, StreamingContext cntx)
{
  string someData = info.GetString("someName");
  // Use input validation techniques to validate this data.
}

For more information about input validation techniques, see "Input Validation" in Chapter 10, "Building Secure ASP.NET Pages and Controls."

Partial Trust Considerations

If your code supports partial trust callers, you need to address additional threats. For example, malicious code might pass a serialized data stream or it might attempt to serialize the data on your object. For risk mitigation techniques to address these threats, see "Serialization" in Chapter 8, "Code Access Security in Practice."

Threading

Bugs caused by race conditions in multithreaded code can result in security vulnerabilities and generally unstable code that is subject to timing-related bugs. If you develop multithreaded assemblies, consider the following recommendations:

  • Do not cache the results of security checks.
  • Consider impersonation tokens.
  • Synchronize static class constructors.
  • Synchronize Dispose methods.

Do Not Cache the Results of Security Checks

If your multithreaded code caches the results of a security check, perhaps in a static variable, the code is potentially vulnerable as shown in the following code sample.

   public void AccessSecureResource()
   {
     _callerOK = PerformSecurityDemand();
     OpenAndWorkWithResource();
     _callerOK = false;
   }
   private void OpenAndWorkWithResource()
   {
     if (_callerOK)
       PerformTrustedOperation();
     else
     {
       PerformSecurityDemand();
       PerformTrustedOperation();
     }
   }

If there are other paths to OpenAndWorkWithResource, and a separate thread calls the method on the same object, it is possible for the second thread to omit the security demand, because it sees _callerOK=true, set by another thread.
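
The simplest fix is not to cache the result at all and to demand on every call path, as in the following sketch based on the example above.

private void OpenAndWorkWithResource()
{
  // Demand on every call instead of relying on state that
  // another thread may have set.
  PerformSecurityDemand();
  PerformTrustedOperation();
}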

Consider Impersonation Tokens

When you create a new thread, it assumes the security context defined by the process level token. If a parent thread is impersonating while it creates a new thread, the impersonation token is not passed to the new thread.

Note   In .NET 2.0, by default, the impersonation token still does not flow across threads. However, for ASP.NET applications, you can change this default behavior by configuring the ASPNET.config file in the %Windir%\Microsoft.NET\Framework\{Version} directory. For more information, see the "Threading" section in "Security Guidelines: .NET Framework 2.0."

Synchronize Static Class Constructors

If you use static class constructors, make sure they are not vulnerable to race conditions. If, for example, they manipulate static state, add thread synchronization to avoid potential vulnerabilities.

Synchronize Dispose Methods

If you develop non-synchronized Dispose implementations, the Dispose code may be called more than once on separate threads. The following code sample shows an example of this.

void Dispose()
{
  if (null != _theObject)
  {
    ReleaseResources(_theObject);
    _theObject = null;
  }
}

In this example, it is possible for two threads to execute the code before the first thread has set the _theObject reference to null. Depending on the functionality provided by the ReleaseResources method, security vulnerabilities may occur.
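
One way to synchronize Dispose is to swap the field atomically so that only one thread releases the resource. The following sketch assumes _theObject is declared as object.

using System.Threading;

void Dispose()
{
  // Atomically swap the field to null; only the first thread through
  // obtains the non-null reference and releases the resource.
  object obj = Interlocked.Exchange(ref _theObject, null);
  if (obj != null)
  {
    ReleaseResources(obj);
  }
}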

Reflection

With reflection, you can dynamically load assemblies, discover information about types, and execute code. You can also obtain a reference to an object and get or set its private members. This has a number of security implications:

  • If your code uses reflection to reflect on other types, make sure that only trusted code can call you. Use code access security permission demands to authorize calling code. For more information, see Chapter 8, "Code Access Security in Practice."
  • If you dynamically load assemblies, for example, by using System.Reflection.Assembly.Load, do not use assembly or type names passed to you from untrusted sources.
  • If your assemblies dynamically generate code to perform operations for a caller, make sure the caller is in no way able to influence the code that is generated. This issue is more significant if the caller operates at a lower trust level than the assembly that generates code.
  • If your code generation relies on input from the caller, be especially vigilant for security vulnerabilities. Validate any input string used as a string literal in your generated code and escape quotation mark characters to make sure the caller cannot break out of the literal and inject code. In general, if there is a way that the caller can influence the code generation such that it fails to compile, there is a probable security vulnerability.

For more information, see "Secure Coding Guidelines for the .NET Framework" in the MSDN Library.

Obfuscation

If you are concerned with protecting intellectual property, you can use an obfuscation tool to make it extremely difficult for a decompiler to recover source code from the MSIL in your assemblies. An obfuscation tool confuses human interpretation of the MSIL instructions and helps prevent successful decompilation.

Obfuscation is not foolproof and you should not build security solutions that rely on it. However, obfuscation does address threats that occur because of the ability to reverse engineer code. Obfuscation tools generally provide the following benefits:

  • They help protect your intellectual property.
  • They obscure code paths. This makes it harder for an attacker to crack security logic.
  • They mangle the names of internal member variables. This makes it harder to understand the code.
  • They encrypt strings. Attackers often attempt to search for specific strings to locate key sensitive logic. String encryption makes this much harder to do.

A number of third-party obfuscation tools exist for the .NET Framework. One tool, the Community Edition of the Dotfuscator tool by PreEmptive Solutions, is included with the Visual Studio .NET development system. It is also available from http://www.preemptive.com/dotfuscator.html. For more information, see the list of obfuscator tools at https://msdn.microsoft.com/en-us/vcsharp/aa336818.aspx.

Cryptography

Cryptography is one of the most important tools that you can use to protect data. Encryption can be used to provide data privacy; hash algorithms, which produce a fixed, condensed representation of data, can be used to detect tampering; and digital signatures can be used for authentication purposes.

You should use encryption when you want data to be secure in transit or in storage. Some encryption algorithms perform better than others, and some provide stronger encryption. Typically, larger encryption key sizes increase security.

Two of the most common mistakes made when using cryptography are developing your own encryption algorithms and failing to secure your encryption keys. Encryption keys must be handled with care. An attacker armed with your encryption key can gain access to your encrypted data.

The main issues to consider are:

  • Use platform-provided cryptographic services
  • Key generation
  • Key storage
  • Key exchange
  • Key maintenance

Use Platform-provided Cryptographic Services

Do not create your own cryptographic implementations. It is extremely unlikely that these implementations will be as secure as the industry standard algorithms provided by the platform; that is, the operating system and the .NET Framework. Managed code should use the algorithms provided by the System.Security.Cryptography namespace for encryption, decryption, hashing, random number generating, and digital signatures.

Many of the types in this namespace wrap the operating system CryptoAPI, while others implement algorithms in managed code.
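
For example, the following minimal sketch encrypts a buffer with the platform-provided Triple DES implementation; key and iv are assumed to have been generated as described under "Key Generation" below.

using System.Security.Cryptography;
. . .
byte[] Encrypt(byte[] plaintext, byte[] key, byte[] iv)
{
  // Use the platform implementation rather than a custom algorithm.
  TripleDESCryptoServiceProvider tdes = new TripleDESCryptoServiceProvider();
  ICryptoTransform encryptor = tdes.CreateEncryptor(key, iv);
  byte[] ciphertext = encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length);
  tdes.Clear(); // scrub the key material held by the algorithm instance
  return ciphertext;
}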

Key Generation

The following recommendations apply when you create encryption keys:

  • Generate random keys.
  • Use PasswordDeriveBytes for password-based encryption.
  • Prefer large keys.

Generate Random Keys

If you need to generate encryption keys programmatically, use RNGCryptoServiceProvider to create keys and initialization vectors; do not use the Random class. Unlike the Random class, RNGCryptoServiceProvider creates cryptographically strong random numbers that are FIPS-140 compliant. The following code shows how to use this class.

using System.Security.Cryptography;
. . .
int keySize = 16; // key length in bytes; for example, 16 bytes for a 128-bit key
RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
byte[] key = new byte[keySize];
rng.GetBytes(key); // fills the buffer with cryptographically strong random bytes

Use PasswordDeriveBytes for Password-Based Encryption

The System.Security.Cryptography namespace provides the PasswordDeriveBytes class (derived from DeriveBytes) for use when encrypting data based on a password the user supplies. To decrypt, the user must supply the same password that was used to encrypt.

Note that this approach is not for password authentication. To authenticate a user's password, store a password verifier in the form of a hash value with a salt value. Use PasswordDeriveBytes only to generate keys for password-based encryption.

PasswordDeriveBytes accepts a password, salt, an encryption algorithm, a hashing algorithm, key size (in bits), and initialization vector data to create a symmetric key to be used for encryption.
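
A minimal usage sketch follows; password, salt, and iv are assumed to have been obtained elsewhere (generate salt and iv with RNGCryptoServiceProvider, as shown earlier).

using System.Security.Cryptography;
. . .
PasswordDeriveBytes pdb = new PasswordDeriveBytes(password, salt);
// Derive a 192-bit Triple DES key, hashing with SHA-1.
byte[] key = pdb.CryptDeriveKey("TripleDES", "SHA1", 192, iv);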

Note   Although .NET 2.0 still supports PasswordDeriveBytes for backward compatibility with .NET 1.1, you should use Rfc2898DeriveBytes. It supports the RSA Password-Based Key Derivation Function version 2 (PBKDF2), which is an improved version of the PBKDF1 standard implementation used by PasswordDeriveBytes. For more information, see ".NET Framework Class Library - Rfc2898DeriveBytes Class."

After the key is used to encrypt the data, clear it from memory but persist the salt and initialization vector. These values should be protected and are needed to re-generate the key for decryption.

For more information about storing password hashes with salt, see Chapter 14, "Building Secure Data Access."

Prefer Large Keys

When generating an encryption key or key pair, use the largest key size possible for the algorithm. This does not necessarily make the algorithm more secure but dramatically increases the time needed to successfully perform a brute force attack on the key. The following code shows how to find the largest supported key size for a particular algorithm.

private int GetLargestSymKeySize(SymmetricAlgorithm symAlg)
{
  // LegalKeySizes lists the key sizes the algorithm supports;
  // the last element contains the largest maximum.
  KeySizes[] sizes = symAlg.LegalKeySizes;
  return sizes[sizes.Length - 1].MaxSize;
}

private int GetLargestAsymKeySize(AsymmetricAlgorithm asymAlg)
{
  KeySizes[] sizes = asymAlg.LegalKeySizes;
  return sizes[sizes.Length - 1].MaxSize;
}
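
A brief usage sketch, assuming Triple DES as the algorithm:

TripleDES tdes = TripleDES.Create();
tdes.KeySize = GetLargestSymKeySize(tdes); // 192 bits for Triple DES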

Key Storage

Where possible, you should use a platform-provided encryption solution that enables you to avoid key management in your application. However, at times you need to use encryption solutions that require you to store keys. Using a secure location to store the key is critical. Use the following techniques to help prevent key storage vulnerabilities:

  • Use DPAPI to avoid key management.
  • Do not store keys in code.
  • Restrict access to persisted keys.

Use DPAPI to Avoid Key Management

DPAPI is a native encryption/decryption feature provided by Microsoft Windows 2000 and later operating systems. One of the main advantages of using DPAPI is that the encryption key is managed by the operating system, because the key is derived from the password that is associated with the process account (or thread account if the thread is impersonating) that calls the DPAPI functions.

Note   In .NET 2.0, you no longer need to use P/Invoke. Instead, you can use the new ProtectedData class, which contains two static methods: Protect and Unprotect. For more information, see ".NET Framework Class Library - ProtectedData Class."
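
A minimal .NET 2.0 sketch using ProtectedData follows; plaintext is the data to protect, and the optional entropy parameter is passed as null here (see the discussion of entropy below).

using System.Security.Cryptography;
. . .
// Encrypt with the machine key; any account on this computer can decrypt.
byte[] ciphertext = ProtectedData.Protect(plaintext, null, DataProtectionScope.LocalMachine);
byte[] recovered  = ProtectedData.Unprotect(ciphertext, null, DataProtectionScope.LocalMachine);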

User Key vs. Machine Key

You can perform encryption with DPAPI using either the user key or the machine key. By default, DPAPI uses a user key. This means that only a thread that runs under the security context of the user account that encrypted the data can decrypt the data. You can instruct DPAPI to use the machine key by passing the CRYPTPROTECT_LOCAL_MACHINE flag to the CryptProtectData API. In this event, any user on the current computer can decrypt the data.

The user key option can be used only if the account used to perform the encryption has a loaded user profile. If you run code in an environment where the user profile is not loaded, you cannot easily use the user store and should opt for the machine store instead.

The .NET Framework loads the user profile for the ASPNET account on Windows 2000. On Windows Server 2003, the profile for this account is loaded only if the ASP.NET process model is used; it is not loaded when the IIS 6 process model is used.

If you use the machine key option, you should use an ACL to secure the encrypted data, for example, in a registry key, so that you limit which users have access to the encrypted data. For added security, you should also pass an optional entropy value to the DPAPI functions.

Note   An entropy value is an additional random value that can be passed to the DPAPI CryptProtectData and CryptUnprotectData functions. The same value that is used to encrypt the data must be used to decrypt the data. The machine key option means that any user on the computer can decrypt the data. With added entropy, the user must also know the entropy value.

The drawback with using entropy is that you must manage the entropy value as you would manage a key. To avoid entropy management issues, use the machine store without entropy and validate users and code (using code access security) thoroughly before calling the DPAPI code.

For more information about using DPAPI from ASP.NET Web applications, see "How To: Create a DPAPI Library," in the How To section of "Building Secure ASP.NET Applications," at https://msdn.microsoft.com/en-us/library/aa302402.aspx.

Do Not Store Keys in Code

Do not store keys in code. Hard-coded keys in your compiled assembly can be recovered by using a disassembler such as Ildasm.exe, which will render your key in plaintext.

Restrict Access to Persisted Keys

When storing keys in persistent storage to be used at runtime, use appropriate ACLs and limit access to the key. Access to the key should be granted only to Administrators, SYSTEM, and the identity of the code at runtime, for example the ASPNET or Network Service account.

When backing up a key, do not store it in plain text; encrypt it by using DPAPI or a strong password, and place it on removable media.

Key Exchange

Some applications require the secure exchange of encryption keys over an insecure network. You may need to verbally communicate the key or send it through secure e-mail. A more secure method to exchange a symmetric key is to use public key encryption. With this approach, you encrypt the symmetric key to be exchanged by using the other party's public key from a certificate that can be validated (a sketch follows the list below). A certificate is considered valid when:

  • It is being used within the date ranges as specified in the certificate.
  • All signatures in the certificate chain can be verified.
  • It is of the correct type. For example, an e-mail certificate is not being used as a Web server certificate.
  • It can be verified up to a trusted root authority.
  • It is not on a Certificate Revocation List (CRL) of the issuer.
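
The following is a minimal sketch of the public key approach, assuming you have already validated the other party's certificate and exported its RSA public key as XML (rsaPublicKeyXml is an assumed variable):

using System.Security.Cryptography;
. . .
RSACryptoServiceProvider rsa = new RSACryptoServiceProvider();
rsa.FromXmlString(rsaPublicKeyXml);                      // load the public key only
byte[] encryptedKey = rsa.Encrypt(symmetricKey, false);  // PKCS #1 v1.5 padding
// Transmit encryptedKey; only the holder of the matching private key can recover it.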

Key Maintenance

Security is dependent upon keeping the key secure over a prolonged period of time. Apply the following recommendations for key maintenance:

  • Cycle keys periodically.
  • Plan for key compromise.

Cycle Keys Periodically

You should change your encryption keys from time to time because a static secret is more likely to be discovered over time. Did you write it down somewhere? Did Bob the administrator with the secrets change positions in your company or leave the company? Are you using the same session key to encrypt communication for a long time? Do not overuse keys.

Key Compromise

Keys can be compromised in a number of ways. For example, you may lose the key or discover that an attacker has stolen or discovered the key.

If the private key you use for asymmetric encryption and key exchange is compromised, do not continue to use it, and notify the users of the public key that the key has been compromised. If you used the key to sign documents, they need to be re-signed.

If the private key of your certificate is compromised, contact the issuing certification authority to have your certificate placed on a certificate revocation list. Also, change the way your keys are stored to avoid a future compromise.

Summary

This chapter has shown you how to apply various techniques to improve the security of your managed code. The techniques in this chapter can be applied to all types of managed assemblies including Web pages, controls, utility libraries, and others. For recommendations that apply to specific types of assemblies, see the other "Building" chapters in Part III of this guide.

To further improve the security of your assemblies, you can use explicit code access security coding techniques, which are particularly important if your assemblies support partial trust callers. For more information about using code access security, see Chapter 8, "Code Access Security in Practice."

Additional Resources

For additional related reading, see the patterns & practices Developer Center.


© Microsoft Corporation. All rights reserved.