Security Question List: ASP.NET 2.0

 

Retired Content

This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.

patterns & practices Developer Center

J.D. Meier, Alex Mackman, Blaine Wastell, Prashant Bansode, Jason Taylor, Rudolph Araujo

Microsoft Corporation

October 2005

Applies To

  • ASP.NET 2.0

Summary

Use the questions in this module to help you perform security code reviews on ASP.NET 2.0 applications. Use this question list in conjunction with the module, "How To: Perform a Security Code Review for Managed Code (.NET Framework 2.0)."

Contents

How to Use This Module
What's New in 2.0
SQL Injection
Cross-Site Scripting
Input/Data Validation
Authentication
Forms Authentication
Authorization
Code Access Security
Exception Management
Impersonation
  • Sensitive Data
  • Data Access
Cryptography
Unsafe Code
Potentially Dangerous Unmanaged APIs
Auditing and Logging
Additional Resources

How To Use This Module

Use this module to conduct an effective code review for security. Each question category includes a table that matches vulnerabilities to implications, and a set of questions that you can use to determine if your application is susceptible to the listed vulnerabilities. A reference that matches vulnerabilities to questions can be found in the "Vulnerability/Question Matrix" section.

When you use this module, keep the following in mind:

What's New in 2.0

This section describes the most important changes in ASP.NET 2.0 that you should be aware of when you perform a security code review. The main changes include:

  • Details in configuration files, such as connection strings, can be encrypted by using the Aspnet_regiis tool.
  • There are new membership APIs.
  • There are new role APIs.

Use the following questions to make sure that the code uses the new ASP.NET 2.0 features properly:

  • Does the code ensure that the connection strings configuration section is encrypted?
  • Does the code ensure that membership providers use strong passwords?
  • Does the code ensure that the roles cookie is encrypted and checked for integrity?
  • Does the code ensure that the roles cookie has a limited lifetime?
  • Does the code persist role manager cookies?

Does the code ensure that the connection strings configuration section is encrypted?

When the code specifies the connection string in the connectionStrings configuration section, make sure that the connection strings configuration section is encrypted with the Aspnet_regiis.exe utility.

Because the connection string contains sensitive data, leaving it in plain-text format in configuration files makes your application vulnerable. For more information on encrypting the configuration section, see "How To: Encrypt Configuration Sections in ASP.NET 2.0 Using DPAPI" at https://msdn.microsoft.com/library/en-us/dnpag2/html/PAGHT000005.asp and "How To: Encrypt Configuration Sections in ASP.NET 2.0 Using RSA" at https://msdn.microsoft.com/library/en-us/dnpag2/html/PAGHT000006.asp.

Does the code ensure that membership providers use strong passwords?

The code should ensure that the membership provider uses a strong password policy by setting the minRequiredPasswordLength, minRequiredNonalphanumericCharacters, and passwordStrengthRegularExpression attributes.

Because using weak passwords makes your application vulnerable to brute force and dictionary attacks, it is important to ensure that your Membership feature uses strong passwords.

The application should not contain code similar to the following example.

<membership>
  <providers>
    <add minRequiredPasswordLength="1" .../>
  </providers>
</membership>
  

Instead, the application should contain code similar to the following.

<membership>
  <providers>
    <add minRequiredPasswordLength="7"
         minRequiredNonalphanumericCharacters="1"
         passwordStrengthRegularExpression="" ... />
  </providers>
</membership>
  

Note   By default, the password strength policy for SqlMembershipProvider and ActiveDirectoryMembershipProvider has a minimum length of 7 characters and 1 non-alphanumeric character.

Does the code ensure that the roles cookie is encrypted and checked for integrity?

Make sure that the roles cookie is encrypted and checked for integrity. The cookieProtection attribute should be set to "All", which is the default.

Cookies that are not encrypted and tamperproof make your application vulnerable to privilege escalation attacks through unauthorized modification of the role data.

The application should not contain code similar to the following example.

<system.web>
  <roleManager cookieProtection="None" ... />
</system.web>
  

Instead, the application should contain code similar to the following.

<system.web>
  <roleManager cookieProtection="All" ... />
</system.web>
  

Does the code ensure that the roles cookie has a limited lifetime?

Make sure that the roles cookie has a limited lifetime. A shorter lifetime reduces the window during which an attacker can use a stolen cookie to access the application.

The application should not contain code similar to the following example.

<system.web>
  <roleManager cookieTimeout="100" ... />
</system.web>
  

Instead, the application should contain code similar to the following.

<system.web>
  <roleManager cookieTimeout="10" ... />
</system.web>
  

Does the code persist role manager cookies?

Make sure that the roles cookie is not stored on the client by setting the createPersistentCookie attribute to false.

The code should not persist role manager cookies because they are stored in the user's profile and can be stolen if an attacker gains physical access to the user's computer. Role manager cookies can reveal sensitive information about your application's role structure that an attacker can exploit.

The application should not contain code similar to the following example.

<system.web>
  <roleManager createPersistentCookie="true" ... />
</system.web>
  

Instead, the application should contain code similar to the following.

<system.web>
  <roleManager createPersistentCookie="false" ... />
</system.web>
  

SQL Injection

Your code is vulnerable to SQL injection attacks wherever it uses input parameters to construct SQL statements. A SQL injection attack occurs when untrusted input can modify the logic of a SQL query in unexpected ways. As you review the code, make sure that any input that is used in a SQL query is validated or that the SQL queries are parameterized. Table 1 summarizes the SQL injection vulnerability and its implications.

Table 1: SQL Injection Vulnerabilities and Implications

Vulnerability: Non-validated input used to generate SQL queries
Implications: SQL injections can result in unauthorized access, modification, or destruction of SQL data.

The following questions can help you to identify vulnerable areas:

  • Is the application susceptible to SQL injection?
  • Does the code use parameterized stored procedures?
  • Does the code use parameters in SQL statements?
  • Does the code attempt to filter input?

Is the application susceptible to SQL injection?

Pay close attention to your data access code. Scan for the strings "SqlCommand", "OleDbCommand", or "OdbcCommand" to help identify data access code. Identify any input field that you use to form a SQL database query. Check that these fields are suitably validated for type, format, length, and range.
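
For example, if a page takes a numeric order ID that is later used in a query, a minimal validation sketch might look like the following (the txtOrderId field and the allowed range are illustrative, not part of the original guidance).

int orderId;
// Reject anything that is not an integer in the expected range before it
// reaches the data access code.
if (!Int32.TryParse(txtOrderId.Text, out orderId) || orderId < 1 || orderId > 999999)
{
  // Invalid input; do not build or execute a query with it.
  throw new ArgumentException("Invalid order ID");
}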

Does the code use parameterized stored procedures?

Check that your code uses parameterized stored procedures and typed parameter objects such as SqlParameter, OleDbParameter, or OdbcParameter. Stored procedures alone cannot prevent SQL injection attacks. The following example shows the use of a SqlParameter.

SqlDataAdapter myCommand = new SqlDataAdapter("spLogin", conn);
myCommand.SelectCommand.CommandType = CommandType.StoredProcedure;
SqlParameter parm = myCommand.SelectCommand.Parameters.Add(
                                "@userName", SqlDbType.VarChar, 12);
parm.Value = txtUid.Text;
  

The typed SQL parameter checks the type and length of the input, and it ensures that the userName input value is treated as a literal value and not as executable code in the database.

Does the code use parameters in SQL statements?

If the code does not use stored procedures, make sure that it uses parameters in the SQL statements it constructs, as shown in the following example.

select status from Users where UserName=@userName
  

Check that the code does not use the following approach, where the input is used directly to construct the executable SQL statement by using string concatenation.

string sql = "select status from Users where UserName='"
           + txtUserName.Text + "'";
  
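
For comparison, a minimal sketch of the parameterized equivalent (assuming the SqlConnection conn from the stored procedure example above and the txtUserName text box from the preceding example):

SqlCommand cmd = new SqlCommand(
    "SELECT status FROM Users WHERE UserName = @userName", conn);
// The typed parameter ensures that the input is treated as a literal value
// and not as executable SQL.
cmd.Parameters.Add("@userName", SqlDbType.VarChar, 12).Value = txtUserName.Text;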

Does the code attempt to filter input?

A common approach is to develop filter routines to add escape characters to characters that have special meaning to SQL. This is an unsafe approach, and developers should not rely on it because of character representation issues.

Cross-Site Scripting

Code is vulnerable to cross-site scripting attacks wherever it uses input parameters in the output HTML stream returned to the client. Even before you conduct a code review, you can run a simple test to determine if your application is vulnerable. Search for pages where user input information is sent back to the browser.

To perform this test, type text, such as XYZ, in form fields and test the output. If the browser displays XYZ or if you see XYZ when you view the HTML source, then your Web application is vulnerable to cross-site scripting. If you want to perform a more dynamic test, inject <script>alert('hello');</script>. This technique might not work in all cases because it depends on how the input is used to generate the output.

Table 2 summarizes cross-site scripting vulnerability and its implications.

Table 2: Cross-Site Scripting Vulnerability and Implications

Vulnerability: Unvalidated and untrusted input in the HTML output stream
Implications: Cross-site scripting can allow an attacker to execute a malicious script or steal a user's session and/or cookies.

The following questions can help you to identify vulnerable areas:

  • Does the code echo user input or URL parameters back to a Web page?
  • Does the code persist user input or URL parameters to a data store that could later be displayed on a Web page?

Does the code echo user input or URL parameters back to a Web page?

If you include user input or URL parameters in the HTML output stream, you might be vulnerable to cross-site scripting. Make sure that the code validates input and uses HtmlEncode or UrlEncode to encode output. Even if a malicious user cannot use your application's UI to access URL parameters, the attacker may still be able to tamper with them.

Reflective cross-site scripting is less dangerous than persistent cross-site scripting due to its transitory nature.

The application should not contain code similar to the following example.

Response.Write( Request.Form["name"] );
  

Instead, the application should contain code similar to the following.

Response.Write( HttpUtility.HtmlEncode( Request.Form["name"] ) );
  

Does the code persist user input or URL parameters to a data store that could later be displayed on a Web page?

If the code uses data binding or explicit database access to put user input or URL parameters in a persistent data store and then later includes this data in the HTML output stream, the application could be vulnerable to cross-site scripting. Check that the application validates input and uses HtmlEncode or UrlEncode to encode output. Pay particular attention to areas of the application that permit users to modify configuration or personalization settings. Also pay attention to persistent free-form user input, such as message boards, forums, discussions, and Web postings. Even if an attacker cannot use the application's UI to access URL parameters, a malicious user might still be able to tamper with them.

Persistent cross-site scripting is more dangerous than reflective cross-site scripting.
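
For example, data that was persisted earlier should be encoded when it is written back to the page. The following minimal sketch assumes that a stored comment has been read into storedComment and is displayed in a Label control named commentLabel (both names are illustrative).

// Encode at output time so that any markup in the stored value is rendered
// as harmless text rather than executed by the browser.
commentLabel.Text = HttpUtility.HtmlEncode(storedComment);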

Input/Data Validation

If you make unfounded assumptions about the type, length, format, or range of input, your application is unlikely to be robust. Input validation can become a security issue if an attacker discovers that you have made unfounded assumptions. The attacker can then supply carefully crafted input that compromises your application. Table 3 shows a set of common input and/or data validation vulnerabilities and their implications.

Table 3: Input/Data Validation Vulnerabilities and Implications

Vulnerability: Unvalidated and untrusted input in the HTML output stream
Implications: The application is susceptible to cross-site scripting attacks.

Vulnerability: Unvalidated input used to generate SQL queries
Implications: The application is susceptible to SQL injection attacks.

Vulnerability: Reliance on client-side validation
Implications: Client validation is easily bypassed.

Vulnerability: Use of input file names, URLs, or user names for security decisions
Implications: The application is susceptible to canonicalization issues, which can lead to security flaws.

Vulnerability: Application-only filters for malicious input
Implications: This is almost impossible to do correctly because of the enormous range of potentially malicious input. The application should constrain, reject, and sanitize input.

Use the following questions when you review your code's input and data validation:

  • Does the code validate data from all sources?
  • Does the code centralize its approach?
  • Does the code rely on client-side validation?
  • Does the code accept path or file-based input?
  • Does the code validate URLs?
  • Does the code use MapPath?

Does the code validate data from all sources?

Make sure that your code makes no assumptions about the validity of input data. Code should assume that input data is malicious. Use the following questions to guide your review.

  • Does the code validate form field input?

    The application should not contain code similar to the following example.

    <form id="WebForm" method="post" runat="server">
    <asp:TextBox id="txtName" runat="server"></asp:TextBox>
    </form>
    
    

    Instead, the text is validated using the RegularExpressionValidator control, as shown in the following example.

    <form id="WebForm" method="post" runat="server">
    <asp:TextBox id="txtName" runat="server" />
    <asp:RegularExpressionValidator 
       id="nameRegex"
       runat="server"
       ControlToValidate="txtName"
       ValidationExpression="^[a-zA-Z'.\s]{1,40}$"
       ErrorMessage="Invalid name" 
    /></form>
    
    
  • Does the code validate query string and cookie input?

    The application should use code similar to the following.

    // Request.QueryString
    // Instance method:
    Regex reg = new Regex(@"^[a-zA-Z'.\s]{1,40}$");
    Response.Write(reg.IsMatch(Request.QueryString.Get("Name")));
    
    // Static method:
    if (!Regex.IsMatch(Request.QueryString.Get("Name"),@"^[a-zA-Z'.\s]{1,40}$")) 
    {
      // Name does not match expression
    }
    
    // Request.Cookies
    // Instance method:
    Regex reg = new Regex(@"^[a-zA-Z'.\s]{1,40}$");
    Response.Write(reg.IsMatch(Request.Cookies.Get("Name")));
    
    // Static method:
    if (!Regex.IsMatch(Request.Cookies.Get("Name"),@"^[a-zA-Z'.\s]{1,40}$")) 
    {
      // Name does not match expression
    }
    
    
  • Does the code validate data that is retrieved from a database?

    Your application should validate this form of input, especially if other applications write to the database. Do not make assumptions about how thorough the input validation of the other application is.

    The application should not contain code similar to the following example.

    SqlConnection conn = new SqlConnection(connString);
    conn.Open();
    SqlCommand sqlCmd = new SqlCommand("SELECT CustomerID FROM Orders WHERE OrderID=1", conn);
    SqlDataReader dr = sqlCmd.ExecuteReader();
    dr.Read();
    string s = dr[0].ToString();
    int val = Int32.Parse(TextBox1.Text);
    
    

    Instead, the code should contain code similar to the following.

    SqlConnection conn = new SqlConnection(connString);
    conn.Open();
    SqlCommand sqlCmd = new SqlCommand("SELECT CustomerID FROM Orders WHERE OrderID=1", conn);
    SqlDataReader dr = sqlCmd.ExecuteReader();
    dr.Read();
    string s = dr[0].ToString();
    if (!(Int32.Parse(s) < 6))
         Response.Write("Throw an error");
    int val = Int32.Parse(TextBox1.Text);
    
    
  • Does the code validate Web method parameters?

    Web services are just as vulnerable as standard Web forms to input manipulation attacks like SQL injection. Make sure your code validates Web method parameters as shown here.

     [WebMethod]
    public decimal RetrieveAccountBalance(string accountId)
    {
      if (!Regex.IsMatch(accountId, @"^[a-zA-Z'.\s]{1,40}$")) 
      {
        // AccountID does not match the expression; do not process the request.
        throw new ArgumentException("Invalid account ID");
      }
      // Retrieve and return the account balance.
      ...
    }
    
    

Does the code centralize its approach?

For common types of input fields, examine whether the code uses shared validation and filtering libraries so that validation rules are applied consistently and have a single point of maintenance.
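
A minimal sketch of this approach (the class and method names are illustrative):

// Requires: using System.Text.RegularExpressions;
public static class InputValidator
{
  // Single place where the rule for a person's name is defined.
  private static readonly Regex NameRegex =
      new Regex(@"^[a-zA-Z'.\s]{1,40}$", RegexOptions.Compiled);

  public static bool IsValidName(string input)
  {
    return input != null && NameRegex.IsMatch(input);
  }
}

Pages can then call InputValidator.IsValidName(txtName.Text) instead of repeating the regular expression in each page.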

Does the code rely on client-side validation?

Client-side validation can reduce the number of round trips to the server, but do not rely on it for security because it is easy to bypass. Validate all input at the server.

It is easy to modify the behavior of the client or just write a new client that does not observe the same data validation rules. Consider the following example.

<html>
<head>
<script language='javascript'>
function validateAndSubmit(form)
{
   if(form.elements["path"].value.length > 0)
   {
      form.submit();
   }
}
</script>
</head>
<body>
<form action="Default.aspx" method="post">
<input type="text" id="path" name="path" />
<input type="button" onclick="javascript:validateAndSubmit(this.form)" value="Submit" />
</form>
</body>
</html>
  

In this example, client-side scripting validates that the length of the "path" is greater than zero. If the server processing of this value relies on this assumption to mitigate a security threat, then the attacker can easily break the system.

Does the code accept path or file-based input?

Determine whether your application uses names that are based on input to make security decisions. If it does, your code is susceptible to canonicalization issues. For example, does it accept user names, file names, or URLs? These are notorious for canonicalization issues because of the many ways that the names can be represented. If your application does accept names as input, make sure that they are validated and converted to their canonical representation before processing.

The application should use code similar to the following.

string fileName = Request.QueryString.Get("filename");      // for example, myFile.txt
string path = Request.QueryString.Get("path");              // for example, @"\mydir\"
// Convert the input to its canonical form before using it.
string fullPath = System.IO.Path.GetFullPath(path);
string fullFileName = System.IO.Path.GetFullPath(fileName);
  

Does the code validate URLs?

Check that the code uses the System.Uri class to validate URLs rather than custom string parsing.

The application should not contain code similar to the following example.

Uri siteUri = new Uri("https://www.contoso.com/");
// use custom ways of verifying if the uri is absolute
  

Instead, the application should contain code similar to the following.

Uri siteUri = new Uri("https://www.contoso.com/");
// Use the .NET Framework's Uri.IsAbsoluteUri property
if (siteUri.IsAbsoluteUri)
{
   // Take appropriate action
}
  

Does the code use MapPath?

Review code for the use of MapPath. If the code uses MapPath to map a supplied virtual path in the requested URL to a physical path on the server, make sure it calls the overload that accepts a boolean allowCrossAppMapping parameter and passes false, so that cross-application mapping is not allowed.

The application should not contain code similar to the following example.

string mappedPath = Request.MapPath( inputPath.Text, 
                                       Request.ApplicationPath);
  

Instead, the application should contain code similar to the following.

try
{
  string mappedPath = Request.MapPath( inputPath.Text, 
                                       Request.ApplicationPath, false);
}
catch (HttpException)
{
  // Cross application mapping attempted.
}
  

Authentication

Authentication is the process of determining caller identity. Many ASP.NET Web applications use a password mechanism to authenticate users, where the user supplies a user name and password in an HTML form. When you review authentication code, determine whether user names and passwords are sent in plain text over an insecure channel, analyze how user credentials are stored, examine how credentials are verified, and see how the authenticated user is identified after initial logon. Table 4 lists the authentication vulnerabilities and their corresponding security implications.

Table 4: Authentication Vulnerabilities and Implications

Vulnerability: Weak passwords
Implications: Passwords can be guessed, and the risk of successful dictionary attacks increases.

Vulnerability: Clear text credentials in configuration files
Implications: Insiders who can access the server, or attackers who exploit a host vulnerability to download the configuration file, have immediate access to credentials.

Vulnerability: Passing clear text credentials over the network
Implications: Attackers can monitor the network to steal authentication credentials and spoof identity.

Vulnerability: Over-privileged accounts
Implications: The risks associated with a process or account compromise increase.

Vulnerability: Long sessions
Implications: The risks associated with session hijacking increase.

Vulnerability: Mixing personalization with authentication
Implications: Personalization data is suited to persistent cookies. Authentication cookies should not be persisted.

The following questions can help you to identify vulnerable areas:

  • Does the code enforce strong user management policies?
  • Does the code restrict the number of failed login attempts?

Does the code enforce strong user management policies?

To determine if the application enforces strong user management policies, consider the following questions:

  • Does the code enforce password complexity rules?

    If the application uses the membership system, check the values of the following attributes to make sure that the code enforces strong passwords:

    • passwordStrengthRegularExpression. The default is "".
    • minRequiredPasswordLength. The default is 7.
    • minRequiredNonalphanumericCharacters. The default is 1.

    Note   These default values are for the SQL Server and the Microsoft Active Directory® directory service membership providers.

    First, the SQL Server and Active Directory providers compare the password to the minRequiredPasswordLength and minRequiredNonalphanumericCharacters attributes. If the regular expression is intended to be the authoritative match, then the other two attributes should have weaker values, such as a minimum length of 1 and 0 non-alphanumeric characters.

    If you do not use ASP.NET membership, verify that the code used to create new user accounts ensures that passwords meet appropriate strength requirements.

  • Do you store passwords in code or in configuration files?

    Verify that your code does not contain hard-coded passwords. Search for text strings such as "password" and "pwd".

Does the code restrict the number of failed logon attempts?

You should consider locking out accounts if a set number of failed logon attempts is exceeded. If you use the SQL membership provider, verify that you have set the following attributes in your provider definition.

  • maxInvalidPasswordAttempts. This defines the number of failed password attempts or failed password answer attempts that are allowed before locking out a user's account. When the number of failed attempts equals the value set in this attribute, the user's account is locked out. The default value is 5.
  • passwordAttemptWindow. This defines the time window, in minutes, during which failed password attempts and failed password answer attempts are tracked. The default value is 10.

Note   If you use the default values, and there are 5 failed login attempts within 10 minutes, the account is locked out.
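
A minimal provider-definition sketch that sets these attributes explicitly (the provider name, type registration, and connection string name are placeholders; the values shown are the defaults):

<membership defaultProvider="SqlProvider">
  <providers>
    <add name="SqlProvider"
         type="System.Web.Security.SqlMembershipProvider"
         connectionStringName="MembershipDB"
         maxInvalidPasswordAttempts="5"
         passwordAttemptWindow="10" ... />
  </providers>
</membership>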

Forms Authentication

If you use forms authentication, you should review your code to make sure that you properly secure the authentication ticket and that passwords are stored securely in persistent stores. You should also review the code that accesses the user store to make sure that there are no vulnerabilities. Failing to protect authentication tickets is a common vulnerability that can lead to unauthorized spoofing and impersonation, session hijacking, and elevation of privilege. Table 5 lists the forms authentication vulnerabilities and their corresponding security implications.

Table 5: Forms Authentication Vulnerabilities and Implications

Vulnerability: Failure to protect the forms authentication cookie
Implications: An attacker can authenticate with a stolen cookie.

Vulnerability: Forms authentication cookies are shared by multiple applications
Implications: A user authenticated with one application also has access to any other application that shares the same authentication cookie.

Vulnerability: Passwords are stored in a database in clear text
Implications: An attacker with access to the database can steal user authentication credentials.

The following questions can help you make sure that the forms authentication implementation is protected.

  • Does the code use membership?
  • Does the code persist forms authentication cookies?
  • Does the code reduce ticket lifetime?
  • Does the code use protection="All"?
  • Does the code restrict authentication cookies to HTTPS connections?
  • Does the code use SHA1 for HMAC generation and AES for encryption?
  • Does the code use distinct cookie names and paths?
  • Does the code keep personalization cookies separate from authentication cookies?
  • Does the code use absolute URLs for navigation?
  • How does the code store passwords in databases?
  • Does the code partition the Web site into restricted and public access areas?

Does the code use membership?

Determine if the code uses ASP.NET 2.0 membership with forms authentication. This reduces the amount of code that developers need to write to manage user accounts and verify user credentials, and it helps to ensure that the application supports strong user management policies.
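
For example, a login handler that uses the two features together might look like the following minimal sketch (the text box and label names are illustrative).

// Validate the credentials against the configured membership provider.
if (Membership.ValidateUser(txtUserName.Text, txtPassword.Text))
{
  // Issue a non-persistent forms authentication ticket and redirect.
  FormsAuthentication.RedirectFromLoginPage(txtUserName.Text, false);
}
else
{
  // Do not reveal which part of the credentials was wrong.
  lblMessage.Text = "Invalid user name or password.";
}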

Does the code persist forms authentication cookies?

The code should not persist authentication cookies because they are stored in the user's profile and can be stolen if an attacker gets physical access to the user's computer. The code should not do the following:

  • Set the DisplayRememberMe property of the Login control to true.
  • Request a persistent cookie when calling the RedirectFromLoginPage or SetAuthCookie methods of the FormsAuthentication class.
  • Pass true to the FormsAuthenticationTicket constructor to request a persistent cookie.

Does the code reduce ticket lifetime?

If the code cannot use secure sockets layer (SSL) to protect the forms authentication ticket, does it reduce the ticket lifetime? The default timeout for an authentication cookie is 30 minutes. Review the <forms> definition and consider reducing the timeout as shown in the following example.

<forms 
    timeout="10" 
    slidingExpiration="true" ... />
  

Does the code use protection="All"?

Make sure that your forms authentication tickets are encrypted and integrity checked by setting protection="All" on the <forms> element. This is the default setting, and you can view this in the Machine.config.comments file.

<forms protection="All" ... />
  

Make sure that your application-specific Web.config file does not override this default setting.

Does the code restrict authentication cookies to HTTPS connections?

Ensure that you use SSL with all pages that require authenticated access, and restrict forms authentication tickets to SSL channels by setting requireSSL="true" on the <forms> element, as shown in the following example.

<forms loginUrl="Secure\Login.aspx"
       requireSSL="true" ... />
  

By setting requireSSL="true", you set the secure cookie property that determines whether browsers should send the cookie back to the server. With the secure property set, the browser only sends the cookie to a secure page that is requested by using an HTTPS URL.

Note   If you are using cookieless sessions, you must ensure that the authentication ticket is never transmitted across an unsecured channel.

Does the code use SHA1 for HMAC generation and AES for encryption?

Verify that the application does not override the default algorithm settings documented in the Machine.config.comments file, as shown in the following example.

<machineKey 
   ...
   decryption="Auto" 
   validation="SHA1" />
  

Does the code use distinct cookie names and paths?

Make sure that the code uses unique name and path attribute values on the <forms> element, as shown in the following example.

<forms name="YourAppName"
       path="/FormsAuth" ... />
  

Does the code keep personalization cookies separate from authentication cookies?

The code should avoid creating a persistent authentication cookie and loading it with user preference data, as shown in the following example.

FormsAuthenticationTicket ticket = new FormsAuthenticationTicket( 1,
                            Context.User.Identity.Name,
                            System.DateTime.Now,
                            System.DateTime.Now.AddMinutes(15),
                            true, 
                            userPreferenceData, 
                            roleStr );
  

Instead, create non-persistent authentication cookies and create a separate personalization cookie, as shown in the following example.

// Authentication ticket without user preferences
FormsAuthenticationTicket ticket = new FormsAuthenticationTicket( 1,
                                        Context.User.Identity.Name,
                                        DateTime.Now,
                                        DateTime.Now.AddMinutes(15),
                                        false,
                                        roleStr);
// Create the preferences cookie.
Response.Cookies.Add(new HttpCookie( cookieName, userPreferenceData ));
  

Does the code use absolute URLs for navigation?

Make sure that the code uses absolute links such as https://servername/appname/publicpage.aspx when redirecting from an HTTPS page to an HTTP page. Also verify that when your code redirects to a secure page (for example, the logon page) from a public area of your site, it uses an absolute HTTPS path, such as https://servername/appname/secure/login.aspx, instead of a relative path, such as restricted/login.aspx.

For example, if your Web page provides a logon button, it should use the following code to redirect to the secure login page.

private void btnLogon_Click( object sender, System.EventArgs e )
{
  // Form an absolute path using the server name and v-dir name
  string serverName = 
         HttpUtility.UrlEncode(Request.ServerVariables["SERVER_NAME"]);
  string vdirName = Request.ApplicationPath;
  Response.Redirect("https://" + serverName + vdirName + 
                    "/Restricted/Login.aspx");
}
  

How does the code store passwords in databases?

Check how your code stores passwords. Passwords should be stored as non-reversible hashes with an added random salt value. If your code uses the SqlMembershipProvider for storing passwords, make sure that the passwordFormat attribute is set to Hashed. The configuration setting should look similar to the following.

<membership>
  <providers>
    <add passwordFormat="Hashed" ... />
  </providers>
</membership>
  

Does the code partition the Web site into restricted and public access areas?

If your Web application requires users to complete authentication before they can access specific pages, make sure that the restricted pages are placed in a separate directory away from publicly accessible pages. This allows you to configure the restricted directory to require SSL. It also helps you to ensure that authentication cookies are not passed over unencrypted sessions by using HTTP.

Authorization

Examine the code to make sure that it does not have the authorization vulnerabilities shown in Table 6.

Table 6: Authorization Vulnerabilities and Implications

Vulnerability: Reliance on a single gatekeeper
Implications: If the gatekeeper is bypassed or is improperly configured, a user can gain unauthorized access.

Vulnerability: Failing to lock down system resources against application identities
Implications: An attacker can coerce the application into accessing restricted system resources.

Vulnerability: Failing to limit database access to specified stored procedures
Implications: An attacker can mount a SQL injection attack to retrieve, manipulate, or destroy data.

Vulnerability: Inadequate separation of privileges
Implications: There is no accountability or ability to perform per-user authorization.

The following questions can help you to identify vulnerable areas:

  • How does the code protect access to restricted pages?
  • How does the code protect access to page classes?
  • Does the code use Server.Transfer?

How does the code protect access to restricted pages?

If the application uses Windows authentication, has it configured NTFS permissions on the page (or the folder that contains the restricted pages) to allow access only to authorized users?

Is the <authorization> element configured to specify which users and groups of users can access specific pages?
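
For example, a configuration similar to the following sketch restricts a folder to a specific role (the folder and role names are placeholders).

<location path="Restricted">
  <system.web>
    <authorization>
      <allow roles="Managers" />
      <deny users="*" />
    </authorization>
  </system.web>
</location>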

How does the code protect access to page classes?

Are principal permission demands added to classes to specify which users and groups of users can access the classes?
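
A minimal sketch of a declarative principal permission demand on a page class (the role name is a placeholder):

// Requires: using System.Security.Permissions;
[PrincipalPermission(SecurityAction.Demand, Role = "Managers")]
public partial class RestrictedPage : System.Web.UI.Page
{
  ...
}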

Does the code use Server.Transfer?

Make sure that if the code uses Server.Transfer to transfer a user to another page, the currently authenticated user is authorized to access the target page. If the code uses Server.Transfer to transfer to a page that the user is not authorized to view, the page is still processed. This is because Server.Transfer processes the target page within the current request rather than issuing a new request, so the authorization checks that run for a new request are not re-applied.

The code should not use Server.Transfer if security is a concern on the target Web page. It should use HttpResponse.Redirect instead.

Code Access Security

Code access security is a resource-constraint model designed to restrict the types of system resources that the code can access and the types of privileged operations that the code can perform. These restrictions are independent of the user who calls the code or the user account under which the code runs. If the code you are reviewing operates in partially-trusted environments and uses explicit code access security techniques, review it carefully to make sure that code access security is used appropriately. Table 7 shows possible vulnerabilities that occur with improper use of code access security.

Table 7: Code Access Security Vulnerabilities and Implications

Vulnerability: Improper use of link demands or asserts
Implications: The code is susceptible to luring attacks.

Vulnerability: Code allows untrusted callers
Implications: Malicious code can use the code to perform sensitive operations and access resources.

If your code uses explicit code access security techniques, review it for the following:

  • Does the code use link demands or assert calls?
  • Does the code use AllowPartiallyTrustedCallersAttribute?
  • Does the code use potentially dangerous permissions?
  • Does the code give dependencies too much trust?

Does the code use link demands or assert calls?

Look closely at each use of LinkDemand and Assert calls. These can open the code to luring attacks because the code access security stack walk is stopped before it is complete. While the use of these calls is sometimes necessary for performance reasons, make sure that no untrusted caller higher in the stack can use the method's LinkDemand or Assert call as a mechanism for attack.

Does the code use AllowPartiallyTrustedCallersAttribute?

Pay particular attention if the code you are reviewing allows partially trusted callers by including the following attribute.

 [assembly: AllowPartiallyTrustedCallersAttribute()]
  

This allows the assembly to be accessible from calling code that is not fully trusted. If the code you are reviewing then calls an assembly that does not allow partially trusted callers, a security issue could result.

Does the code use potentially dangerous permissions?

Check the code for requests to potentially dangerous permissions such as: UnmanagedCode, MemberAccess, SerializationFormatter, SkipVerification, ControlEvidence / ControlPolicy, ControlAppDomain, ControlDomainPolicy, and SuppressUnmanagedCodeSecurityAttribute.

The following code example uses SuppressUnmanagedCodeSecurityAttribute. You should flag this as a dangerous practice.

 [DllImport("Crypt32.dll", SetLastError=true, CharSet=System.Runtime.InteropServices.CharSet.Auto)]
[SuppressUnmanagedCodeSecurity]
private static extern bool CryptProtectData(
  ref DATA_BLOB pDataIn, 
  string szDataDescr, 
  ref DATA_BLOB pOptionalEntropy,
  IntPtr pvReserved, 
  ref CRYPTPROTECT_PROMPTSTRUCT pPromptStruct, 
  int dwFlags, 
  ref DATA_BLOB pDataOut);
  

Does the code give dependencies too much trust?

Without explicit safeguards, it is possible for an attacker to trick your code into loading a malicious library instead of trusted code. Check that all loaded assemblies are strongly named; strong names ensure that the assemblies have not been tampered with. Without strong names, your code could be calling into malicious code without knowing it. Strong naming is not available for native libraries, so be cautious about trusting native code implicitly; native libraries can be verified with a hash or a certificate. Additionally, make sure that all libraries are loaded with a complete path in order to avoid canonicalization attacks.

Also check whether delay signing is enabled. Delay signing is generally regarded as a good practice because it helps protect the confidentiality of the private key that will be used for signing the component:

 [assembly:AssemblyDelaySignAttribute(true)]
  

Exception Management

Secure exception handling can help prevent certain application-level denial of service attacks, and can also prevent system-level information that is useful to attackers from being returned to the client. For example, without proper exception handling, information such as database schema details, operating system versions, stack traces, file names and path information, SQL query strings, and other information of value to an attacker can be returned to the client. Table 8 shows possible vulnerabilities that can result from poor or missing exception management, and their implications.

Table 8: Exception Management Vulnerabilities and Implications

Vulnerability: Failing to use structured exception handling
Implications: The application is more susceptible to denial of service attacks and logic flaws, which can expose security vulnerabilities.

Vulnerability: Revealing too much information to the client
Implications: An attacker can use the information to help plan and tune subsequent attacks.

The following questions can help you to identify vulnerable areas:

  • Does the code handle errors and exception conditions?
  • Does the code use a global error handler?
  • Does the code leak sensitive information in exceptions?
  • Does the application expose sensitive information in user sessions?
  • Does the application fail securely in the event of exceptions?

Does the code handle errors and exception conditions?

Use structured exception handling and catch exception conditions. Doing this improves robustness and avoids leaving the application in an inconsistent state that may lead to information disclosure. It also helps protect the application from denial of service attacks. The code should use finally blocks to ensure that resources are cleaned up and closed even in the event of an exception condition. Developers should determine how to propagate exceptions internally in their code and give special consideration to what occurs at the application boundary.

The application should not contain code similar to the following example.

// Write to a new file
StreamWriter sr = File.CreateText(FILE_NAME);
sr.WriteLine ("This is my file.");
sr.WriteLine ("I can write ints {0} or floats {1}, and so on.", 1, 4.2);
sr.Close();
  

Instead, the application should contain code similar to the following.

// Write to a new file
if (File.Exists(FILE_NAME))
{
  // Error Condition
  logToFile("{0} already exists.", FILE_NAME);
}
else
{
  // Exception handling
  try
  {
    StreamWriter sr = File.CreateText(FILE_NAME);
    sr.WriteLine ("This is my file.");
    sr.WriteLine ("I can write ints {0} or floats {1}, and so on.", 1, 4.2);
    sr.Close();
  }
  catch
  {
    ...
    throw;
  }
  finally
  {
    // resources cleanup
  }
}
  

Does the code use a global error handler?

Make sure that the code includes an application-level global error handler in Global.asax to catch any exceptions that are not handled in code. This can prevent accidentally returning detailed error information to the client. The code should log all exceptions in the event log to record them for tracking and later analysis.

The application should contain code similar to the following:

<%@ Application Language="C#" %>
<%@ Import Namespace="System.Diagnostics" %>
 
<script language="C#" runat="server">
void Application_Error(object sender, EventArgs e)
{
  // Get reference to the source of the exception chain
  Exception ex = Server.GetLastError().GetBaseException();
 
  // log the details of the exception and page state to the
  // event log.
  EventLog.WriteEntry( "My Web Application",
                       "MESSAGE: " + ex.Message + 
                       "\nSOURCE: " + ex.Source, 
                       EventLogEntryType.Error);
  // Optional e-mail or other notification here...
 }
 </script>
  

Does the code leak sensitive information in exceptions?

By default, ASP.NET provides too much detail in error messages. Make sure that your application uses custom errors that do not include details that could help an attacker exploit a vulnerability.

The application should not contain code similar to the following example.

<customErrors mode="Off" />
  

Instead, the application should contain code similar to the following.

<customErrors mode="On" defaultRedirect="ErrorPage.htm">
  <error statusCode="404" redirect="NotFoundPage.htm"/>
  <error statusCode="500" redirect="InternalErrorPage.htm"/>
</customErrors>
  

Does the application expose sensitive information in user sessions?

Review the type of information that could be revealed to an attacker if he or she obtained a session ID. In ASP.NET applications, the session ID is properly randomized so that it is hard to guess session IDs. However, there are other ways an attacker can get this information. Make sure that the session ID is sent over SSL, and make sure that the session timeout is short.

Does the application fail securely in the event of exceptions?

Does the application implement proper exception handling?

The following code example fails to implement exception handling.

// Code within a System.Web.UI.Page
override protected void OnInit(EventArgs e)
{
  ...
  base.OnInit(e);
}
  

Instead, it should implement page-level or application-level error handlers, as shown in the following example.

// Code within an ASP.NET page
override protected void OnInit(EventArgs e)
{
  ...
  this.Error += new System.EventHandler(this.Page_Error);
  base.OnInit(e);
}
  
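
The Page_Error handler that the example registers is not shown; a minimal sketch might look like the following.

private void Page_Error(object sender, System.EventArgs e)
{
  // Record the exception, then clear it so that no details reach the client.
  Exception ex = Server.GetLastError();
  // ... log ex to the event log or another store ...
  Server.ClearError();
  Response.Redirect("ErrorPage.htm");
}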

Impersonation

If the application uses impersonation, make sure that it is properly implemented. Table 9 lists the impersonation vulnerabilities and their security implications.

Table 9: Impersonation Vulnerabilities and Implications

Vulnerability: Revealing service account credentials to the client
Implications: An attacker could use these credentials to attack the server.

Vulnerability: Code executes with higher privileges than expected
Implications: An attacker can do more damage when code runs with higher privileges.

The following questions can help you to identify vulnerable areas:

  • Does the application use hard-coded impersonation credentials?
  • Does the application clean up properly when it uses impersonation?

Does the application use hard-coded impersonation credentials?

If the code impersonates a service account, it should not pass hard-coded credentials to LogonUser. If the code needs multiple identities to access a range of downstream resources and services, it should use Microsoft Windows Server™ 2003 protocol transition and the WindowsIdentity constructor. This allows the code to create a Windows token when it has only an account's user principal name (UPN). To access a network resource, the code needs delegation. To use delegation, your server must be configured as trusted for delegation in Active Directory.

The following code shows how to construct a Windows token using the WindowsIdentity constructor.

using System;
using System.Security.Principal;
public void ConstructToken(string upn, out WindowsPrincipal p)
{
  WindowsIdentity id = new WindowsIdentity(upn);
  p = new WindowsPrincipal(id);
}
  

Does the application clean up properly when it uses impersonation?

If the code uses programmatic impersonation, be sure that it uses structured exception handling and that the impersonation code is inside try blocks. Be sure that catch blocks are used to handle exceptions and that finally blocks are used to ensure that the impersonation is reverted. By using a finally block, the code ensures that the impersonation token is removed from the current thread, whether an exception is generated or not.

The application should not contain code similar to the following example:

try
{  
  ElevatePrivilege();
  // if ReadSecretFile throws an exception privileges will not be lowered
  ReadSecretFile();
  LowerPrivilege();
}
catch(FileException fe)
{
  ReportException();
}
  

Instead, it should contain code similar to the following:

try
{  
  ElevatePrivilege();
  // If ReadSecretFile throws an exception privileges will not be lowered
  ReadSecretFile();
}
catch(FileException fe)
{
  ReportException();
}
finally
{
  LowerPrivilege();
}
  
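
The same pattern applies when the code impersonates by using WindowsIdentity.Impersonate. A minimal sketch, assuming identity holds the WindowsIdentity of the account to impersonate:

// Requires: using System.Security.Principal;
WindowsImpersonationContext context = identity.Impersonate();
try
{
  // Access resources as the impersonated identity.
}
finally
{
  // Always revert, even if an exception is thrown.
  context.Undo();
}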

Sensitive Data

If the code you are reviewing uses sensitive data, such as connection strings and account credentials, you should make sure that the code protects the data and ensures that it remains private and unaltered. Table 10 shows a set of common sensitive data vulnerabilities and their implications.

Table 10: Sensitive Data Vulnerabilities and Implications

Vulnerability: Storing secrets when you do not need to
Implications: This drastically increases the security risk. Do not store secrets unnecessarily.

Vulnerability: Storing secrets in code
Implications: If the code is on the server, an attacker may be able to download it. Secrets are visible in binary assemblies.

Vulnerability: Storing secrets in clear text
Implications: Anyone who can log on to the server can see secret data.

Vulnerability: Passing sensitive data in clear text over networks
Implications: Eavesdroppers can monitor the network to reveal and tamper with the data.

The following questions help you to identify vulnerable areas:

  • Does the code store secrets?
  • Is sensitive data stored in predictable locations?
  • Does the code store sensitive data in view state?
  • Does the code pass sensitive data across Web pages?

Does the code store secrets?

If an assembly stores secrets, review the design to make sure that it is absolutely necessary to store the secret. If the code must store a secret, review the following questions to make sure that it does so as securely as possible:

  • Does the application store secrets in code?

    Are there secrets or critical intellectual property embedded in the code? Managed code is easy to decompile; it is possible to recover code from the final executable that is very similar to the original source. Any sensitive intellectual property or hard-coded secrets can be stolen with ease. An obfuscator can make this type of theft more difficult, but cannot entirely prevent it. Another common mistake is to place sensitive information in hidden form fields in the belief that it will not be visible to the user.

    The following is an example of bad code containing hard-coded account credentials:

    IntPtr tokenHandle = new IntPtr(0);
    IntPtr dupeTokenHandle = new IntPtr(0);
    string userName = "joe", domainName = "acmecorp", password="p@Ssw0rd";
    const int LOGON32_PROVIDER_DEFAULT = 0;
    // This parameter causes LogonUser to create a primary token.
    const int LOGON32_LOGON_INTERACTIVE = 2;
    const int SecurityImpersonation = 2;
    tokenHandle = IntPtr.Zero;
    dupeTokenHandle = IntPtr.Zero;
    // Call LogonUser to obtain a handle to an access token.
    bool returnValue = LogonUser(userName, 
                                 domainName, 
                                 password, 
                                 LOGON32_LOGON_INTERACTIVE, 
                                 LOGON32_PROVIDER_DEFAULT,
                                 ref tokenHandle);
    
    
  • How does the code encrypt secrets?

    Verify that the code uses DPAPI to encrypt connection strings and credentials; a brief sketch follows this list. Do not store secrets in the Local Security Authority (LSA), because the account used to access the LSA requires extended privileges. For information on using DPAPI, see "How To: Create a DPAPI Library" in the "How To" section of Microsoft patterns & practices Volume I, Building Secure ASP.NET Applications: Authentication, Authorization, and Secure Communication or How To: Encrypt Configuration Sections in ASP.NET 2.0 Using DPAPI.

  • Does the code store secrets in the registry?

    If the code stores secrets in HKEY_LOCAL_MACHINE, verify that the secrets are first encrypted and then secured with a restricted ACL. An ACL is not required if the code stores secrets in HKEY_CURRENT_USER because this registry key is automatically restricted to processes running under the associated user account.

  • Does the code eliminate secrets from memory?

    Look for failure to clear secrets from memory after use. Because the common language runtime (CLR) manages memory for you, this is actually harder to do in managed code than it used to be in native code. To make sure that secrets are adequately cleared, verify that the following steps have been taken:

    • Strings should not be used to store secrets; they cannot be changed or effectively cleared. Instead the code should use a byte array or a CLR 2.0 SecureString.
    • Whatever type the code uses, it should call the Clear method as soon as it is finished with the data.
    • If the secret is paged to disk, it can persist for long periods of time and be difficult to completely clear. Make sure that GCHandle.Alloc and GCHandleType.Pinned are used to keep the managed objects from being paged to disk.
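
As noted in the DPAPI question above, the .NET Framework 2.0 ProtectedData class wraps DPAPI; the following is a minimal sketch (the connectionString variable and the entropy value are illustrative).

// Requires a reference to System.Security.dll and:
// using System.Security.Cryptography; using System.Text;
byte[] plaintext = Encoding.UTF8.GetBytes(connectionString);
byte[] entropy = Encoding.UTF8.GetBytes("application-specific entropy");  // illustrative value

// Encrypt under the machine key so that any process on this server can decrypt;
// use DataProtectionScope.CurrentUser to restrict decryption to one account.
byte[] ciphertext = ProtectedData.Protect(
    plaintext, entropy, DataProtectionScope.LocalMachine);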

Is sensitive data stored in predictable locations?

Sensitive data should be stored and transmitted in encrypted form; anything less invites theft. For example, a common error is to store database server passwords in the ASP.NET Web.config file, as shown in the following example.

<!-- web.config -->
<connectionStrings>
  <add name="MySQLServer" 
       connectionString="Initial Catalog=finance;Data Source=localhost;User ID=Bob;Password=pwd;"
       providerName="System.Data.SqlClient"/>
</connectionStrings>
  

Instead, the connection strings should be encrypted with the Aspnet_regiis utility. The command syntax is:

aspnet_regiis -pe "connectionStrings" -app "/MachineDPAPI" -prov "DataProtectionConfigurationProvider"

The Web.config file after encryption should be similar to the following.

<!-- web.config after encrypting the connection strings section -->
...
<connectionStrings configProtectionProvider="DataProtectionConfigurationProvider">
  <EncryptedData>
    <CipherData>
<CipherValue>AQAAANCMnd8BFdERjHoAwE/Cl+sBAAAAexuIJ/8oFE+sGTs7jBKZdgQAAAACAAAAAAADZgAAqAAAABAAAAAKms84dyaCPAeaSC1dIMIBAAAAAASAAACgAAAAEAAAAKaVI6aAOFdqhdc6w1Er3HMwAAAAcZ00MZOz1dI7kYRvkMIn/
BmfrvoHNUwz6H9rcxJ6Ow41E3hwHLbh79IUWiiNp0VqFAAAAF2sXCdb3fcKkgnagkHkILqteTXh</CipherValue>
    </CipherData>
  </EncryptedData>
</connectionStrings>
...

Similarly, the code should not store forms authentication credentials in the Web.config file, as illustrated in the following example.

<authentication mode="Forms">
   <forms name="App" loginUrl="/login.aspx">
      <credentials passwordFormat="Clear">
         <user name="UserName1" password="Password1"/>
         <user name="UserName2" password="Password2"/>
         <user name="UserName3" password="Password3"/>
      </credentials>
   </forms>
</authentication>
  

Instead, use an external store with well-controlled access, such as Active Directory or a SQL Server database.

Does the code store sensitive data in view state?

The code should not store sensitive data in view state. It should use server-side storage instead. If the application must store sensitive data in view state, make sure that view state is tamper proof and encrypted. Verify the following settings: enableViewStateMac="true" (the default setting) and viewStateEncryptionMode="Auto". Also verify that the code invokes Page.RegisterRequiresViewStateEncryption when encryption is required.

Do not use code similar to the following example.

<!-- machine.config fragment -->
<pages buffer="true" enableSessionState="true"
       enableViewState="true" enableViewStateMac="false"
       autoEventWireup="true" validateRequest="true"/>
  

Instead, use code similar to the following.

<!-- machine.config fragment -->
<pages buffer="true" enableSessionState="true"
       enableViewState="true" enableViewStateMac="true"
       autoEventWireup="true" validateRequest="true"
       viewStateEncryptionMode="Auto" />
  

Does the code pass sensitive data across Web pages?

The code should not use a hidden object to store sensitive data, and then use view state to pass it across pages. The following bad code illustrates this practice.

...
protected System.Web.UI.WebControls.TextBox sensitiveDataField;
sensitiveDataField.Visible = false;
...
if (userIsAuthorized)
{
  sensitiveDataField.Text = sensitiveData.ToString();
}
...
  

Instead, use server-side state management to pass sensitive data, as shown in the following example.

<!-- Web.config file fragment -->
<sessionState mode="InProc" ... cookieless="true" />
...
if (userIsAuthorized)
{
  Session[sensitiveDataId] = sensitiveData.ToString();
}
...
  

Data Access

If the application accesses a database, make sure that data access is implemented securely. Table 11 lists the data access vulnerabilities and their security implications.

Table 11: Data Access Vulnerabilities and Implications

Vulnerability: Failing to protect database connection strings
Implications: The database can be compromised.

Vulnerability: Using over-privileged accounts to access SQL Server
Implications: An attacker can use the extended privileges to execute commands at the database.

The following questions can help you to identify vulnerable areas:

  • Does the application use SQL authentication?
  • How does the application store database connection strings?

Does the application use SQL authentication?

The application should avoid using SQL authentication and should use Windows authentication where possible.

The code should use connection strings with Trusted_Connection="Yes" or Integrated Security="SSPI", as shown in the following example.

// Uses thread's security context to connect using Windows authentication
"Server=YourServer;Database=YourDatabase;Trusted_Connection='Yes';"
// Same as above
"Server=YourServer;Database=YourDatabase;Integrated Security=SSPI;"
  

The application should not use connection strings that contain user names and passwords, and in particular, it should avoid using highly privileged accounts, such as the sa account, and blank passwords. The code should avoid using the following connection strings.

// Contains user names and passwords
"Server=YourServer;Database=YourDatabase;uid=username;pwd=pwd;"
// Uses the sa account
"Server=YourServer;Database=YourDatabase;uid=sa;pwd=pwd;"
// Uses blank passwords
"Server=YourServer;Database=YourDatabase;uid=username;pwd=;"
  

How does the application store database connection strings?

Verify that connection strings are stored in encrypted format. This is particularly important if the connection strings contain credentials. If the application uses Windows authentication, encrypting the strings protects the database server name. However, you should weigh this benefit against the increased deployment complexity.

If the application uses the <connectionStrings> section of the Web.config file to store connection strings, make sure that this section is encrypted with the Aspnet_regiis tool.

Cryptography

Review the code to see whether it uses cryptography to provide privacy, integrity, non-repudiation, or authentication. Table 12 shows the vulnerabilities that can be introduced if cryptography is used inappropriately.

Table 12: Cryptography Vulnerabilities and Implications

Vulnerability: Using custom cryptography
Implications: This is less secure than the tried and tested platform-provided cryptography.

Vulnerability: Using the wrong algorithm or too small a key size
Implications: Newer algorithms increase security. Larger key sizes increase security.

Vulnerability: Failing to secure encryption keys
Implications: Encrypted data is only as secure as the encryption key.

Vulnerability: Using the same key for a prolonged period of time
Implications: A static key is more likely to be discovered over time.

The following questions help you to identify vulnerable areas:

  • Does the code use custom cryptographic algorithms?
  • Does the code use the right algorithm with an adequate key size?
  • How does the code manage and store encryption keys?
  • Does the code generate random numbers for cryptographic purposes?

Does the code use custom cryptographic algorithms?

Look for custom cryptographic routines. Make sure that the code uses the System.Security.Cryptography namespace. Cryptography is notoriously difficult to implement correctly. The Windows crypto APIs are implementations of algorithms derived from years of academic research and study. Some believe that a less well-known algorithm is more secure, but this is not true. Cryptographic algorithms gain their strength from rigorous mathematical analysis and broad public review; an obscure, untested algorithm does not protect a flawed implementation from a determined attacker.
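
For reference, the following minimal sketch encrypts a buffer with the platform-provided Rijndael (AES) implementation from System.Security.Cryptography; the key and IV are passed in by the caller, and how they are managed is covered by the key management question below.

using System.IO;
using System.Security.Cryptography;

// Encrypt a byte array with the platform-provided Rijndael (AES) implementation.
static byte[] Encrypt(byte[] plaintext, byte[] key, byte[] iv)
{
  using (RijndaelManaged algorithm = new RijndaelManaged())
  using (ICryptoTransform encryptor = algorithm.CreateEncryptor(key, iv))
  using (MemoryStream output = new MemoryStream())
  using (CryptoStream cryptoStream = new CryptoStream(output, encryptor, CryptoStreamMode.Write))
  {
    cryptoStream.Write(plaintext, 0, plaintext.Length);
    cryptoStream.FlushFinalBlock();
    return output.ToArray();
  }
}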

Does the code use the right algorithm with an adequate key size?

Review your code to see what algorithms and key sizes it uses, and consider the following questions:

  • Does the code use symmetric encryption?

    If so, check that it uses Rijndael (now referred to as Advanced Encryption Standard [AES]) or Triple Data Encryption Standard (3DES) when encrypted data needs to be persisted for long periods of time. Use the weaker (but quicker) RC2 and DES algorithms only to encrypt data that has a short lifespan, such as session data.

  • Does the code use the largest key sizes possible?

    Use the largest key size possible for the algorithm you are using. Larger key sizes make attacks against the key much more difficult, but can degrade performance.
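
As an illustration, a symmetric algorithm's LegalKeySizes property reports the key sizes it supports, and KeySize selects one. The following sketch assumes the platform-provided Rijndael implementation and simply picks its largest key size (256 bits).

using System;
using System.Security.Cryptography;

// Report the key sizes the algorithm supports, then select the largest one.
static void SelectLargestKeySize()
{
  using (SymmetricAlgorithm algorithm = new RijndaelManaged())
  {
    foreach (KeySizes sizes in algorithm.LegalKeySizes)
    {
      Console.WriteLine("Min={0} Max={1} Skip={2}", sizes.MinSize, sizes.MaxSize, sizes.SkipSize);
    }
    algorithm.KeySize = 256;  // Rijndael/AES: largest supported key size
  }
}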

How does the code manage and store encryption keys?

Look for poor management of keys. Flag hard-coded key values; leaving keys in the code virtually guarantees that an attacker who obtains the assembly can recover them and defeat the encryption. Also make sure that key values are not passed from method to method by value, because doing so leaves multiple copies of the secret in memory for an attacker to discover.
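
One way to keep keys out of source code, shown here only as a sketch (the method names are illustrative), is to protect the key at rest with DPAPI through the ProtectedData class and persist only the protected bytes.

// Requires a reference to System.Security.dll.
using System.Security.Cryptography;

// Protect a key with DPAPI before persisting it; unprotect it only when it is needed.
static byte[] ProtectKey(byte[] key)
{
  return ProtectedData.Protect(key, null, DataProtectionScope.LocalMachine);
}

static byte[] UnprotectKey(byte[] protectedKey)
{
  return ProtectedData.Unprotect(protectedKey, null, DataProtectionScope.LocalMachine);
}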

Does the code generate random numbers for cryptographic purposes?

Look for poor random number generators. Make sure that the code uses System.Security.Cryptography.RNGCryptoServiceProvider to generate cryptographically secure random numbers. The System.Random class produces numbers that are repeatable and predictable, so it is not suitable for cryptographic purposes.
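
A minimal sketch of the recommended approach follows; the method name is illustrative only.

using System.Security.Cryptography;

// Fill a buffer with cryptographically strong random bytes.
static byte[] GetRandomBytes(int length)
{
  byte[] data = new byte[length];
  new RNGCryptoServiceProvider().GetBytes(data);
  return data;
}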

Unsafe Code

Pay particularly close attention to any code compiled with the /unsafe switch. This code does not receive all of the protection that normal managed code is given. Look for potential buffer overflows, array out-of-bounds errors, integer underflow and overflow, and data truncation errors. Table 13 shows possible vulnerabilities that can be introduced in unsafe code.

Table 13: Unsafe Code Vulnerabilities and Implications

Vulnerability | Implications
Buffer overrun in unmanaged code or code marked /unsafe | Allows arbitrary code execution by using the privileges of the running application.
Integer overflow in unmanaged code or code marked /unsafe | Unexpected calculation results in system instability or allows an attacker to read arbitrary memory.
Format string problem in unmanaged code or code marked /unsafe | An attacker can read or modify arbitrary memory.
Array out of bounds in unmanaged code or code marked /unsafe | Failure to check array bounds before access can allow an attacker to read arbitrary memory.
Data truncation in unmanaged code or code marked /unsafe | Unexpected data truncation can result in system instability or allow an attacker to read arbitrary memory.

Review unsafe code by using the following questions:

  • Is the code susceptible to buffer overruns?
  • Is the code susceptible to integer overflows?
  • Is the code susceptible to format string problems?
  • Is the code susceptible to array out of bound errors?

Is the code susceptible to buffer overruns?

Buffer overruns are a vulnerability that may lead to execution of arbitrary code. While tracing through unmanaged or unsafe code, make sure that the following rules are followed:

  • Make sure that any function that copies variable-length data into a buffer accepts a maximum length parameter and uses it properly.
  • Make sure that the code does not rely on another layer or tier for data truncation.
  • If you see a problem, make sure the code truncates the data instead of expanding the buffer to fit it. Buffer expansion may just move the problem downstream.
  • Make sure any unmanaged code was compiled with the /GS option.

The application should not contain code similar to the following example.

public void ProcessInput()
{
  char[] data = new char[255];  
  GetData(data);
}

public unsafe void GetData(char[] buffer)
{
  int ch = 0;
  fixed (char* pBuf = buffer)
  {
    do
    {
      ch = System.Console.Read();
      *(pBuf++) = (char)ch;
    } while (ch != '\n');
  }
}
  

In this code example, an overflow occurs whenever a single line is more than 255 characters long. There are two problems in this code:

  • The ProcessInput function allocates only enough space for 255 characters.
  • The GetData function does not check the size of the array as it fills it.
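
One way to address both problems, sketched below under the assumption that input beyond the buffer length may simply be left unread, is to honor the caller's buffer length and rely on the CLR's bounds checking instead of pointer arithmetic.

// Bounds-checked sketch: never writes past the end of the caller's buffer.
public void GetData(char[] buffer)
{
  int i = 0;
  int ch;
  while (i < buffer.Length && (ch = System.Console.Read()) != -1 && ch != '\n')
  {
    buffer[i++] = (char)ch;
  }
}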

Is the code susceptible to integer overflows?

This problem occurs when a calculation causes a data value to become larger or smaller than its data type allows, which makes the value wrap and become much larger or smaller than expected. As you trace data through unmanaged or unsafe code, make sure that user-supplied input that feeds a calculation cannot cause an underflow or overflow condition.

The application should not contain code similar to the following example.

int[] filter(uint len, int[] numbers)
{
  uint newLen = len * 3/4;
  int[] buf = new int[newLen];
  int j = 0;
  for (int i = 0; i < len; i++)
  {
    if (i % 4 != 0)
    {
      buf[j++] = numbers[i];
    }
  }
  return buf;
}
  

The problem in this example is that, in calculating newLen, the code first computes len * 3 and then divides by 4. When len is large enough (about 1.4 billion), len * 3 overflows and newLen is assigned a value that is too small. The result is out-of-range access to the buf array.
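
A sketch of one possible fix is to perform the size calculation in a checked context, or with a wider intermediate type, so that an overflow raises an exception instead of silently wrapping.

int[] filter(uint len, int[] numbers)
{
  // checked(...) throws OverflowException instead of letting len * 3 wrap.
  uint newLen = checked(len * 3 / 4);
  // Alternatively: uint newLen = (uint)((ulong)len * 3 / 4);
  int[] buf = new int[newLen];
  int j = 0;
  for (int i = 0; i < len; i++)
  {
    if (i % 4 != 0)
    {
      buf[j++] = numbers[i];
    }
  }
  return buf;
}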

Is the code susceptible to format string problems?

Format string problems are caused by the way that the printf family of functions interprets format directives, such as %n, in the format string. While you review unmanaged or unsafe code, make sure that format strings never contain user input.

The application should not contain code similar to the following example.

#include <stdio.h>

int main (int argc, char **argv)
{
  /* Whatever the user said, spit back! */
  printf (argv[1]);
  return 0;
}
  

In this example, untrusted input in the form of a command-line parameter is passed directly to printf as the format string. This means that an attacker could include format string % directives in the input and force the application to read or modify arbitrary memory on the stack. The safe pattern is to pass untrusted input as an argument to a constant format string, for example printf("%s", argv[1]).

Is the code susceptible to array out of bound errors?

Array indexing errors, like buffer overruns, can lead to memory being overwritten at arbitrary locations. This can cause application instability or, with a carefully constructed attack, code injection. While tracing through unmanaged or unsafe code, make sure that the following rules are followed:

  • With C/C++ code, make sure that indexes run from zero to n-1, where n is the number of array elements.  
  • Where possible, make sure that code does not use input parameters as array indices.
  • Make sure that any input parameters used as array indices are validated and constrained to ensure that the maximum and minimum array bounds cannot be exceeded.
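
As an illustration of the last rule, the following minimal sketch constrains an externally supplied index before using it; the method and parameter names are illustrative.

// Reject any index that falls outside the bounds of the array before using it.
public int GetItem(int[] items, int index)
{
  if (index < 0 || index >= items.Length)
  {
    throw new System.ArgumentOutOfRangeException("index");
  }
  return items[index];
}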

Potentially Dangerous Unmanaged APIs

In addition to the checks performed for unsafe code, you should review unmanaged code for the use of potentially dangerous APIs such as strcpy and strcat. (Table 15 lists the potentially dangerous APIs.) Be sure to review any interop calls, as well as the unmanaged code itself, to make sure that bad assumptions are not made as execution control passes from managed to unmanaged code. Table 14 shows potential vulnerabilities that can arise in unmanaged code.

Table 14: Unmanaged API Vulnerabilities and Implications

Vulnerability | Implications
A potentially dangerous unmanaged API is called improperly | An attacker could exploit the weakness in the potentially dangerous API to gain access to arbitrary memory locations or run arbitrary code.

Does the code call potentially dangerous unmanaged APIs?

Potentially dangerous unmanaged functions can be categorized as follows:

  • Unbound Functions (UF). These functions do not take an explicit bound parameter that limits the number of bytes they may write to one of their arguments. They are typically the most dangerous functions and should never be used.
  • NULL Terminated Functions (NTF). These functions require a NULL terminated string. If they are provided a string without NULL termination, they could overwrite memory. If the code uses NULL terminated functions, make sure that the loop does not have an additional placeholder for NULL; for example, for(i = 0; i <= 512; i++) should be < 512 not <= 512.
  • Non-NULL Terminated Functions (NNTF). The output of most string functions is NULL terminated; however, the output of a few is not. These require special treatment to avoid programming defects. If the code uses non-NULL terminated functions, make sure that the loop does have an additional placeholder for NULL.
  • Format Functions (FF). Format string functions allow a programmer to format input and output. If the format string is constructed from untrusted data, it can be manipulated and lead to programming defects.
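
When you review interop calls into functions like these, check that the managed code passes the real capacity of the buffer it allocated. The following sketch illustrates the pattern with a commonly used P/Invoke declaration for GetShortPathName; the declaration, class name, and buffer size are illustrative assumptions rather than something defined in this module.

using System.Runtime.InteropServices;
using System.Text;

class InteropExample
{
  [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
  static extern uint GetShortPathName(string longPath, StringBuilder shortPath, uint bufferSize);

  static string ToShortPath(string longPath)
  {
    // Pass the buffer's actual capacity so the unmanaged API cannot write past it.
    StringBuilder buffer = new StringBuilder(260);
    uint length = GetShortPathName(longPath, buffer, (uint)buffer.Capacity);
    // Zero indicates failure; a value larger than the capacity means the buffer was too small.
    return (length == 0 || length > (uint)buffer.Capacity) ? longPath : buffer.ToString();
  }
}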

Table 15 shows a range of potentially dangerous unmanaged APIs and the associated categories into which they fall.

Table 15: Potentially Dangerous Unmanaged APIs

Functions Category
Strcpy UF, NTF
Strcat UF, NTF
Strcat NTF
Strlen NTF
Strncpy NNTF
Strncat NNTF
Strcmp NTF
Mbcstows NNTF
_strdup NTF
_strrev NTF
Strstr NTF
Sprintf FF, NTF
_snprintf FF, NTF
Printf FF, NTF
Fprintf FF, NTF
Gets UF
Scanf FF, NTF
Fscanf FF, NTF
Sscanf FF, NTF
Strcspn NTF
MultiByteToWideChar NNTF
WideCharToMultiByte NNTF
GetShortPathNameW NTF
GetLongPathNameW NTF
WinExec NTF
CreateProcessW NTF
GetEnvironmentVariableW NTF
SetEnvironmentVariableW NTF
ExpandEnvironmentStringsW NTF
SearchPathW NTF
Lstrcpy UF, NTF
Wcscpy UF, NTF
_mbscpy UF, NTF
StrCpyA UF, NTF
StrCpyW UF, NTF
lstrcatA UF, NTF
lstrcatW UF, NTF
Wcscat UF, NTF
_mbscat UF, NTF
Wcslen NTF
_mbslen NTF
_mbstrlen NTF
lstrlenA NTF
lstrlenW NTF
Wcsncpy NNTF
_mbsncpy NNTF
StrCpyN NNTF
lstrcpynW NTF
lstrcatnA NTF
lstrcatnW NTF
Wcsncat NTF
_mbsncat NTF
_mbsnbcat NTF
lstrcmpA NTF
lstrcmpW NTF
StrCmp NTF
Wcscmp NTF
_mbscmp NTF
Strcoll NTF
Wcscoll NTF
_mbscoll NTF
_stricmp NTF
lstrcmpiA NTF
lstrcmpiW NTF
_wcsicmp NTF
_mbsicmp NTF
StrCmp NTF
_stricoll NTF
_wcsicoll NTF
_mbsicoll NTF
StrColl NTF
_wcsdup NTF
_mbsdup NTF
StrDup NTF
_wcsrev NTF
_mbsrev NTF
_strlwr NTF
_mbslwr NTF
_wcslwr NTF
_strupr NTF
_mbsupr NTF
_wcsupr NTF
Wcsstr NTF
_mbsstr NTF
Strspn NTF
Wcsspn NTF
_mbsspn NTF
Strpbrk NTF
Wcspbrk NTF
_mbspbrk NTF
Wcsxfrm NTF
Wcscspn NTF
_mbcscpn NTF
Swprintf FF
wsprintfA FF
wsprintfW FF
Vsprintf FF
Vswprintf FF
_snwprintf FF
_vsnprintf FF
_vsnwprintf FF
Vprintf FF
Vwprintf FF
Vfprintf FF
Vwfprintf FF
_getws UF
Fwscanf FF
Wscanf FF
Swscanf FF
OemToCharA UF, NTF
OemToCharW UF, NTF
CharToOemA UF, NTF
CharToOemW UF, NTF
CharUpperA NTF
CharUpperW NTF
CharUpperBuffW NTF
CharLowerA NTF
CharLowerW NTF
CharLowerBuffW NTF

Auditing and Logging

Does the code use auditing and logging effectively? Table 16 shows the vulnerabilities that can be introduced if auditing and logging are not used or are used incorrectly.

Table 16: Auditing and Logging Vulnerabilities and Implications

Vulnerability | Implications
Lack of logging | It is difficult to detect and repel intrusion attempts.
Sensitive data revealed in logs | An attacker could use logged credentials to attack the server or could steal other sensitive data from the log.

The following questions can help you to identify vulnerable areas:

  • Does the application use health monitoring?
  • Does the application log sensitive data?

Does the application use health monitoring?

ASP.NET version 2.0 introduces a health monitoring feature that you should use to log and audit events. By default, health monitoring is enabled for ASP.NET version 2.0 applications and all Web infrastructure error events (inheriting from System.Web.Management.WebErrorEvent) and all audit failure events (inheriting from System.Web.Management.WebFailureAuditEvent) are written to the event log. The default configuration is defined in the <healthMonitoring> element in the machine-level Web.config.comments file. To audit additional security related events, you create custom event types by deriving from one of the built-in types. For more information, see How To: Use Health Monitoring in ASP.NET 2.0.
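
As an illustration of the custom event approach, the following sketch derives an audit event from the built-in WebAuditEvent type and raises it from application code; the class name, message, and event code offset are hypothetical.

using System.Web.Management;

// Hypothetical custom audit event; custom event codes must start at WebEventCodes.WebExtendedBase.
public class TransferAuditEvent : WebAuditEvent
{
  public TransferAuditEvent(string message, object eventSource, int eventCode)
    : base(message, eventSource, eventCode)
  {
  }
}

// Raising the event, for example after a sensitive operation:
// new TransferAuditEvent("Funds transfer requested", this, WebEventCodes.WebExtendedBase + 1).Raise();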

Does the application log sensitive data?

Review the code to see if sensitive details are logged. Credentials and sensitive user data should not be logged. An application might handle information that requires higher privileges to view than are needed to read the log file. Exposing sensitive data in a log file makes it more likely that the data will be stolen.

Vulnerability/Question Matrix

Table 17 associates the vulnerabilities with the corresponding code review questions. Use this table to determine a set of questions to ask during code review if you are concerned about a specific vulnerability.

Table 17: Vulnerability/Question Matrix

Vulnerability | Questions
SQL Injection
Non-validated input used to generate SQL queries | Is your application susceptible to SQL injection?
  | Does your code use parameterized stored procedures?
  | Does your code use parameters in SQL statements?
  | Does your code attempt to filter input?
Cross-Site Scripting
Unvalidated and untrusted input in the HTML output stream | Does the code echo user input or URL parameters back to a Web page?
  | Does the code persist user input or URL parameters to a data store that could later be displayed on a Web page?
Input/Data Validation
Reliance on client-side validation | Does the code rely on client-side validation?
Use of input file names, URLs, or user names for security decisions | Is the code susceptible to canonicalization attacks?
Application-only filters for malicious input | Does the code validate data from all sources?
  | Does the code centralize its approach?
  | Does the code validate URLs?
  | Does the code use MapPath?
Authentication
Weak passwords | Does the code enforce strong user management policies?
Clear text credentials in configuration files | Does the code enforce strong user management policies?
  | Does the code partition the Web site into restricted and public access areas?
Passing clear text credentials over the network | Does the code use protection="All"?
  | Does the code restrict authentication cookies to HTTPS connections?
  | Does the code use SHA1 for HMAC generation and AES for encryption?
Long sessions | Does the code reduce ticket life time?
Mixing personalization with authentication | Does the code keep personalization cookies separate from authentication cookies?
  | Does the code use distinct cookie names and paths?
Forms Authentication
Failure to protect the forms authentication cookie | Does the code persist forms authentication cookies?
  | Does the code reduce ticket life time?
  | Does the code use protection="All"?
  | Does the code restrict authentication cookies to HTTPS connections?
  | Does the code use SHA1 for HMAC generation and AES for encryption?
  | Does the code keep personalization cookies separate from authentication cookies?
Forms authentication cookies are shared by multiple applications | Does the code use distinct cookie names and paths?
Passwords are stored in a database in clear text | How does the code store passwords in databases?
Authorization
Reliance on a single gatekeeper | How does the code protect access to restricted pages?
  | How does the code protect access to page classes?
  | Does the code use Server.Transfer?
Code Access Security
Improper use of link demands or asserts | Does the code use link demands or assert calls?
Code allows untrusted callers | Does the code use AllowPartiallyTrustedCallersAttribute?
  | Does the code use dangerous permissions?
  | Does the code give dependencies too much trust?
Exception Management
Failing to use structured exception handling | Does the code handle errors and exception conditions?
  | Does the application fail securely in the event of exceptions?
Revealing too much information to the client | Does the application expose sensitive information in user sessions?
Impersonation
Revealing service account credentials to the client | Does the code use hard-coded impersonation credentials?
Code executes with higher privileges than expected | Does the code clean up properly when it uses impersonation?
Sensitive Data
Storing secrets in code | Does the code store secrets?
Storing secrets in clear text | Is sensitive data stored in predictable locations?
Passing sensitive data in clear text over networks | Does the code store secrets?
Data Access
Failing to protect database connection strings | How does the application store database connection strings?
Using over-privileged accounts to access SQL Server | Does the application use SQL authentication?
Cryptography
Using custom cryptography | Does the code use custom cryptographic algorithms?
Using the wrong algorithm or too small a key size | Does the code use the right algorithm with an adequate key size?
  | Does the code generate random numbers for cryptographic purposes?
Failing to secure encryption keys | How does the code manage and store encryption keys?
Using the same key for a prolonged period of time | How does the code manage and store encryption keys?
Unsafe Code
Buffer overrun in unmanaged code or code marked /unsafe | Is the code susceptible to buffer overruns?
Integer overflow in unmanaged code or code marked /unsafe | Is the code susceptible to integer overflows?
Format string problem in unmanaged code or code marked /unsafe | Is the code susceptible to format string problems?
Array out of bounds in unmanaged code or code marked /unsafe | Is the code susceptible to array out of bound errors?
Potentially Dangerous Unmanaged APIs
A potentially dangerous unmanaged API is called improperly | Does the code call potentially dangerous unmanaged APIs?
Auditing and Logging
Lack of logging | Does the application use health monitoring?
Sensitive data revealed in logs | Does the application log sensitive data?

Additional Resources

Feedback

Provide feedback by using either the Wiki or e-mail.

We are particularly interested in feedback regarding the following:

  • Technical issues specific to recommendations
  • Usefulness and usability issues

Technical Support

Technical support for the Microsoft products and technologies referenced in this guidance is provided by Microsoft Support Services. For product support information, please visit the Microsoft Support Web site at https://support.microsoft.com.

Community and Newsgroups

Community support is provided in the forums and newsgroups.

To get the most benefit, find the newsgroup that corresponds to your technology or problem. For example, if you have a problem with ASP.NET security features, you would use the ASP.NET Security forum.

Contributors and Reviewers

  • External Contributors and Reviewers: Akshay Aggarwal; Anil John; Frank Heidt; Jason Schmitt, SPI Dynamics; Keith Brown, Pluralsight; Loren Kornfelder
  • Microsoft Product Group: Don Willits, Eric Jarvi, Randy Miller, Stefan Schackow
  • Microsoft IT Contributors and Reviewers: Shawn Veney
  • Microsoft EEG: Eric Brechner, James Waletzky
  • Microsoft patterns & practices Contributors and Reviewers: Carlos Farre, Jonathan Wanagel
  • Test team: Larry Brader, Microsoft Corporation; Nadupalli Venkata Surya Sateesh, Sivanthapatham Shanmugasundaram, Infosys Technologies Ltd.
  • Edit team: Nelly Delgado, Microsoft Corporation
  • Release Management: Sanjeev Garg, Microsoft Corporation
