Programming for Partial Trust

 

Keith Brown
Microsoft Corporation

February 2006

Applies to:
   Microsoft Visual Studio

Summary: Security expert Keith Brown takes a practical look at what you need to know in order to survive in a partially trusted world. (12 printed pages)

Contents

Introduction
CAS Encounters of the First Kind
Where Can I Store My Stuff?
Isolated Storage
Network Communication
GUI
Stuff You Simply Won't Be Able to Do
Gateways
What's New in Visual Studio 2005
Conclusion

Introduction

This is a practical look at what you need to know in order to survive in a partially trusted world. I'll discuss programming strategies such as how to store state with Isolated Storage, and some tools you can use today to detect potential problems before they occur. I'll also give you a sneak peek at some new features coming in Visual Studio 2005 that will simplify developing for a partially trusted environment.

CAS Encounters of the First Kind

Most developers get a rather rude first introduction to partial trust when they accidentally run their projects from a network share as opposed to their local drive. Because default policy grants restricted permissions to code that's not on the local drive, suddenly things that used to work in the program start throwing security exceptions. Actions that are obviously security related, such as calls to File.Open, now throw SecurityExceptions. But many things that don't appear to have any security sensitivity also begin throwing that same exception, such as the PropertyGrid control in a Windows Forms application. The newsgroups abound with these problems.

Often the solution is simply, "Run the program from your local drive," which works if you're one of the people who accidentally created their project on a network share. But if you're purposely trying to deploy from the network using, say, no-touch deployment, this clearly isn't an option!

There are two practical solutions you can choose in this case. The first is to modify policy to grant your program the permissions it needs to run. In fact, if you have the ability and desire to simply grant your program full trust, then you don't need to keep reading this article! But often you either won't have the clout to get the system administrator to change the .NET security policy being deployed, or you'll find that changing policy is simply too cumbersome to be worth the hassle. In that case, another option is to accept whatever permissions are granted and write a program that works well even though it's granted only a limited set of permissions.

Writing such a program is a challenge at the moment, because sometimes it's difficult to know what permissions your application will really need. For example, if you read the documentation for File.Open, at the bottom you'll find a requirement that you have FileIOPermission in order to call this method. But, at least as of this writing, you'll find no such note on the PropertyGrid class, and yet it requires full trust to even instantiate!

The documentation and tools are improving. For example, in version 2.0 of the framework, there's a new tool called PermCalc that will analyze an assembly and calculate what permissions it requires. But even with these tools, you'll still need to stay alert in order to avoid surprises later on. Prototyping and testing under partial trust conditions are tremendously important, as you don't want to discover that a feature doesn't work under partial trust after you've already invested heavily in that feature.
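If you're writing defensively, you can also probe for a permission yourself before attempting a sensitive operation. Here's a sketch of a helper of my own (not part of the framework) that demands FileIOPermission explicitly and translates the outcome into a simple yes/no answer:

bool CanReadFile(string fullPath) {
    try {
        // Demand walks the call stack; if any caller lacks the
        // permission, it throws a SecurityException.
        new FileIOPermission(FileIOPermissionAccess.Read, fullPath).Demand();
        return true;
    }
    catch (SecurityException) {
        return false;
    }
}

Note that the FileIOPermission constructor itself requires an absolute path, so normalize the path before calling this.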

In this article, I'll take you on a tour of some of the troubles you'll likely have when writing partially trusted applications, and give some advice on how to cope. I will assume the worst: that you're running a program in the Internet or Local Intranet zones, and don't have the option to change security policy. If your program can be written to live in the confines of these policies, you'll be a very happy camper, because it'll just work without any need for policy changes.

Where Can I Store My Stuff?

Very few programs can get along without storing any state at all. Databases, directories, and file systems all serve as repositories for application state, but as you probably guessed, all three are considered sensitive and will be almost completely off limits to a program running with low trust.

If you need to access a database or directory without policy tweaks, you'll need to do it indirectly. While you won't have permission to talk to SQL Server or Active Directory directly, you will have permission to communicate over the network by using a Web service. So consider exposing a Web service to access your data.
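As a sketch of what this indirection looks like, here's a partially trusted client pulling data over plain HTTP instead of opening a SqlConnection. The URL is hypothetical; as you'll see in the Network Communication section, your WebPermission grant under default policy typically only covers the site your code was downloaded from.

string FetchCustomersXml() {
    // allowed under default policy if this is our site of origin
    WebRequest req = WebRequest.Create(
        "http://www.example.com/CustomerService.asmx/GetCustomers");
    using (WebResponse resp = req.GetResponse())
    using (StreamReader r = new StreamReader(resp.GetResponseStream())) {
        return r.ReadToEnd();
    }
}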

If you need to store state on the local machine, first consider what type of state it is. For files that you want the user to be able to see and manipulate directly, like a document or image file, you'll need to create files that the user can easily find. Since you'll be running with partial trust, your best bet is to use common file dialogs to open and save these files. This requires less privilege than opening the files silently, because the user is in charge of what files you are allowed to use. Here's an example that opens and reads a file using OpenFileDialog.

private void button1_Click(object sender,
                           System.EventArgs e) {
    OpenFileDialog dlg = new OpenFileDialog();
    if (DialogResult.OK == dlg.ShowDialog(this)) {
        using (Stream s = dlg.OpenFile()) 
        using (StreamReader r = new StreamReader(s)) {
            textBox1.Text = r.ReadToEnd();
        }
    }
}

Note that I used OpenFileDialog.OpenFile to open the file stream. Also note that I avoided using OpenFileDialog.FileName, because when running with partial trust I'll most likely not be allowed to know anything about the shape of the user's file system, and reading or writing that property would most likely result in a SecurityException being thrown back at me.
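Saving works the same way. Here's the mirror image of the earlier example, a sketch using SaveFileDialog.OpenFile and again steering clear of the FileName property (saveButton here is just a hypothetical button on the form):

private void saveButton_Click(object sender,
                              System.EventArgs e) {
    SaveFileDialog dlg = new SaveFileDialog();
    if (DialogResult.OK == dlg.ShowDialog(this)) {
        using (Stream s = dlg.OpenFile())
        using (StreamWriter w = new StreamWriter(s)) {
            w.Write(textBox1.Text);
        }
    }
}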

Sometimes you need to store application settings that are completely transparent to the user. In this case, popping up a file dialog is inappropriate. Instead, use Isolated Storage to get a FileStream to which you can write your settings and read them later.

Isolated Storage

Isolated Storage is termed "isolated" because each assembly sees what appears to be its own private file system that no other assembly can see (assuming partial trust—with full trust, there's no stopping one assembly from reading another's isolated storage).

There are different levels of isolation, and you need to pick which one is appropriate to your program and stick with it. You are required to isolate at least based on user and assembly. User isolation is required because the framework keeps isolated storage files in a subdirectory of the user's profile. The framework maintains assembly isolation by making sure each assembly has its own subdirectory in the user's profile that it sees as the root of its isolated file store.

You can further isolate based on the application into which you are loaded. This is referred to as "Domain" isolation, which can be a bit misleading. All it means is that your subdirectory is chosen not just based on your assembly name, but also on the assembly name of the managed EXE that loaded your code. This really is only interesting for DLLs that are shared between applications. As an example, at this level of isolation, Math.dll will see a different isolated file system when loaded into Calculator.exe than when it's loaded by SalesForecaster.exe.

One other option you have when choosing an isolation level is whether you want your files to roam if the user has a roaming profile (I like to think of this as isolation by computer).

Depending on the level of trust you have, not all of these isolation levels will be available to you. By default in the LocalIntranet zone, you are only required to isolate based on Assembly and User, which means you are free to choose whether you want the files to roam and whether you want to isolate further based on the host application (Domain isolation). But in the Internet zone, by default you are forced into the most isolated mode: Assembly, User, Domain, and Computer.
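The IsolatedStorageFile class has convenience methods that pick the common scope combinations for you. A quick sketch:

// User | Assembly isolation (non-roaming)
IsolatedStorageFile perAssembly =
    IsolatedStorageFile.GetUserStoreForAssembly();

// User | Assembly | Domain isolation (non-roaming)
IsolatedStorageFile perApplication =
    IsolatedStorageFile.GetUserStoreForDomain();

If you need roaming or some other combination, call IsolatedStorageFile.GetStore and pass the IsolatedStorageScope flags yourself.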

Once you figure out what isolation level makes sense for you, actually using Isolated Storage is really quite easy. If you know how to use a FileStream, you're in luck, because IsolatedStorageFileStream inherits from FileStream. In fact, today's implementation really does just give you a subdirectory in the user's profile to play in, so you're technically just dealing with normal files in a normal file system under the covers. If you already have code that opens normal files, reads or writes them, and closes them, the only code you need to change to use Isolated Storage is the code that actually opens the file. For example, look at this generic code that loads a form's settings from an XML DOM:

void LoadSettings(Stream s) {
    XmlDocument dom = new XmlDocument();
    dom.Load(s);
    string x = dom.SelectSingleNode("/settings/x").InnerText;
    string y = dom.SelectSingleNode("/settings/y").InnerText;
    this.Location = new Point(Int32.Parse(x), Int32.Parse(y));
    // ...
}

This code remains the same regardless of whether you're using a normal file or a file from isolated storage. The only difference physically is where the file resides on the disk, and programmatically how you open that file. Here's the rest of the code you'd need to load your settings from Isolated Storage:

private void Form1_Load(object sender, System.EventArgs e) {
    using (Stream s = ReadFileFromIsoStore("settings.xml")) {
        LoadSettings(s);
    }
}

FileStream ReadFileFromIsoStore(string path) {
    IsolatedStorageFile fileSystem =
        IsolatedStorageFile.GetStore(MY_SCOPE, null, null);
    return new IsolatedStorageFileStream(path,
        FileMode.Open, FileAccess.Read, fileSystem);
}

const IsolatedStorageScope MY_SCOPE =
    IsolatedStorageScope.User |
    IsolatedStorageScope.Assembly |
    IsolatedStorageScope.Domain;

Note that the scope I've chosen here (MY_SCOPE) is the safest, or most isolated level. It doesn't roam, and isolates not only based on User and Assembly, but also on Domain. This will allow me to read and write my settings even when I'm running in the Internet zone.
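For completeness, here's a sketch of the write side, saving the form's position back to the same store when the form closes (MY_SCOPE is the constant defined above):

private void Form1_Closing(object sender,
                           System.ComponentModel.CancelEventArgs e) {
    IsolatedStorageFile fileSystem =
        IsolatedStorageFile.GetStore(MY_SCOPE, null, null);
    using (Stream s = new IsolatedStorageFileStream("settings.xml",
               FileMode.Create, FileAccess.Write, fileSystem))
    using (StreamWriter w = new StreamWriter(s)) {
        w.Write("<settings><x>" + Location.X +
                "</x><y>" + Location.Y + "</y></settings>");
    }
}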

One thing that changes drastically between the LocalIntranet and Internet zones is the quota of bytes you are allowed to write to your isolated store. This may bite you in the future if you're not careful, because these quotas are not currently enforced on version 1.1 of the .NET Framework. The default quota for the LocalIntranet zone is essentially unlimited, but the quota for the Internet zone only allows an assembly to write 10 KB in total to Isolated Storage. So keep an eye on the size of the data you're writing.
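You can ask the store how you're doing against the quota. Be defensive, though: the MaximumSize property throws if no quota has been defined for your grant. A sketch:

IsolatedStorageFile store =
    IsolatedStorageFile.GetUserStoreForDomain();
try {
    ulong bytesFree = store.MaximumSize - store.CurrentSize;
    // check that your write will fit before attempting it
}
catch (InvalidOperationException) {
    // no quota defined for this grant;
    // treat the store as effectively unlimited
}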

Another gotcha about Isolated Storage is that, unless your assembly has a strong name, it will be identified based on its URL evidence. If the URL from where you download your assembly changes, so will the isolated storage directory. You might just find that all of your application's settings disappear! And if you decide to use a strong name, be aware that this name includes a version number, so if you change the version number, the same thing happens. If you want to allow users to bring their old settings forward, you need to plan ahead!

Network Communication

When you're running with partial trust, access to the network will be severely limited. You won't be allowed to access remote databases, because that requires a database permission that's not normally granted through default policy. You won't be able to use .NET Remoting or COM+, both of which require full trust to use.

Default policy includes a special code group called NetCodeGroup that grants limited access to the network in the LocalIntranet and Internet zones. This code group generates a dynamic WebPermission grant based on the evidence for the assembly. Let's say I download a smart client application from my Web site at http://www.pluralsight.com. In this case, NetCodeGroup calculates a WebPermission that allows the app to make HTTP or HTTPS requests back to pluralsight.com. This means my code can use an HttpWebRequest in the raw, or a Web service that uses either of these two protocols. Note that if my smart client were instead downloaded using HTTPS, my code would be restricted to making HTTPS requests only. This restriction helps satisfy the user's expectation of a base level of security, given that HTTPS was used to download the code in the first place.
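To make this concrete, here's a sketch of what a request outside that dynamic grant looks like (the URL is hypothetical):

try {
    // a request to an unrelated site: the dynamic WebPermission
    // grant doesn't cover it, so a SecurityException results
    // (the demand can fire as early as WebRequest.Create)
    WebRequest req = WebRequest.Create(
        "http://www.example.com/quotes.aspx");
    req.GetResponse();
}
catch (SecurityException) {
    // degrade gracefully, e.g., disable the cross-site feature
}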

One example of a gotcha that you might run into is that the Proxy property on HttpWebRequest demands that you are granted unrestricted WebPermission. Like the OpenFileDialog's FileName property, this is an example of a feature of a class that requires different permissions than the class itself.

GUI

One thing that often surprises smart client developers is that many GUI components require certain permissions to function properly under partial trust. For example, say you build a PropertyGrid into your smart client application. Things work great while you're developing the app, but when you deploy it in a partial trust environment, that PropertyGrid starts throwing SecurityExceptions!

Internally, the grid uses reflection to figure out the shape of the class it's representing, and to get and set properties on the class. But reflection is not restricted as long as you're only reflecting against public members, and the PropertyGrid only displays public members, so apparently that's not what's causing those exceptions. If you run PERMVIEW.EXE (a tool that ships with the .NET Framework SDK) on System.Windows.Forms.dll, you'll find a link demand on the PropertyGrid class that requires anyone linking to it to have unrestricted permissions. In other words, partially trusted callers are simply not allowed to use the PropertyGrid. Whenever you see a link demand like this, that's a warning that the code it's protecting probably could be used by partially trusted code to elevate permissions in some way.

As of this writing, the documentation for PropertyGrid doesn't mention that it requires any particular permissions, although I've submitted a request to get this omission corrected. The point is that the documentation probably isn't going to warn you 100 percent of the time, and even if it did, with IntelliSense we're relying less and less on documentation anyway! Fortunately, the next version of Visual Studio addresses this, as I'll demonstrate later in this article.

Ultimately, testing under partial trust is the best way to discover these sorts of problems early. Prototype under partial trust and be sure to test early, test often!

Stuff You Simply Won't Be Able to Do

Not all of the .NET Framework supports partial trust. You can tell when a framework assembly doesn't support partial trust because it won't have the AllowPartiallyTrustedCallers attribute (APTCA) on it. A while back I wrote a program called FindAPTC, which is still available at www.pluralsight.com/samples.aspx. This program enumerates all DLLs in the current directory and tells you which ones have APTCA and which ones don't. Running it in the v1.1.4322 subdirectory under the .NET Framework gives the output shown below. Some of the more notable subsystems that don't support partial trust are .NET Remoting, System.EnterpriseServices, System.DirectoryServices, System.Management, and System.Messaging. If you try to use any of these assemblies from partially trusted code, you'll get slapped with a link demand for full trust.

FindAPTC output for version 1.1 of the .NET Framework

Allows partially trusted callers:
Accessibility.dll
IEExecRemote.dll
Microsoft.JScript.dll
Microsoft.VisualBasic.dll
Microsoft.Vsa.dll
mscorlib.dll
System.Data.dll
System.dll
System.Drawing.dll
System.Web.dll
System.Web.Mobile.dll
System.Web.RegularExpressions.dll
System.Web.Services.dll
System.Windows.Forms.dll
System.XML.dll

Does NOT allow partially trusted callers:
cscompmgd.dll
CustomMarshalers.dll
envdte.dll
IEHost.dll
IIEHost.dll
ISymWrapper.dll
Microsoft.VisualBasic.Compatibility.Data.dll
Microsoft.VisualBasic.Compatibility.dll
Microsoft.VisualBasic.Vsa.dll
Microsoft.VisualC.Dll
Microsoft.Vsa.Vb.CodeDOMProcessor.dll
Microsoft_VsaVb.dll
mscorcfg.dll
office.dll
RegCode.dll
System.Configuration.Install.dll
System.Data.OracleClient.dll
System.Design.dll
System.DirectoryServices.dll
System.Drawing.Design.dll
System.EnterpriseServices.dll
System.Management.dll
System.Messaging.dll
System.Runtime.Remoting.dll
System.Runtime.Serialization.Formatters.Soap.dll
System.Security.dll
System.ServiceProcess.dll
vjscor.dll
VJSharpCodeProvider.DLL
vjslib.dll
vjslibcw.dll
vjswfc.dll
VJSWfcBrowserStubLib.dll
vjswfccw.dll
vjswfchtml.dll

Even in the assemblies that do support partial trust, there are exceptions. Some classes are annotated with FullTrust link demands to restrict their use to fully trusted code; a notable example is the PropertyGrid mentioned earlier. In other cases, you'll see demands for permissions that only fully trusted code should have, often UnmanagedCode. For example, trying to get the native window handle from a Windows Forms control will fail under partial trust for this very reason. Often, if what you're doing feels like it's skirting around the .NET Framework (like accessing window handles directly), it probably is, and you likely won't be allowed to do it under partial trust.

Although it won't report all possible partial trust failures, here's a quick way to find classes or members of classes with link demands for full trust or particularly high-privileged permissions. Try running PERMVIEW with the /decl option on all of the framework assemblies you plan on using, looking in particular for "LinktimeDemand" entries. Running this on System.Windows.Forms.dll would have shown you right away that the PropertyGrid wouldn't be friendly to partially trusted code (I've omitted namespaces for brevity).

Class PropertyGrid LinktimeDemand permission set:
<PermissionSet class="PermissionSet"
               version="1"
               Unrestricted="true"/>

This means that at link time, the calling assembly must be granted the permission set listed here, and this is what the FullTrust grant looks like.
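For reference, here's a sketch of how a link demand like that is declared in source; this attribute form is what produces the permission set PERMVIEW reports (the class name is hypothetical):

[PermissionSet(SecurityAction.LinkDemand, Name = "FullTrust")]
public class FullTrustOnlyControl {
    // any assembly linking to this class must be fully trusted;
    // the check happens once per caller at JIT time, not per call
}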

Gateways

In certain cases it may make sense for a partially trusted application to be granted special permissions. A simple stock ticker application that is deployed from one Web site might need to retrieve its stock quotes from an entirely different Web site. If that's acceptable, an administrator can adjust policy to grant the stock ticker application the WebPermission it needs to connect to the stock quote provider Web site. This is an example of a fine-grained adjustment to policy.

But what if the functionality you need can't be achieved with a fine-grained policy adjustment? What if you need to use System.EnterpriseServices to call a COM+ component on a remote machine? This is a really common requirement in many enterprises! What are you to do, require the administrator to grant the application full trust? That's certainly one approach, and it's certainly the easiest approach. But if you prefer to run with partial trust, you might consider designing what I call a gateway.

There are many gateways in the .NET Framework already. Think, for example, how the FileStream class must work: it's written in managed code, it's always going to be running in a fully-trusted assembly (the .NET Framework itself is always fully trusted). But it may be called by partially trusted callers, and the only requirement is that they have the correct FileIOPermission for the file they are trying to access.

The FileStream class acts as a gateway by demanding a fine-grained permission (FileIOPermission in this case), then asserting a coarser-grained permission (SecurityPermission for UnmanagedCode in this case) and opening the file by calling out to the operating system on behalf of the partially trusted caller. The assertion is required because permission demands normally walk up the stack and require every caller in the chain to have the permission being demanded. Since the FileStream class acts as a gateway, it asserts its own permission to call unmanaged code so that its callers don't need that permission.
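Here's what that demand-then-assert pattern looks like in a sketch of a gateway method of your own (the method and its contents are hypothetical):

public void ReadNativeResource(string fullPath) {
    // 1. Demand a fine-grained permission from all callers.
    new FileIOPermission(FileIOPermissionAccess.Read, fullPath).Demand();

    // 2. Assert the coarse-grained permission this trusted code needs,
    //    stopping the stack walk for subsequent demands here.
    new SecurityPermission(SecurityPermissionFlag.UnmanagedCode).Assert();
    try {
        // 3. Call out to the operating system on the caller's behalf.
    }
    finally {
        CodeAccessPermission.RevertAssert();
    }
}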

So what does all this have to do with you? You can write your own gateway classes if you want to. Figure 1 shows the architecture for a COM+ stock ticker gateway.


Figure 1. A COM+ gateway

The managed wrapper around the COM+ component is packaged in a fully-trusted assembly (this is the sort of thing that's typically installed in the Global Assembly Cache). A class called StockTickerPermission derives from CodeAccessPermission and gives your gateway a custom fine-grained permission to demand before asserting the right to call to unmanaged code. In this way, the system administrator can gate access to the COM+ component by means of policy by either granting StockTickerPermission or not, as opposed to resorting to a full-trust grant. Of course, writing such a gateway is a big responsibility, and because it must be fully trusted, it blows your chances of a completely "no-touch" deployment. But a gateway class is an option that you should know about, because it may get you out of a bind someday.
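To give you a feel for what's involved, here's a skeleton of the hypothetical StockTickerPermission from Figure 1. A real implementation also needs the XML round-trip and careful thought about state; this one is stateless for brevity:

[Serializable]
public sealed class StockTickerPermission : CodeAccessPermission {
    public override IPermission Copy() {
        return new StockTickerPermission();
    }
    public override bool IsSubsetOf(IPermission target) {
        // stateless permission: any instance is a subset of any other
        return target is StockTickerPermission;
    }
    public override IPermission Intersect(IPermission target) {
        return (target is StockTickerPermission) ? Copy() : null;
    }
    public override SecurityElement ToXml() {
        // elided: serialize to the standard <IPermission> element
        return null;
    }
    public override void FromXml(SecurityElement e) {
        // elided: deserialize from the standard <IPermission> element
    }
}

The gateway's methods would then demand this permission before asserting the right to call unmanaged code, just as FileStream does with FileIOPermission.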

What's New in Visual Studio 2005

The new version of Visual Studio has some features designed to make it easier to build partially trusted applications. The first is called IntelliSense in Zone: you select a target zone for your code, and from then on the IntelliSense lists gray out methods, properties, and classes that aren't available in that zone. So if you plan to deploy in the Internet zone, this helps you avoid choosing features that would simply fail there. Figure 2 shows what this looks like.


Figure 2. IntelliSense in Zone

Another new feature is a tool called PERMCALC.EXE, which is also integrated into Visual Studio. This tool looks at an assembly, examines the code, and determines which permissions the assembly will require in order to run. Figure 3 shows how this integrates into the Visual Studio project property page. The tool even tries to incorporate imperative permission requests such as assertions and demands into its calculation.


Figure 3. PERMCALC

Conclusion

Building applications that run under partial trust takes patience and careful planning, but the rewards are great. Deployment is much easier: you can use no-touch deployment and often avoid touching security policy on the user's machine at all, because your code naturally operates within the default permission set it's been granted (typically the Local Intranet zone for enterprise applications). If your application requires no changes to security policy, it has a much greater chance of actually being used within an enterprise.

The real keys to success are awareness, prototyping, and rigorous testing under partial trust conditions. I'd recommend spending some time familiarizing yourself with the various permissions that the .NET Framework defines. Then you'll have a much better intuition for what types of operations are considered sensitive in the first place. An easy way to get started is to spend some time with the .NET Framework Config tool. Create a permission set of your own and use the editor to add each different type of permission, one by one, so you can see the different options that each permission exposes. It's an eye opening experience. And keep an eye out for a section in the documentation (usually down at the bottom of the page) entitled "Requirements, .NET Framework Security." When you see this, you know that feature requires a permission to operate. Make sure that's a permission you'll have at runtime!

© Microsoft Corporation. All rights reserved.