Security Briefs

Beware of Fully Trusted Code

Keith Brown

Contents

Does the CLR Verify All Code Before Running It?
Are Private Methods Really Private?
Does IStackWalk.Deny Have Teeth?
Interlude: AppDomain Security Policy
Do AppDomains Have Teeth?
Does Fully Trusted Code Have Any Limitations?
Conclusion

The vast majority of managed applications run with full trust, but based on my experience teaching .NET security to developers with a broad range of experience, most really don't understand the implications of fully trusted code. So I've pulled together a number of examples where fully trusted code can skirt around common language runtime (CLR) security features, starting each with a question that seems to have an obvious answer.

Does the CLR Verify All Code Before Running It?

Lots of security features in the CLR depend on type safety. As a simple example, consider private members of a class. The CLR restricts access to them at run time, but it can only do so if the caller uses legitimate means to try to access those members. Consider the following unmanaged C++ class definition:

class DiskQuota {
private:
    long MinBytes;
    long MaxBytes;
};

I've chosen unmanaged C++ here because it shows the types of fraud that can be perpetrated when type systems are violated:

void EvilCode(DiskQuota* pdq) {
    // use pointer arithmetic to index
    // into the object wherever we like!
    ((long*)pdq)[1] = LONG_MAX;
}

By using pointers and unsafe casts, the attacker is able to access and even modify private members of a class. There is no runtime protection against this in unmanaged code. However, the CLR was designed to detect this sort of type system abuse during just-in-time (JIT) compilation (unsafe pointer arithmetic in this particular case) and raise an exception. A buffer overflow is another example of a type system violation, one that can lead to nasty security holes. The CLR closes all of these holes by ensuring the type safety of all code, right?

Yes, unless you shoot yourself in the foot by granting the assembly too much trust. Any assembly can mark itself with a flag, technically called a permission request, which tells the CLR to skip verification. Here's an example in C# that does just this:

// evilcode.cs
using System.Security.Permissions;

[assembly: SecurityPermission(
    SecurityAction.RequestMinimum,
    Flags=SecurityPermissionFlag.SkipVerification)]

// your evil code goes here

You may even end up with this permission request on your assemblies without knowing it. If you use the /unsafe switch to compile C# code that uses pointers, the compiler silently adds this permission request. If you build an assembly in managed C++, the same thing happens by default, although this behavior can be changed with new compiler flags in the next version of Visual C++, part of the release code-named "Whidbey." When the CLR successfully loads an assembly with a SkipVerification permission request, it will skip the type safety verification that normally happens during JIT compilation. The Framework Class Library uses this feature; even MSCORLIB.DLL, the core .NET Framework assembly, makes this permission request. When making an omelet, some eggs must be broken, or so the saying goes.

What you have to decide is which assemblies you trust with the power to ignore the type system and which you don't. Your preference is expressed via .NET security policy. Any assemblies that run with FullTrust, which by default includes the .NET Framework itself as well as assemblies installed on the local hard drive, will be allowed to skip verification upon request. Assemblies downloaded from the network will be denied this request by the default policy.
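
If you're curious exactly what policy grants a given assembly, you can ask the security system directly. Here's a minimal sketch; it takes the assembly path as a command-line argument, and since it loads the assembly to inspect its evidence, point it only at files you already trust:

// showgrant.cs - dumps the permission set that policy grants an assembly
using System;
using System.Reflection;
using System.Security;

class ShowGrant {
    static void Main(string[] args) {
        // load the assembly so we can get at its evidence
        Assembly a = Assembly.LoadFrom(args[0]);

        // ask policy to resolve that evidence into a permission set
        PermissionSet grant = SecurityManager.ResolvePolicy(a.Evidence);

        // an unrestricted set in the output means the assembly is fully trusted
        Console.WriteLine(grant.ToXml());
    }
}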

I don't know about you, but knowing this makes me want to run every assembly I can with partial trust.

Are Private Methods Really Private?

In my last example, I showed how fully trusted code could skip verification and use pointer arithmetic to read or write private member variables of any class. Well, it turns out that you don't need pointers to do this sort of thing. Reflection is another way to get at private state, and it works on private methods as well as private fields (see Figure 1).

Figure 1 Using Reflection to Access Private Members

using System;
using System.Reflection;

class EvilCodeWithFullTrust {
    static void CallPrivateMethod(object o, string methodName) {
        Type t = o.GetType();
        MethodInfo mi = t.GetMethod(methodName,
            BindingFlags.NonPublic | BindingFlags.Instance);
        mi.Invoke(o, null);
    }
    static void Main() {
        CallPrivateMethod(new NuclearReactor(), "Meltdown");
    }
}

Surely the common language runtime will protect your private methods by throwing an exception when another class tries to call them, even via reflection, right? Once again, the only way that you can protect your classes against this sort of funny business is via .NET security policy. The important permission in this case is ReflectionPermission. Fully trusted code has this permission in the dose required to reflect on private members.
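
To put a name on that dose: the flag in question is ReflectionPermissionFlag.MemberAccess. Here's a minimal sketch of a demand that succeeds under full trust but throws a SecurityException under most partial-trust permission sets:

using System.Security.Permissions;

class ReflectionDose {
    static void DemandPrivateReflection() {
        // MemberAccess is what allows invoking non-public members via reflection
        new ReflectionPermission(ReflectionPermissionFlag.MemberAccess).Demand();
    }
}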

Think about what this means. Say you ship an assembly that includes sensitive private methods that are for your internal use only. What's to keep your client from calling those methods directly, perhaps passing malformed input that could cause your methods to malfunction and behave dangerously? Nothing in this case, because you have no control over your client's security policy. But wait! Perhaps you've heard of a feature called a strong name LinkDemand. The Framework Class Library uses it, so it's got to be good. Here it is:

public class NuclearReactor {
    [StrongNameIdentityPermission(
        SecurityAction.LinkDemand,
        PublicKey="002400000...")]
    private void Meltdown() {
        // calling assembly must have specified public key!
    }
}

Now the previous trick using reflection won't work. Nobody can call Meltdown except assemblies that have the specified public key. A link demand directs the CLR to check the assembly linking to the method at JIT time to ensure that it can satisfy the demand. By using a StrongNameIdentityPermission, I am requiring that the caller has a particular public key.

Unfortunately, fully trusted code can get around this permission demand as well, in more than one way. The simplest option is to turn off code access security (CAS) entirely with the following command-line option:

caspol -s off

You have to be an administrator to do this, thank goodness (and even if you are one, never do it!). In fact, there are so few good reasons for this switch to exist that the Whidbey team is considering removing it from the next release. But even a non-administrator can write the following program to effectively do the same thing if the application runs with a high enough level of trust:

using System.Security;

class EvilCodeWithFullTrust {
    static void Main() {
        SecurityManager.SecurityEnabled = false;
        // now call Meltdown via reflection!
    }
}

These two mechanisms for turning off CAS have the same effect, although the former turns it off for the entire machine while the latter affects only a single process (and even then there are no guarantees that it will work reliably; see the documentation for more information). The result is that all demands succeed, even link demands like the one shown earlier. The second example demonstrates yet another very dangerous code access permission, namely a SecurityPermission called ControlPolicy. Any code that's been granted this permission can turn off code access security for the process with a single line of code. Fully trusted code has all permissions, including this one.

But maybe you don't want to stoop to this level. It turns out that you don't have to. Just delay-sign your assembly with the public key being demanded and turn off strong name verification for that key. Microsoft uses the strong name LinkDemand in core framework libraries like MSCORLIB.DLL, but it's never stopped me from calling one of their private methods in my investigations of the CLR.
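
Here's roughly what that trick looks like. This is a hedged sketch; pubkey.snk is a hypothetical file holding only the public key being demanded (the Strong Name tool can extract one from any signed assembly via sn -e):

// evilcode.cs, delay-signed with someone else's public key
using System.Reflection;

[assembly: AssemblyDelaySign(true)]
[assembly: AssemblyKeyFile("pubkey.snk")]

class EvilCodeWithDelaySign {
    // the reflection trick from Figure 1 goes here
}

// After compiling, register the assembly for verification skipping with the
// Strong Name tool: sn -Vr evilcode.dll (this requires administrative rights).
// From then on the loader treats the assembly as though it were validly signed
// with the demanded key, and the LinkDemand on Meltdown is satisfied.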

I hope I'm making it clear that it's impossible to give someone an assembly and restrict them from calling certain methods or accessing certain variables. Unless, of course, you control the machine on which they are running your code, in which case you can force them to run in a partially trusted environment. Lots of security guarantees disappear in a fully trusted environment.

Does IStackWalk.Deny Have Teeth?

It's a little-known fact that you can choke down the permissions on your call stack by using a deny modifier:

using System.Security;
using System.Security.Permissions;

class WellMeaningCode {
    public void CallPlugIn(EvilCode plugin) {
        // put a CAS modifier on the stack that denies all file system access
        new FileIOPermission(PermissionState.Unrestricted).Deny();
        plugin.DoWork();
        CodeAccessPermission.RevertDeny();
    }
}

The idea here is that even though your WellMeaningCode may be fully trusted, you may not trust the plug-in or some other third-party extension, so you reduce your permissions temporarily before making calls to that plug-in. But if the plug-in is granted full trust by security policy, this will only stop a terribly naive author of evil plug-in code. Here's an implementation that gets around the deny:

class EvilCodeWithFullTrust {
    void DoWork() {
        new PermissionSet(PermissionState.Unrestricted).Assert();
        // happily access the file system
        // regardless of the caller's deny!
    }
}

Fully trusted code is very slippery! Assert is another stack modifier that effectively cancels out the Deny. But what code is allowed to Assert? Any code that's been granted a SecurityPermission called Assertion. And, of course, that includes all fully trusted code. So trying to use a Deny (or a PermitOnly for that matter) in assembly A to constrain assembly B is pointless if B has been granted FullTrust. To give Deny some teeth, run it with partial trust.

Interlude: AppDomain Security Policy

If you are familiar with the .NET Framework Configuration Tool, you have probably seen three security policy levels: Enterprise, Machine, and User. But there's a fourth level that's quite useful for dynamically sandboxing code like the plug-in that I mentioned earlier. I don't have enough space in this column to drill into this topic in much depth, but I wanted to point it out to you in case you've never seen it before.

The fourth level is associated with the AppDomain, and by default it adds no restrictions to the set of permissions granted by the Enterprise, Machine, and User policy levels. But if you want to sandbox a piece of code, say a plug-in that might be installed on the local machine, it's possible to do this by creating a second AppDomain and calling AppDomain.SetAppDomainPolicy. The policy itself can be loaded from a file via SecurityManager.LoadPolicyLevelFromFile. If you think you might want to use this technique, the best reference I've seen on the topic so far is in a book written by Brian LaMacchia, Sebastian Lange, and some other folks at Microsoft called .NET Framework Security (Addison-Wesley, 2002). Pay special attention to Chapter 10 on hosting.
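
To give you a taste of the mechanics, here's a minimal sketch; the policy file and plug-in names are hypothetical:

// sandboxhost.cs - a minimal sketch of AppDomain-level policy
using System;
using System.Security;
using System.Security.Policy;

class SandboxHost {
    static void Main() {
        // load an AppDomain-level policy that you've authored into a file
        PolicyLevel level = SecurityManager.LoadPolicyLevelFromFile(
            "sandbox.config", PolicyLevelType.AppDomain);

        // create a second AppDomain and clamp it with that policy
        // (do this before loading any assemblies into the new domain)
        AppDomain sandbox = AppDomain.CreateDomain("Plug-in Sandbox");
        sandbox.SetAppDomainPolicy(level);

        // whatever loads here is granted the intersection of the Enterprise,
        // Machine, and User levels and this AppDomain level
        // sandbox.CreateInstanceAndUnwrap("PlugIn", "PlugIn.Main");
    }
}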

Do AppDomains Have Teeth?

It wouldn't surprise me if the entire notion of an AppDomain was necessitated by ASP.NET, which uses them to isolate Web applications within a process. If you're not familiar with the AppDomain, it is conceptually similar to a process, but much lighter weight. When a process hosts the CLR, it always has a default AppDomain where all managed types are loaded. But by creating multiple AppDomains, a process can play host to several different applications, with a high degree of confidence that types in one AppDomain cannot wreak havoc on types in any other AppDomains in the process. Sharing object references between AppDomains requires the use of remoting, and it doesn't happen automatically. If you and I are in separate AppDomains and I want to touch your objects, you've got to pass a reference to me. I can't poke around without being invited. That's the theory at least.
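
To make the theory concrete, here's a minimal sketch of what being invited looks like: CreateInstanceAndUnwrap hands back a remoting proxy because the type derives from MarshalByRefObject, and without a step like this, code in one domain has no reference into the other.

// greeter.cs - the legitimate way to get a reference across the boundary
using System;

public class Greeter : MarshalByRefObject {
    public string Hello() {
        // this executes in whichever domain the real object lives in
        return "hello from " + AppDomain.CurrentDomain.FriendlyName;
    }
}

class Demo {
    static void Main() {
        AppDomain other = AppDomain.CreateDomain("Other Domain");
        // ask the other domain to create the object and hand back a proxy
        Greeter g = (Greeter)other.CreateInstanceAndUnwrap(
            typeof(Greeter).Assembly.FullName, typeof(Greeter).FullName);
        Console.WriteLine(g.Hello()); // prints "hello from Other Domain"
    }
}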

In practice, assemblies that run with full trust within an AppDomain are really not constrained by the AppDomain boundary. Fully trusted assemblies don't have to be type safe. Heck, if I wanted to, I could simply take a pointer and scrape through the entire virtual address space of the process looking for interesting data.

Another thing to keep in mind is that fully trusted assemblies are allowed to call to native code via a SecurityPermission called UnmanagedCode. This makes dipping into someone else's AppDomain really easy. Say you and I are running in separate AppDomains in an ASP.NET worker process. If I want to inject my own code into your AppDomain, it's pretty easy if I'm fully trusted. All I need is a reference to your AppDomain and I can use something as simple as AppDomain.DoCallBack to send some attack code into your domain. The code in Figure 2 shows a somewhat contrived example where I cheat a bit by handing the attacker a reference to the victim's AppDomain.

Figure 2 Executing Code in Another AppDomain

// YourCode.cs --> YourCode.dll
using System;

public class Init : MarshalByRefObject {
    // entry point for victim's AppDomain
    public Init() {
        Console.WriteLine("YourCode is running in {0}",
            AppDomain.CurrentDomain.FriendlyName);
    }
}

public class NuclearReactor {
    // some function we don't want an attacker to call
    private static void Meltdown() {
        Console.WriteLine("Reactor meltdown!");
    }
}

public class SecretData {
    private static string TheData = "555-55-5555";
}

// MyCode.cs --> MyCode.dll
using System;
using System.Reflection;

public class Init : MarshalByRefObject {
    // entry point for attacker's AppDomain
    public Init(AppDomain target) {
        Console.WriteLine("MyCode is running in {0}",
            AppDomain.CurrentDomain.FriendlyName);
        Console.WriteLine("Injecting code into {0}...",
            target.FriendlyName);
        // here's how we inject the code
        target.DoCallBack(
            new CrossAppDomainDelegate(InjectedAttackCode));
    }
    public static void InjectedAttackCode() {
        Console.WriteLine("InjectedAttackCode in {0}",
            AppDomain.CurrentDomain.FriendlyName);

        // time to melt down the nuclear reactor using reflection
        Type t = Type.GetType("NuclearReactor, YourCode");
        MethodInfo mi = t.GetMethod("Meltdown",
            BindingFlags.Static | BindingFlags.NonPublic);
        mi.Invoke(null, null);

        // steal secret data from the victim
        t = Type.GetType("SecretData, YourCode");
        FieldInfo fi = t.GetField("TheData",
            BindingFlags.Static | BindingFlags.NonPublic);
        Console.WriteLine("Found a secret: {0}", fi.GetValue(null));
    }
}

// host.cs --> host.exe
using System;
using System.Reflection;

class Host {
    static void Main() {
        AppDomain victim = AppDomain.CreateDomain("Victim's Domain");
        AppDomain attacker = AppDomain.CreateDomain("Attacker's Domain");
        victim.CreateInstance("YourCode", "Init");
        attacker.CreateInstance("MyCode", "Init", false,
            BindingFlags.Public | BindingFlags.Instance, null,
            new object[]{victim}, null, null, null);
    }
}

How can one AppDomain pilfer a reference to another AppDomain in order to play these sorts of games? This can be accomplished by calling out to unmanaged code. The CLR provides an unmanaged hosting API, and by writing a DLL in C++ and calling out to it from managed code, you can obtain references to all AppDomains in the process.

What it comes down to is this: AppDomains can provide no security when all the code is fully trusted. You need a partial-trust environment to give AppDomains any real teeth. Partial trust requires verification of managed code and restricts access to native code. By default, ASP.NET server applications run with full trust because all the code is installed right there on the server. In fact, prior to version 1.1 of the .NET Framework, it wasn't even possible to run ASP.NET applications with anything less than full trust. Nowadays you can easily force Web applications on the server to run with partial trust via machine.config:

<configuration>
  <system.web>
    <trust level='Medium'/>
  </system.web>
</configuration>

But frankly I doubt you'll find many ISPs doing this because not many developers have the knowledge and the patience necessary to write partially trusted Web apps. I'm hopeful this will change though. For now, if you decide to host your ASP.NET app in a shared environment, you should ask lots of questions about how your Web app will be isolated. You should prefer Windows Server™ 2003 to Windows® 2000 because ASP.NET only supports a single worker process in the latter case. Be sure the ISP gives you your own private worker process with its own dedicated Windows user account. Some ISPs running Windows 2000 will isolate Web apps from one another using virtual machines. Just be sure to ask so you know what you're getting into. Of course, your best bet is to host your app on your own dedicated Web server.

Does Fully Trusted Code Have Any Limitations?

After all this doom and gloom you might be getting the impression that there's nothing that fully trusted code can't do. But remember, the CLR still runs on top of an operating system that should have its own security constraints. When I talk about fully trusted code, what I am really saying is that the code can do anything the user running it is allowed to do. Say Bob runs a managed application that's installed locally on his machine. As far as the CLR is concerned, by default that application runs with full trust. But if Bob's Windows logon prevents him from accessing a particular file, the managed application operates under the same constraint. Trust in the CLR is a sliding scale whose upper bound, full trust, is the level of privilege of the user running the application. For server applications like ASP.NET, this upper bound depends on the security account chosen for the server process, which is one reason that I prefer Windows Server 2003; it allows me to run each ASP.NET application in its own process, with any level of privilege I want.
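
Here's a contrived sketch of that boundary, assuming secrets.txt carries an ACL that denies Bob read access:

// limits.cs - full trust in the CLR doesn't trump the operating system
using System;
using System.IO;

class Limits {
    static void Main() {
        try {
            using (FileStream fs = File.OpenRead(@"c:\secrets.txt")) {
                // CAS is satisfied: a fully trusted app has FileIOPermission
            }
        }
        catch (UnauthorizedAccessException) {
            // but the Windows ACL still applies to Bob's access token
            Console.WriteLine("Denied by the operating system, not the CLR");
        }
    }
}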

Conclusion

The goal of this column was to demonstrate that many of the security features of the CLR can only be enforced in a partial-trust environment. While the notion of full trust might seem obvious to some, I've reviewed plenty of designs that make assumptions about CLR security that simply don't fly in a full trust scenario. If you compare the CLR's built-in security to Windows built-in security, running with full trust is akin to running as SYSTEM. Fully trusted code can get around all of the CLR's built-in security features. That's why it's called fully trusted—it must be trusted to do the right thing. SYSTEM can get around any security constraint in Windows, which is why code running as SYSTEM must be trusted.

My challenge to you is to learn to write code that runs in a partial-trust environment. Ivan Medvedev has some good advice in his article "Writing managed code for semi-trusted environment." If what I saw at the PDC is any indication, this will be an important skill for the next version of Windows, code-named "Longhorn," where even locally installed code may not run with full trust. Remember the principle of least privilege, and design and code with it in mind.

Send your questions or comments for Keith at briefs@microsoft.com.

Keith Brown is an independent consultant specializing in application security. He authored the book Programming Windows Security (Addison-Wesley, 2000) and is writing a new security book for .NET programmers. Read the new book online at https://www.develop.com/kbrown.