DirectX 9.0

Introducing the New Managed Direct3D Graphics API in the .NET Framework

Yahya H. Mirza and Henry da Costa

This article assumes you're familiar with C++ and DirectX


SUMMARY

DirectX 9.0 is the latest evolution of the Microsoft 3D graphics technology for Windows. Direct3D, a major component of the DirectX Graphics subsystem, has evolved so rapidly in the last few years that the underlying programming paradigm has changed quite a bit from its origin. This article introduces the fundamental concepts of the unmanaged Direct3D architecture and illustrates how the managed Direct3D layer abstracts the unmanaged layer. The authors also describe the Geometry, Texture, Device, and other classes, using code from the DirectX SDK samples.

Contents

The Direct3D Architecture
The Direct3D Pipeline
Unmanaged Direct3D
Direct3D Extensions
Managed Direct3D
The Device Class
Graphics State Classes
Geometry-related Classes
Texture Classes
Managed D3DX Library
The Managed DirectX 9.0 Samples Framework
The GraphicsSample Class
Initialization and Cleanup
Frame Updating and Rendering
Conclusion

A fundamental challenge of a hardware-accelerated graphics API is to enable application developers to take advantage of the rapid technology advances occurring in the 3D hardware space while preserving a certain amount of compatibility and uniformity across graphics hardware solutions. One way to do this is to define a standard by committee and then have each vendor support that standard. Graphics hardware vendors can innovate and create proprietary extensions through an agreed-upon extension mechanism. Over time, a hardware vendor can lobby the standards body to accept its proprietary extension as part of the standard. OpenGL version 1.1 is an example of this approach to hardware interoperability. One limitation is that it can take a long time to get vendor-specific innovations incorporated into a multi-vendor standard, at the risk of the standard becoming obsolete in the meantime.

In DirectX® 9.0, the features of DirectDraw® and Direct3D® are combined into a single API called DirectX Graphics. The Direct3D portion of this component will be our primary focus in this article. In Microsoft® Direct3D, the programmer has two options: the fixed-function pipeline or the programmable pipeline. The fixed-function pipeline relies on algorithms standardized by Direct3D, exposed through a fixed set of enumeration values, much as OpenGL does. This implies that the fixed-function pipelines of both Direct3D and OpenGL behave like internal switch statements. Some of the cases that correspond to the enumeration values may be hardware accelerated, based on the capabilities of the graphics card on which the runtime relies. In Direct3D, when using the fixed-function pipeline, the programmer first checks with the runtime to see whether the graphics card supports a particular capability.

Because some graphics cards do not support all of the features exposed through Direct3D, a mechanism is provided in Direct3D to probe the graphics hardware. If a particular graphics capability is not supported by the hardware, the check will fail, allowing the programmer to look for a different hardware-accelerated algorithm. The key point to remember is that the Direct3D fixed-function pipeline exposes hardware-accelerated features. Although Direct3D has a software-only emulation mode called the reference device, it is designed for debugging and feature testing purposes—not for shipping applications.
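In managed Direct3D, which we'll introduce shortly, this probing is exposed through the Caps structure. The following minimal sketch, assuming the managed Manager.GetDeviceCaps helper, the primary adapter, and the Caps.TextureCaps.SupportsCubeMap flag, checks for cube-map support before committing to an algorithm:

using Microsoft.DirectX.Direct3D;

// Query the capabilities of the primary adapter (0) for a hardware device.
Caps caps = Manager.GetDeviceCaps(0, DeviceType.Hardware);
if (caps.TextureCaps.SupportsCubeMap)
{
    // Safe to use cubic environment mapping.
}
else
{
    // Fall back to a different hardware-accelerated technique.
}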

The other, more interesting approach to the problem of hardware and software coevolution is the programmable pipeline. In the programmable pipeline, rather than picking a predefined enumeration value and asking Direct3D to perform the algorithm, the programmer can define his own algorithm. The runtime will dynamically compile the algorithm for the underlying graphics hardware. In this case, the Direct3D runtime has a just-in-time (JIT) compiler, which is an explicit part of the hardware device driver. Hardware vendors are responsible for providing a JIT compiler for their particular graphics hardware. Direct3D thus serves as a graphics virtual machine, which effectively virtualizes the graphics processor (GPU) with a custom graphics instruction set.

The Direct3D Architecture

Although both the managed and unmanaged Direct3D programming layers are exposed through a series of objects, you should not consider them an application-level programming framework. The primary role of the Direct3D architecture is to provide the base functionality for a higher-level solution such as a game engine or a scene graph API. To help implement these solutions, the Direct3D extension library explicitly provides additional functionality. To best understand the Direct3D architecture you must understand not only the functionality that is being abstracted but also how that functionality is organized and exposed. In the next several sections, we'll provide an overview of the basic elements of the Direct3D architecture and discuss how they are exposed through both the unmanaged COM API and the managed .NET abstraction layer.

The Direct3D Pipeline

As is common to computer hardware architectures, two performance optimization techniques used in 3D graphics architectures are pipelining and parallelizing. The algorithms exposed through Direct3D are logically organized into a pipeline. The Direct3D pipeline is illustrated in Figure 1.

Figure 1 Direct3D Graphics Pipeline

The Direct3D pipeline should be viewed as a set of algorithms that operate on 3D geometric quantities, which in the case of Direct3D are predefined vertices and primitives. The main purpose of the pipeline is to convert geometric data into an image that's rendered on the video display. The Direct3D tessellation stage is used to tessellate (convert to triangles) a fixed set of higher-order primitives that Direct3D predefines, which include triangle patches, rectangle patches, and N-patches (although triangle patches remain the most common form of geometry). Currently, the tessellation stage is not programmable, so Direct3D does not expose any mechanisms to enable procedural geometry generation on the graphics hardware. Procedural geometry provides a large number of benefits with respect to minimizing the data that is sent across the bus. In the near future, you are likely to see hardware support for a programmable tessellation stage.

The transform and lighting stage of the pipeline transforms vertex positions and normals from the model coordinate system to the world and camera coordinate systems. This occurs through the world and view transformations. Per-vertex lighting calculations are performed to determine the specular and diffuse color components. The vertex positions are then transformed by the projection transformation to create a perspective, orthographic, or other type of projection. Although the fixed-function pipeline still exposes these transform and lighting algorithms in the same API convention as before, in most recent graphics cards they are implemented in the microcode of the graphics processor rather than in dedicated fixed-function hardware. On the Radeon 9700 processor, for example, the entire fixed-function transform and lighting module can and should be implemented in the programmable pipeline as vertex shaders.

To improve performance during the rasterization stage, any vertices belonging to objects that are not visible to the camera are clipped. In addition, back-face culling may be performed to avoid rasterizing triangles that are facing away from the camera. Furthermore, attribute evaluation is performed to configure and select the actual algorithms that will be used during the rasterization stage. Finally, rasterization is performed to actually render the pixels.

In the pixel processing stage, you have the option of using fixed-function multi-texturing or programmable pixel shaders to determine the color value of a pixel. Fixed-function multi-texturing is exposed through a cascade of texture stages with each stage enabling a fixed set of operations to be performed on the color and alpha values of a pixel. Pixel shaders provide much more flexibility by exposing the operations performed on the color and alpha values through a custom assembly language. The algorithms implemented at the pixel processing stage include bump mapping, shadowing, environment mapping, and so on.

Frame buffer processing involves a set of memory regions known as the render surface, the depth buffer, and the stencil buffer. During this stage, a series of calculations are performed to determine values such as depth, alpha, and stencil. The depth buffer is another rendering optimization, used to remove hidden lines and surfaces. The depth test, which determines which pixels are hidden and therefore do not need to be rendered, can be performed using either a z-buffer or a w-buffer, each algorithm having its own pros and cons. Frame buffer processing enables a number of effects such as transparency, fog, and shadows.

One final point that needs to be stressed about the Direct3D pipeline is that its behavior can be modified through the graphics state. The graphics state is used to configure the many transformation, lighting, rasterization, pixel processing, and frame buffer processing algorithms provided by Direct3D for the purpose of rendering a frame. These states include the render, transform, sampler, and texture stage states.

Unmanaged Direct3D

The primary object that manages DirectX graphics is the Direct3D object. The Direct3D object is created through the Direct3DCreate9 function, which is the only global function exposed by the core Direct3D API. The Direct3D object implements the IDirect3D9 interface and is responsible for determining the details of a particular device. Besides device enumeration, the IDirect3D9 interface is responsible for creating a Direct3DDevice object through its CreateDevice factory method.

A Direct3DDevice object implements the IDirect3DDevice9 interface and serves as the primary workhorse of Direct3D graphics. The IDirect3DDevice9 interface abstracts the entire pipeline illustrated in Figure 1. It provides methods exposing the numerous algorithms for transformation, lighting, rasterization, pixel, and frame buffer processing supplied by a particular Direct3D-compliant hardware device. The methods in the IDirect3DDevice9 interface can be organized into the following categories: a set of properties that configure the Direct3D graphics pipeline or provide information about its current state, factory methods that create other objects in the Direct3D architecture, and a set of methods that execute the actual graphics algorithms.

The state of the Direct3D pipeline can be configured through a set of accessor methods on the IDirect3DDevice9 interface. The GetRenderState and SetRenderState methods retrieve and set the render state value for a device. The unmanaged D3DRENDERSTATETYPE enumerated type provides a set of options that can be used to configure the current render state. The GetTextureStageState and SetTextureStageState methods retrieve and set the state value for the currently assigned texture. The unmanaged D3DTEXTURESTAGESTATETYPE enumerated type likewise provides the developer with a set of options that can be used to configure the individual texture stages. Additional attributes of the graphics state can be configured by selecting options from the unmanaged D3DTRANSFORMSTATETYPE and D3DSAMPLERSTATETYPE enumerated types.

Geometry in unmanaged Direct3D is handled through the VertexBuffer and IndexBuffer COM objects. These objects are created through factory methods on the IDirect3DDevice9 interface. A VertexBuffer object implements the IDirect3DVertexBuffer9 interface and stores an array of user-defined vertices. These vertices can contain whatever data a particular application needs and can be declared using the Direct3D flexible vertex format, which is specified through a set of macro flags. A fundamental limitation of flexible vertex formats is that they can only use a single stream of data and thus can waste memory bandwidth. Flexible vertex formats are usually used in conjunction with the fixed-function pipeline. An IndexBuffer object implements the IDirect3DIndexBuffer9 interface and stores an array of indices into the VertexBuffer. An IndexBuffer provides an optimization that saves memory by reusing shared vertices.

Direct3D Extensions

The unmanaged Direct3D extensions (D3DX) utility library provides a large body of interfaces, functions, macros, and support classes for a full range of 2D, 3D, and 4D vector math operations. Additionally, functions to support procedural textures, bump mapping, and environment mapping are provided. An important part of D3DX is its support for the .X file format; functions are provided for working with meshes, as well as animation and textures. A more recent feature of D3DX is its support for shaders and effect files. A runtime assembler is provided for shaders through the D3DXAssembleShader, D3DXAssembleShaderFromFile, and D3DXAssembleShaderFromResource API functions.

Managed Direct3D

Managed Direct3D has become an important part of the core Direct3D runtime, as opposed to an extension library like D3DX. Thus, an attempt was made to keep the managed abstractions as close as possible to the original unmanaged abstractions. To better understand managed Direct3D, it is useful to investigate how the managed layer abstracts unmanaged Direct3D. For the most part, managed Direct3D provides a one-to-one mapping of the interfaces, structures, and enumerations in the unmanaged layer to the classes, structures, and enumerations in the managed layer. To explicitly use some of the new features of managed code, such as class implementation inheritance, C# properties, indexers, and events, a few of the abstractions in the managed layer are exposed in a slightly different way than their unmanaged counterparts. In the next few sections, we will introduce the fundamental classes that are exposed by managed Direct3D.

The Device Class

Managed Direct3D provides a class-based abstraction of a device, which is implemented using the unmanaged IDirect3DDevice9 interface. A programmer writing managed Direct3D code uses methods of the Device class to access other Direct3D objects, to set the various graphics states, and to render the geometry. To perform these functions, the Device class provides many properties, methods, and events. The most important methods of the Device class, as called by 3D applications, are listed in Figure 2.

Figure 2 Device Class Methods

Method Description
Clear Clears the viewport in preparation for another frame of rendering
BeginScene Prepares the device to render a frame of primitives; BeginScene must be called before any primitive is rendered for the frame
EndScene Signals to the device that all the primitives have been rendered for a frame; EndScene must be called after all the primitives are rendered for the frame
DrawPrimitives Renders a primitive
DrawIndexedPrimitives Renders an indexed primitive using the current index buffer and vertex data
Present Displays the buffer that was rendered into and prepares the next buffer for rendering; Present is called after EndScene and before the next BeginScene (for the next frame)
SetStreamSource Binds a vertex buffer to a device data stream; vertex shaders may use vertex data coming from different streams; for example, vertex positions may be in one stream, normals in another
GetVertexShaderConstant, SetVertexShaderConstant, GetPixelShaderConstant, SetPixelShaderConstant Gets/sets the constant values that are accessible by vertex/pixel shaders
GetTexture, SetTexture Gets/sets the texture associated with a given texture stage
GetTransform, SetTransform Gets/sets the world, view, projection or other transform; transforms are applied to vertex positions and normals, and/or to texture coordinates

With respect to object activation, one view of why unmanaged Direct3D uses factory methods is simply that COM does not support a new operator. The standard activation pattern for COM is factory-based construction. The CoCreateInstance function simulates a new operator for COM, but it is just a wrapper that internally uses a COM factory. Another rationale for using factories as opposed to a new operator is that when a new operator is used, you bind a particular implementation of an object into the client's code; thus extensive use of the new operator can lead to versioning problems. Factory methods are often used to decouple the client or user of the code from the creator of the code. The unmanaged COM IDirect3DDevice9 interface provides 14 explicit factory methods to create the objects that are required when using Direct3D, such as pixel and vertex shaders, and vertex and index buffers.

Currently, managed Direct3D relies heavily on class construction using the new operator. Although the managed Device class only provides three factory methods, it is interesting to note the one-to-one correlation between the unmanaged IDirect3DDevice9 factory methods and the public properties exposed by the Device class. The following lines of code illustrate the three factory methods that are exposed by the managed Device class, each of which takes multiple arguments (omitted here for brevity):

public Surface CreateRenderTarget(...);
public Surface CreateDepthStencilSurface(...);
public Surface CreateOffscreenPlainSurface(...);

Method overloading is used heavily in managed Direct3D. The next code example illustrates the unmanaged IDirect3DDevice9::ColorFill method and its managed overloaded wrappers on the Device class:

typedef DWORD D3DCOLOR;

// Method from the unmanaged IDirect3DDevice9 interface.
HRESULT ColorFill(IDirect3DSurface9 *pSurface, CONST RECT *pRect,
                  D3DCOLOR color);

// Methods from the managed Device class.
public void ColorFill(Surface surface, Rectangle rect, Color color);
public void ColorFill(Surface surface, Rectangle rect, int color);

Unmanaged Direct3D, like traditional graphics systems, requires color values to be passed as 32-bit integer values. The Microsoft .NET Framework, on the other hand, explicitly provides the System.Drawing.Color structure to deal with ARGB color values, and it contains several methods to convert between representations. As a convenience to programmers who are accustomed to using integer values to represent color, managed Direct3D provides method overloads using both representations wherever a color argument is required.
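For example, the following sketch (device and surface are assumed to already exist) fills the same region of a surface twice; the two calls are equivalent because Color.ToArgb produces the packed 32-bit value the int overload expects:

Rectangle rect = new Rectangle(0, 0, 64, 64);
device.ColorFill(surface, rect, Color.CornflowerBlue);
device.ColorFill(surface, rect, Color.CornflowerBlue.ToArgb());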

The unmanaged IDirect3DDevice9 interface exposes both read-only and read-write properties, both with and without arguments. In the current version of the .NET Framework, properties do not take arguments. Therefore, when mapping from unmanaged Direct3D to managed Direct3D, properties that do not take arguments map over nicely to C# properties, whereas unmanaged Direct3D properties that take arguments are simply exposed as Get and Set methods. Figure 3 lists some of the more important properties of the Device class.

Figure 3 Device Class Properties

Property Description
DeviceCaps Gets a struct representing the capabilities of the hardware; this is the property to query when determining whether the hardware supports the particular features that an application may require
Indices Gets/sets the index buffer to use for rendering primitives
Material Gets/sets the material to use in rendering
Lights Gets the collection of lights that can be activated for rendering
RenderState Gets the collection of render states that are used to control the different stages of the Direct3D pipeline
TextureState Typically used with an indexer, gets the texture state for the indexed texture stage; the texture state controls how a texture is applied
SamplerState Typically used with an indexer, gets the sampler state for the indexed texture stage; the sampler state controls how a texture is sampled
VertexDeclaration Gets/sets a description of the vertex format being used with a vertex shader
VertexFormat Gets/sets a description of the vertex format being used with the fixed-function pipeline
Viewport Gets/sets the rectangular region on the device on which to render
VertexShader, PixelShader Gets/sets the vertex/pixel shader to use for rendering

Finally, the managed Direct3D Device class provides five explicit events for interacting with a device. These events are raised when the application starts, the window is resized, the display changes between windowed and full-screen mode, or the application exits. It is interesting to see the .NET delegate-based event pattern used by Direct3D. For each event declaration on the Device class, add, remove, and raise methods are provided, as well as a private instance variable storing the event handler. In C#, syntactic sugar for client-side event registration and unregistration is provided through the += and -= operators, respectively (see Figure 4).

Figure 4 Device Event Handling

public class Device : MarshalByRefObject, IDisposable
{
    // Lots of additional Properties, Methods, and Events.
    private EventHandler DeviceCreated;
    public void add_DeviceCreated(EventHandler eh);
    public void remove_DeviceCreated(EventHandler eh);
    protected void raise_DeviceCreated(object i1, EventArgs i2);
    public event EventHandler DeviceCreated;
    // Similar pattern for the DeviceLost, DeviceReset, DeviceResizing,
    // and Disposing events.
}

// Client code.
PresentParameters presPar = new PresentParameters();
device = new Device(0, DeviceType.Hardware, this,
                    CreateFlags.SoftwareVertexProcessing, presPar);
device.DeviceCreated += new System.EventHandler(this.OnCreateDevice);
this.OnCreateDevice(device, null);

Graphics State Classes

As discussed earlier, the graphics state for unmanaged code is configured through methods on the IDirect3DDevice9 interface. For example, in unmanaged code, the transform state can be directly accessed through the GetTransform and SetTransform methods, both of which take an enumeration value of type D3DTRANSFORMSTATETYPE.

// From unmanaged IDirect3DDevice9 interface:
HRESULT GetTransform(D3DTRANSFORMSTATETYPE State, D3DMATRIX *pMatrix);
HRESULT SetTransform(D3DTRANSFORMSTATETYPE State, CONST D3DMATRIX *pMatrix);

Managed Direct3D provides some abstraction over the unmanaged layer, making the client-side code a bit cleaner. The Device class shown here illustrates a set of read-only properties:

public class Device : MarshalByRefObject, IDisposable
{
    // Additional properties, methods and events.
    public Transforms Transform { get; }
    public RenderStates RenderState { get; }
    public SamplerStates SamplerState { get; }
    public TextureStates TextureState { get; }
}

The properties return instances of the Transforms, RenderStates, SamplerStates, and TextureStates classes. These instances can then be used to configure the graphics state for managed Direct3D. For example, the Device class does not directly expose a Transform property taking an enumeration value. Instead, the managed Device class provides a property returning a Transforms object, which in turn exposes a set of properties, corresponding to the elements of the managed TransformType enumeration, for accessing the current world, view, and projection matrices.
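Client code can therefore configure the transform state with simple property assignments. A minimal sketch, assuming a valid device:

// Equivalent to unmanaged SetTransform calls with D3DTS_WORLD and D3DTS_VIEW.
device.Transform.World = Matrix.Identity;
device.Transform.View = Matrix.LookAtLH(
    new Vector3(0.0f, 3.0f, -5.0f),   // camera position
    new Vector3(0.0f, 0.0f, 0.0f),    // look-at target
    new Vector3(0.0f, 1.0f, 0.0f));   // up direction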

The RenderStates class provides a large number of properties used to configure the particular algorithms selected for the rasterization stage. The following client-side code shows how the state of the managed Direct3D pipeline is set.

// cull, spec, dither states.
device.RenderState.CullMode = Cull.None;
device.RenderState.SpecularEnable = false;
device.RenderState.DitherEnable = false;

// Filter states.
device.SamplerState[0].MagFilter = TextureFilter.Linear;
device.SamplerState[0].MinFilter = TextureFilter.Linear;

Notice the use of indexers to access the stages of the multi-texture cascade for the sampler states.

The sampler state specifies the filtering, tiling, and texture-addressing modes used for a particular stage and is exposed through properties on the Sampler class. Filtering specifies how an image is mapped to a particular Direct3D primitive. Depending on the size of the image (texture) and the size of the on-screen primitive it needs to be mapped to, the image may need to be magnified or reduced. Direct3D provides three filtering properties, MipFilter, MinFilter, and MagFilter, each of which can be configured with a mode from the TextureFilter enumeration. Tiling is used to assemble a set of individual textures into a single texture, reducing the need to switch textures; it is supported through a set of properties on the Sampler class. Finally, the texture-addressing mode defines the action for texture coordinates outside the [0,1] boundary through elements of the TextureAddress enumeration:

public class SamplerStates
{
    private Sampler[] m_lpTex;
    private bool pureDevice;
    public Sampler get_SamplerState(int index);
    internal SamplerStates(Device dev, bool pure);
    public Sampler this[int index] { get; }
}
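For example, the texture-addressing modes just mentioned are set through the same indexer. A short sketch, assuming a valid device:

// Tile the texture horizontally and clamp it vertically for stage 0.
device.SamplerState[0].AddressU = TextureAddress.Wrap;
device.SamplerState[0].AddressV = TextureAddress.Clamp;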

The TextureStates and SamplerStates classes each provide a read-only indexer, which returns a TextureState or Sampler instance, respectively, for a particular stage. The TextureState class defines properties that get or set the elements of a particular texture stage state. Properties such as ColorArgumentn or AlphaArgumentn (where n can be 0, 1, or 2) take elements from the TextureArgument enumeration, while ColorOperation and AlphaOperation take elements from the TextureOperation enumeration, as shown here:

// color = tex mod diffuse.
device.TextureState[0].ColorArgument1 = TextureArgument.TextureColor;
device.TextureState[0].ColorOperation = TextureOperation.Modulate;
device.TextureState[0].ColorArgument2 = TextureArgument.Diffuse;

// alpha = select texture alpha.
device.TextureState[0].AlphaArgument1 = TextureArgument.TextureColor;
device.TextureState[0].AlphaOperation = TextureOperation.SelectArg1;

Geometry-related Classes

Geometry in Direct3D is defined as an array of vertex definitions. It is important to understand that different graphics algorithms require different information stored at each vertex. These vertex components need to be declared by the application depending on the effect that the programmer wants to implement. For example, if the programmer is interested in mapping a texture to geometry that Direct3D will render, then the programmer has to declare that at each vertex there will be tu and tv values (texture coordinates). Managed Direct3D provides two mechanisms for vertex definition: VertexFormats and VertexDeclarations. These mechanisms can be used to define additional data at each vertex such as colors, blending weights, and texture coordinates or any application-defined usage. As a convenience, the CustomVertex helper class is provided for commonly used vertex format definitions. The CustomVertex class provides 11 different vertex definitions most often used in Direct3D applications.

In managed Direct3D, the vertex format used by the application to define the layout of its vertices is accessed through the Device class's read/write VertexFormat property. The VertexFormat property accesses an element of the VertexFormats enumeration and is implemented simply by calling the internal unmanaged Direct3D Device COM object using the GetFVF and SetFVF methods on the IDirect3DDevice9 interface. The managed VertexFormats enumeration maps to the constant flexible vertex format (D3DFVF) bits defined in d3d9types.h, as shown here:

public struct Vertex
{
    public Vector3 position;
    public float tu, tv;
    public static readonly VertexFormats Format =
        VertexFormats.Position | VertexFormats.Texture1;
};

// From a typical managed Direct3D application's Render() method.
device.VertexFormat = Vertex.Format;

When using the programmable pipeline, custom vertex declarations are required. Unlike a vertex buffer defined using a flexible vertex format, the custom vertex declarations can be defined using multiple streams, thus saving bandwidth:

VertexElement[] elements = new VertexElement[]
{
    new VertexElement(
        0,                          // short stream
        0,                          // short offset
        DeclarationType.Float2,     // DeclarationType declType
        DeclarationMethod.Default,  // DeclarationMethod declMethod
        DeclarationUsage.Position,  // DeclarationUsage declUsage
        0                           // byte usageIndex
    ),
    VertexElement.VertexDeclarationEnd
};
VertexDeclaration declaration = new VertexDeclaration(device, elements);
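Once created, the declaration is bound to the device along with the vertex buffers feeding its streams. A hypothetical usage sketch, where positionBuffer is a VertexBuffer holding the position data described by stream 0 of the declaration:

device.VertexDeclaration = declaration;
device.SetStreamSource(0, positionBuffer, 0);  // stream 0, zero byte offset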

Geometry in managed Direct3D is usually created by handling the VertexBuffer.Created event. Figure 5 shows the geometry of a simple triangle taken from the Tutorial2 sample in the SDK and illustrates the use of a predefined custom vertex, CustomVertex.TransformedColored.

Figure 5 Initializing a Vertex Buffer

public void OnCreateDevice(object sender, EventArgs e)
{
    // Additional code ...
    vertexBuffer.Created +=
        new System.EventHandler(this.OnCreateVertexBuffer);
    this.OnCreateVertexBuffer(vertexBuffer, null);
}

public void OnCreateVertexBuffer(object sender, EventArgs e)
{
    VertexBuffer vertexBuffer = (VertexBuffer)sender;
    GraphicsStream stream = vertexBuffer.Lock(0, 0, 0);
    CustomVertex.TransformedColored[] vertices =
        new CustomVertex.TransformedColored[3];
    vertices[0].X = 150; vertices[0].Y = 50; vertices[0].Z = 0.5f;
    vertices[0].Rhw = 1;
    vertices[0].Color = System.Drawing.Color.Aqua.ToArgb();
    vertices[1].X = 250; vertices[1].Y = 250; vertices[1].Z = 0.5f;
    vertices[1].Rhw = 1;
    vertices[1].Color = System.Drawing.Color.Brown.ToArgb();
    vertices[2].X = 50; vertices[2].Y = 250; vertices[2].Z = 0.5f;
    vertices[2].Rhw = 1;
    vertices[2].Color = System.Drawing.Color.LightPink.ToArgb();
    stream.Write(vertices);
    vertexBuffer.Unlock();
}
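An IndexBuffer is created and filled in much the same way. The following minimal sketch, with hypothetical buffer contents, stores the three indices of a single triangle and binds them through the Indices property listed in Figure 3:

IndexBuffer indexBuffer = new IndexBuffer(
    typeof(short), 3, device, Usage.WriteOnly, Pool.Managed);
indexBuffer.SetData(new short[] { 0, 1, 2 }, 0, LockFlags.None);
device.Indices = indexBuffer;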

Texture Classes

The texture-related classes in managed Direct3D inherit from the abstract class Resource and, for the most part, layer over the unmanaged texture-related interfaces. Managed Direct3D has three texture classes: Texture, CubeTexture, and VolumeTexture. The Texture class represents an image that can be mapped onto a primitive for rendering or used for bump mapping, normal mapping, or other effects. The fundamental function of a texture is to map a pair of texture coordinates into a value, which is usually a color value but may be interpreted differently. The texture coordinates are normally part of the vertex data. The mapping from texture coordinates to color or other values is done during the pixel processing stage of the rendering pipeline, and Direct3D allows up to eight textures to be used at a time. The CubeTexture class is a special kind of texture used for cubic environment mapping, a technique that allows an environment such as a room or exterior scene to appear reflected by shiny objects. It is implemented by using a texture to define an image for each of the six faces of a cube surrounding, and reflected by, the rendered objects. The VolumeTexture class represents color or other values that exist in a three-dimensional rectangular space; as such, it maps a triplet of texture coordinates into color or other values. The most important aspect of these classes is that they all support a set of overloaded methods for locking and unlocking textures.
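A minimal sketch of that locking pattern, assuming an already-created texture whose memory pool permits CPU access:

// Lock the top mip level (level 0) to read or write texels.
GraphicsStream data = texture.LockRectangle(0, LockFlags.None);
// ... read from or write to the stream here ...
texture.UnlockRectangle(0);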

A commonly used class for loading and saving textures is TextureLoader. The TextureLoader class contains methods for loading a texture, cube texture, or volume texture from a file or stream. Additional uses of the TextureLoader class include filling and filtering textures and computing normal maps. A useful helper method for creating textures is provided by the GraphicsUtility class: the GraphicsUtility.CreateTexture method simply finds the path of a specified texture file and then calls the TextureLoader.FromFile method to load it. The following code illustrates the usage of the GraphicsUtility class:

Texture texture = GraphicsUtility.CreateTexture(device,"banana.bmp");
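Note that GraphicsUtility is part of the samples framework rather than D3DX itself. With D3DX alone, the equivalent steps (file name hypothetical) load the texture and assign it to stage 0 of the multi-texture cascade:

Texture texture = TextureLoader.FromFile(device, "banana.bmp");
device.SetTexture(0, texture);  // bind to texture stage 0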

Managed D3DX Library

The managed D3DX utility library has classes to handle meshes, textures, shaders, effects, text, and animation. Unlike the unmanaged D3DX library, however, it does not contain math classes such as matrix, vector, and quaternion classes. In managed DirectX, those math types are part of the core Microsoft.DirectX library.

The mesh-handling classes are Mesh, SimplificationMesh, ProgressiveMesh, SkinMesh, and PatchMesh. Mesh and ProgressiveMesh inherit from the BaseMesh class. BaseMesh manages the mesh representation in memory (including vertex and index buffers), and provides a DrawSubset method to render the mesh. Other methods include a Clone method to copy the mesh, and methods to compute vertex normals and get adjacency information.

The Mesh class extends BaseMesh with methods to load a mesh from and save to an .X file (a standard DirectX file type) or binary stream. Methods for generating a mesh representing an n-sided polygon, cylinder, sphere, torus, 3D text in a given font, or even a teapot are also provided. The user may tessellate the mesh to get a smoother object by interpreting each triangle as an N-patch, simplify the mesh (by reducing the number of triangles), optimize the mesh (by reordering the triangles in order to exploit vertex caching in hardware and improve performance), or perform ray intersection with the mesh.
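A short sketch of these facilities, assuming a valid device and a hypothetical model.x file:

Mesh teapot = Mesh.Teapot(device);
Mesh sphere = Mesh.Sphere(device, 1.0f, 16, 16);  // radius, slices, stacks
Mesh loaded = Mesh.FromFile("model.x", MeshFlags.Managed, device);
// Render subset 0 using the device's current material and texture state.
teapot.DrawSubset(0);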

A SimplificationMesh object is constructed from a Mesh and has methods to reduce the number of faces (triangles) or vertices, resulting in a simpler although lower-quality version of the original mesh. A ProgressiveMesh object is also constructed from a Mesh. It allows the mesh to be simplified in real time to minimize the number of rendered triangles as the distance between the mesh object and the camera increases. The SkinMesh class represents meshes that are animated by skeletons; it provides methods for loading a skin mesh from an .X file data object, along with properties that expose the underlying mesh and skin information, such as the influence of each bone on the structure.

The PatchMesh class represents a higher-order surface such as an N-patch, triangle patch, or rectangle patch surface. PatchMesh includes methods to create an N-patch mesh from a Mesh object, load a PatchMesh from an .X file, tessellate the PatchMesh, and so on.

The texture-handling classes consist mainly of the TextureLoader class described earlier, a RenderToEnvironmentMap class, and a RenderToSurface class. The RenderToEnvironmentMap and RenderToSurface classes are used to create cubic environment maps and textures at run time. Rendering into textures at run time enables special effects such as mirror-like objects reflecting other objects in the scene, or dual-pane rendering.

The text-handling classes consist mainly of the Font class, which can be instantiated from a System.Drawing.Font object and which has methods to render text to a Direct3D device. The animation-handling classes consist of the KeyFrameInterpolator class, whose methods interpolate an object's scale, rotation, and/or translation. The interpolation is performed by using a set of 3D vector keys (for scaling and translation) and quaternion keys (for rotation).
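A minimal text-rendering sketch, assuming the Font.DrawText overload that takes a Point and a Color (the alias avoids the clash with System.Drawing.Font):

using D3DFont = Microsoft.DirectX.Direct3D.Font;

D3DFont font = new D3DFont(device, new System.Drawing.Font("Arial", 12.0f));
// Call between the device's BeginScene and EndScene calls.
font.DrawText(null, "Hello, Direct3D", new Point(10, 10), Color.White);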

The Managed DirectX 9.0 Samples Framework

The DirectX 9.0 samples framework provides numerous classes that accelerate graphics application development by implementing commonly performed tasks. The samples framework is available in source code form. To use it, you simply add the required source files to the graphics application project.

When an application is created by the Managed DirectX Application wizard (in Microsoft Visual C#® .NET), code is generated based on the samples framework. Using the wizard to produce a graphics application is the quickest way to begin using the samples framework. The samples framework is used by the samples that come with the DirectX 9.0 SDK. The DirectX 9.0 samples show how to exploit the samples framework in addition to Direct3D.

Even when not using the samples framework directly, the code in the samples framework can be referenced to see how to drive DirectX 9.0 code in a flexible and robust manner.

The GraphicsSample Class

C# applications using the DirectX samples framework inherit from the GraphicsSample class, which has code to do the following:

  • Initialize and terminate DirectX.
  • Find the best windowed or full-screen display mode to use.
  • Display a form allowing the user to change device settings (such as the display mode).
  • Handle window operations such as moving, resizing, minimizing, and maximizing.
  • Handle window activation and deactivation as the window gains and loses focus.
  • Handle user interface events such as key presses and main menu navigation events.
  • Perform frame updating and rendering.
  • Provide basic animation control, such as pausing and single-stepping the animation.
  • Collect and display statistics.
  • Handle exceptions.

The GraphicsSample class itself inherits from System.Windows.Forms.Form, so the graphics application is ultimately a Windows Form.

The GraphicsSample class defines several methods that subclasses can override to change the default behavior or add functionality. It also has a number of fields and properties that can be used to control the behavior of the graphics application and access the device that supports DirectX Graphics.

The minimum code required to use the GraphicsSample class is shown in Figure 6; it is extracted from the DirectX 9.0 samples. The Render method in Figure 6 only clears the display surface, so this code will just display a black background. The wizard actually adds code to display a teapot on a blue background if the developer so chooses, but that has been omitted here for brevity. In normal situations, you would replace the teapot-rendering code with rendering code specific to your application.

Figure 6 Minimal App Using GraphicsSample

using System.Drawing;
using Microsoft.DirectX;
using Microsoft.DirectX.Direct3D;

public class MyGraphicsSample : GraphicsSample
{
    protected override void Render()
    {
        device.Clear(ClearFlags.Target | ClearFlags.ZBuffer,
                     Color.Black, 1.0f, 0);
    }

    static void Main()
    {
        using (MyGraphicsSample d3dApp = new MyGraphicsSample())
        {
            if (d3dApp.CreateGraphicsSample())
            {
                d3dApp.Run();
            }
        }
    }
}

The DirectX 9.0 C# wizard actually creates a separate class in which it puts the Main method, so the generated code differs from that shown in Figure 6. For the code in Figure 6 to build, it is necessary to add the D3DApp.cs file to the project so that the GraphicsSample class is available.

Initialization and Cleanup

The initialization and cleanup methods that subclasses can override are listed in Figure 7. They are used in a number of scenarios: application startup, application termination, device change, and device loss and reset. The sequence of calls for each of these scenarios is detailed in Figure 8.

Figure 7 Initialization and Cleanup Methods

Method Description
OneTimeSceneInitialization Performs initialization that is required only when the application starts
ConfirmDevice Checks whether a Direct3D device has the capabilities required by the application
InitializeDeviceObjects Called when the device is initialized or has changed; initializes any resources that are not lost when the device is reset, such as Pool.Managed, Pool.Scratch, and Pool.SystemMemory resources, texture resources, and vertex and pixel shaders
RestoreDeviceObjects Handles the device's DeviceReset event; recreates all Pool.Default resources and reinitializes any device states that do not change during rendering
InvalidateDeviceObjects Handles the device's DeviceLost event; releases all Pool.Default resources and any objects that were created in RestoreDeviceObjects
DeleteDeviceObjects Handles the device's Disposing event; destroys any objects that were created in InitializeDeviceObjects

Figure 8 Initialization and Cleanup Scenarios

Event Function Calls
Application startup OneTimeSceneInitialization
ConfirmDevice (one or more)
InitializeDeviceObjects
RestoreDeviceObjects
Application termination InvalidateDeviceObjects
DeleteDeviceObjects
Device change InvalidateDeviceObjects
DeleteDeviceObjects
InitializeDeviceObjects
RestoreDeviceObjects
Device loss and reset InvalidateDeviceObjects
RestoreDeviceObjects

In addition to the methods that you saw in Figure 7, the GraphicsSample class has a RenderTarget property that derived classes can set before CreateGraphicsSample is called to change the render target. By default, the render target is the Windows Form itself. Derived classes can define a separate control, or window, to act as the render target.

Frame Updating and Rendering

The GraphicsSample class calls the FrameUpdate method, which derived classes can override to update the scene at each frame. The FrameUpdate method may get input from a mouse, joystick, or other input device and update the camera or other scene objects accordingly. The FrameUpdate method is where you would normally put any animation-related update code such as simulation, collision detection, or game-related artificial intelligence.
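A minimal sketch of such an override, using the FrameUpdate name described here and a hypothetical rotationAngle field:

protected override void FrameUpdate()
{
    // Advance the rotation by the time elapsed since the last frame.
    rotationAngle += DXUtil.Timer(DirectXTimer.GetElapsedTime);
}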

The GraphicsSample class calls the Render method, which derived classes must override to render the scene. The Render method is responsible for clearing the display surface area and rendering the scene between the calls to the device's BeginScene and EndScene methods.

Figure 9 shows the Render method implementation generated by the Managed DirectX Application Wizard to render a teapot, slightly reformatted for clarity. In this simple example, the Render method sets up the lighting as well as the world, view, and projection matrices. Since those render states do not change between frames, they could just as well be set in the RestoreDeviceObjects method, which would be more efficient. Setting these unchanging render states in RestoreDeviceObjects would set them just once when the device is initialized, instead of during every frame. The code shown in Figure 9 requires another file, DXUtil.cs, found in the samples framework.

Figure 9 Rendering a Teapot

using System;
using System.Drawing;
using Microsoft.DirectX;
using Microsoft.DirectX.Direct3D;

public class GraphicsClass : GraphicsSample
{
    private float x = 0.0f;
    private float y = 0.0f;
    private Point destination = new Point(0, 0);
    private Mesh teapot = null;
    // Additional code ...

    protected override void Render()
    {
        // Clear the backbuffer to a Blue color.
        device.Clear(ClearFlags.Target | ClearFlags.ZBuffer,
                     Color.Blue, 1.0f, 0);

        // Begin the scene.
        device.BeginScene();
        device.Lights[0].Enabled = true;

        // Setup the world, view, and projection matrices.
        Matrix m = new Matrix();
        if (destination.Y != 0)
            y += DXUtil.Timer(DirectXTimer.GetElapsedTime) *
                 (destination.Y * 25);
        if (destination.X != 0)
            x += DXUtil.Timer(DirectXTimer.GetElapsedTime) *
                 (destination.X * 25);
        m = Matrix.RotationY(y);
        m *= Matrix.RotationX(x);
        device.Transform.World = m;
        device.Transform.View = Matrix.LookAtLH(
            new Vector3(0.0f, 3.0f, -5.0f),
            new Vector3(0.0f, 0.0f, 0.0f),
            new Vector3(0.0f, 1.0f, 0.0f));
        device.Transform.Projection = Matrix.PerspectiveFovLH(
            (float)Math.PI / 4, 1.0f, 1.0f, 100.0f);

        // Render the teapot.
        teapot.DrawSubset(0);

        // End the scene.
        device.EndScene();
    }
}
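As noted earlier, the unchanging states from Figure 9 could be moved into RestoreDeviceObjects. A hedged sketch of that optimization (the override signature follows the samples framework's event-handler style and may vary by SDK version):

protected override void RestoreDeviceObjects(object sender, System.EventArgs e)
{
    // Set once per device reset instead of once per frame.
    device.RenderState.ZBufferEnable = true;
    device.Transform.Projection = Matrix.PerspectiveFovLH(
        (float)System.Math.PI / 4, 1.0f, 1.0f, 100.0f);
}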

Conclusion

Managed Direct3D, part of the DirectX Graphics component of DirectX 9.0, provides a great opportunity for developers targeting the .NET Framework to add 3D capability to their applications. Not only is managed Direct3D nearly as efficient as its unmanaged counterpart, but when used in conjunction with the high-level shading language, it provides a great future for high-performance RAD game development.

For related articles see:
Using the Effects Framework

For background information see:
Real-Time Rendering, 2nd Edition, by Tomas Akenine-Moller and Eric Haines (AK Peters, 2002), https://www.realtimerendering.com
Real-Time Shading, by Marc Olano, John C. Hart, Wolfgang Heidrich, and Michael McCool (AK Peters, 2002)
The DirectX 9 Programmable Graphics Pipeline, to be published by Microsoft Press, July 2003

Yahya H. Mirza is a principal at Aurora Borealis Software. He has been working on projects using the .NET Framework at Microsoft and Source Dynamics since 2000. He is working with his company on a graphics GPU project at Pixar. He can be reached at yahya_mirza@hotmail.com.

Henry da Costa has been programming in the computer graphics field for the past 18 years. He has implemented a game engine for a start-up venture and several SDKs for Softimage products. You can reach Henry at henrydacosta@sympatico.ca.