Tablet 101 Column 3: Strokes and Recognition

Frank Gocinski
Microsoft Corporation

December 2004

Applies to:
   Microsoft Windows XP Tablet PC Edition
   Microsoft Windows XP Tablet PC Edition Development Kit 1.7

Summary: This article is an introductory look at the object model used for ink collection and recognition. We examine the mechanism for collecting ink, as well as the logical location and identification of ink strokes. In addition, the article details ink storage and persistence, as well as recognition of handwriting and gestures. The accompanying sample, written in C#, contains a number of example projects in the solution. (21 printed pages)


Contents

Introduction
The Ink Object
The Tablet Digitizer
Ink Data
Stroke Data
Rendering Ink
Ink Recognition
Handwriting Recognition
Conclusion

Introduction

Welcome to our third column. In this edition we're going to cut across the breadth of Tablet PC APIs, focusing on some of the more common ones to build on the core concepts. We'll touch on how Ink is collected, stored, manipulated, and eventually recognized.

In our previous columns we introduced the InkCollector and InkOverlay objects, which provide real-time inking capabilities for your applications and make it easy to integrate with the Tablet PC Platform. We also discussed context awareness and how you can suggest to the recognizer the most probable type of data to expect in the result. This is part of our set of Tablet APIs that are powerful, easy to understand, and easy to use.

In this column we'll continue to drill into the basic concepts around integrating with the Tablet API, take a broad look at the object model and services provided, and write some code to show off the basic techniques you'll need as you go along. In future articles we'll examine the SDK in more depth to round out your overall skill set. My goal is to continue showing you coding techniques along with the samples, so that you come away with new tools after each of these columns.

Let's begin with some discussion of the most important class in our object model, the Ink class. In most cases you'll be using the InkOverlay class to work with ink in your applications, but both the InkOverlay and the InkCollector handle instantiation of the Ink object for you.

Figure 1. Object Model for ink

InkCollector: This is the fundamental object used to collect and render ink as the user enters it. The InkCollector fires events to your application as they happen, and also packages up Cursor movements into ink strokes for you. You can tell the InkCollector which events and which data you are interested in receiving, as well as whether you want it to paint ink strokes as they are collected. InkCollector was intentionally designed without all the support that the InkOverlay brings to the picture; it's a baseline for building component technology in which you want behavior different from the InkOverlay's.

InkOverlay: The InkOverlay object is a superset of the InkCollector and adds selection and erasing of the ink that's been captured. For almost all applications the InkOverlay is the object of choice, because its support for selection-related features is a real time-saver and typically a requirement of any application. To keep things simple, this is the object I use in Figure 1.

The InkCollector and InkOverlay objects fire numerous events. Wiring these events enables you to respond to all of the important stages of ink input and pen movement; a short wiring sketch follows the list.

  • The CursorInRange/CursorOutOfRange events fire when the user moves the cursor into, or out of, range of an ink-enabled area. Note that on some devices the cursor does not have to be touching the screen for these events to fire.
  • Following the CursorInRange event, if the cursor is not touching the digitizer, the NewInAirPackets event is raised.
  • When the cursor touches the digitizer, the CursorDown event fires.
  • The CursorDown event is followed by the NewPackets event, which indicates that ink is being collected.
  • When the pen is lifted and the stroke is complete, you receive a Stroke or Gesture event, depending on the value of the InkCollector object's CollectionMode property. The same is true for the InkOverlay object.
  • Your application will often receive various SystemGesture and Mouse events. By default the Tablet PC Platform supports numerous system gestures, many of which mimic traditional mouse events. For example, tap events map to mouse clicks, and drag, hold, and hover events map to the same type of Mouse events. For more information about system gestures, see System Gestures in the Tablet PC SDK.
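To make these concrete, here is a minimal wiring sketch. It assumes an InkOverlay named m_InkOverlay that has already been created and enabled; the handler names (OnCursorInRange, OnCursorDown, OnStroke) are hypothetical, and the subscriptions would live in the form's constructor.

// In the form's constructor: subscribe to a few of the events described above.
m_InkOverlay.CursorInRange +=
   new InkCollectorCursorInRangeEventHandler(OnCursorInRange);
m_InkOverlay.CursorDown +=
   new InkCollectorCursorDownEventHandler(OnCursorDown);
m_InkOverlay.Stroke +=
   new InkCollectorStrokeEventHandler(OnStroke);

// Called when a stroke completes (pen up while collecting ink).
private void OnStroke(object sender, InkCollectorStrokeEventArgs e)
{
   // e.Stroke holds the newly collected Stroke object.
}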

The Ink Object

The Ink object is the top-level entry point into working with ink; think of it as a Document type of container. Once ink has been collected by using an InkOverlay (or InkCollector—I will stop referring to both and focus on the InkOverlay) object, it is maintained in an Ink object. An Ink object is a container of Stroke objects. Each Stroke object represents all of the data collected each time the tablet pen, represented by the Cursor, touches the digitizer. This data is referred to as packet data, which holds the (x, y) coordinates and additional attributes, such as pen pressure, that describe the pen movement of the user.
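Packet properties beyond (x, y) are queryable per stroke. Here's a minimal sketch, assuming ink has already been collected into m_InkOverlay and that the digitizer reports pressure:

// Read the pressure packet values for the first stroke, if the
// digitizer supplies them in its packet description.
Stroke stroke = m_InkOverlay.Ink.Strokes[0];
int[] pressures =
   stroke.GetPacketValuesByProperty(PacketProperty.NormalPressure);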

When working with stroke data within an Ink object, you typically use a Strokes collection to reference the strokes you are interested in. Before the Ink data can be collected, there needs to be a physical device that can collect the user actions with the pen. This is the job of the digitizer and the HID driver that is connected to it.

The Tablet Digitizer

The Tablet PC differs from a traditional computer primarily because of the digitizer and its ability to collect data at a far more granular level than, for example, a mouse. There are two types of digitizers available today. Passive digitizers (also known as resistive digitizers or touch screens) are typically used on devices like PDAs. These digitizers have minimal resolution and data throughput: they take roughly 40 samples per second (similar to a mouse) and resolve location to within a quarter of a millimeter (around a pixel level of resolution).

Figure 2. Anatomy of a digitizer

To qualify as a Tablet PC, however, the device must have an integrated active digitizer. Active digitizers generate an electromagnetic field around the screen and monitor disturbances in that field. These disturbances are created by the stylus (pen), which typically contains a ferrite core wrapped in a coil; no battery is necessary. Active digitizers can sample the stylus location at well over 100 samples per second, which delivers the kind of granularity necessary for great recognition.

Ok, so what does this look like in code?

First, I add an InkOverlay to my form.

private InkOverlay m_InkOverlay;

Then in the Form1() constructor I'll instantiate the InkOverlay and enable it.

m_InkOverlay = new InkOverlay(Handle);
m_InkOverlay.Enabled = true;

With an active InkOverlay on the form I can start collecting Ink from the user.

Ink Data

On the Microsoft Windows XP Tablet PC Edition operating system, as well as future versions of Microsoft Windows, ink is a first-class datatype. Ink is gathered from the digitizer and behaves, in a sense, just like mouse data. An Ink object is automatically created when an InkCollector or InkOverlay object is created. As mentioned earlier, an Ink object is a container of stroke (x, y) data. The stroke data, or the points collected by the pen, are put into an Ink object. The Strokes property contains the data for all strokes within the Ink object.

The InkCollector object, InkOverlay object, InkPicture control, and InkEdit control collect data from the digitizer in the form of ink (packets) and put it into an Ink object. These objects essentially act as the source that distributes ink into one or many Ink objects, which act as containers that hold the distributed ink.

The ink space is fixed to a HIMETRIC coordinate system. In ink space coordinates, a move from 0 to 1 equals 1 HIMETRIC unit. This mapping makes it easy to relate multiple Ink objects. This is always a confusing area for me, so as always I consult the Jarrett/Su book (Building Tablet PC Applications) for a more detailed explanation: "All Ink objects and Stroke objects use the HIMETRIC coordinate system. A HIMETRIC unit represents 0.01mm where the measurement is derived from the screen's current DPI. The coordinate space is the usual Microsoft Windows style fourth-quadrant space in which the origin (0,0) represents the upper-left corner of the space, and the x and y coordinates increase to the right and down, respectively. The Stroke objects contained in the Ink object all share the coordinate space, so each Stroke object's packet x,y values are based on a common (0,0) origin." Understood? HIMETRIC is a metric mapping mode in which each unit is equal to 0.01 mm.
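To make the unit concrete: because 1 HIMETRIC unit is 0.01 mm, one inch (25.4 mm) spans 2,540 ink-space units. A quick sketch of the arithmetic:

// 1 HIMETRIC unit = 0.01 mm, so divide by 100 to get millimeters.
int himetricX = 5080;                   // a sample ink-space x value
double millimeters = himetricX / 100.0; // 50.8 mm, or exactly two inches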

Let's see this in action by running the InkData sample included with this column. I've written a simple application that displays the ink coordinates for the ink collected as you run the pen over the form. Note in Figure 3 that I use two points: one at the upper-left and one at the lower-right.

Figure 3. InkData sample.

You can just as easily ink complete strokes over this form and get the data for all the points in the stroke. Here's the code to display the Ink coordinates in the text box of this sample.

   // Build up the display text from every stroke in the Ink object.
   string text = string.Empty;

   // Query the Ink object's Strokes collection and iterate through it.
   foreach (Stroke s in m_InkOverlay.Ink.Strokes) 
   {
      // Now for each Stroke get its points and iterate through them.
      Point[] points = s.GetPoints();
      foreach (Point point in points) 
      {
         // Place the point values into the text box.
         text += point.X + "," + point.Y + "\r\n";
      }
   }
   textBox1.Text = text;
   

Stroke Data

A Stroke object consists of packet information (x, y, ...) that represents the ink at particular points in the coordinate system, which we now know has its (0,0) origin at the upper-left corner. Each Stroke object is assigned an ID (Stroke.Id) that is unique within the Ink object it resides in. A Stroke also allows the developer to define extended properties for custom data, and it exposes DrawingAttributes that affect the rendering of the data.
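For example, here is a minimal sketch, assuming a stroke has already been collected into m_InkOverlay, of changing how one Stroke renders through its DrawingAttributes; the packet data itself is untouched:

// Render the first stroke in red with a thicker tip.
Stroke stroke = m_InkOverlay.Ink.Strokes[0];
DrawingAttributes da = stroke.DrawingAttributes;
da.Color = Color.Red;
da.Width = 100f;                // pen width, in ink-space (HIMETRIC) units
stroke.DrawingAttributes = da;  // assign back so the change takes effect
this.Invalidate();              // repaint to see the new look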

The Strokes collection is a collection of references to Stroke objects. A Strokes collection provides helpful operations that apply to all of the Stroke objects it contains, such as Rotate() and Scale(), which are used for rotating, moving, or resizing the Stroke data as a group.
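For instance, here is a minimal sketch, again assuming ink already collected into m_InkOverlay, that doubles the size of everything on the page:

// Scale all collected strokes to twice their original size.
Strokes allStrokes = m_InkOverlay.Ink.Strokes;
allStrokes.Scale(2.0f, 2.0f);
this.Invalidate(); // repaint so the transformation becomes visible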

Let's examine some Stroke information. I am using the StrokeIDViewer application that ships as part of the Building Tablet PC Applications book; I've rewritten part of that application for inclusion in this article and the download that accompanies it. In my sample source it's called StrokeViewer.

Figure 4. Stroke count for my name in cursive and print.

Notice the difference in the number of strokes when I write my first name in script versus printed format. I press down, write, and pull up four times in script and nine times when I print. Wow, that's a lot of extra work. So a stroke contains the packet data that represents the ink from when I touch the digitizer to when I pull away. By default, a stroke is rendered as a Bezier curve that represents the ink collected between a CursorDown and a CursorUp event.

There are several things going on here.

  1. A Stroke object is created with each pen down/pen moves/pen up action.
  2. A Stroke contains all of the packet data (including things like pressure).
  3. A Stroke's visual representation is its Bezier by default (but this can be changed in the drawing attributes).

Let's work with the StrokeViewer sample (included in the download that accompanies this article) some more to see what's happening when we add, select, and delete strokes. We'll add three strokes to the window and display their stroke IDs.

Figure 5. I create three strokes.

I created three strokes, and you can see they are labeled 1, 2, and 3, which in this case are the Stroke IDs. Now I'll change the mode of the application to delete and remove some of the strokes.

Figure 6. I erase one stroke.

When I add another stroke you would expect to see ID 4, correct? Well, you won't.

Figure 7. What happened to Stroke ID 4?

As you can see in the previous image, the new Stroke has an ID of 5. So, what happened to 4? Each pen action I performed in delete mode caused a new Stroke to be created; hence the IDs incremented. This is interesting because no ink was actually drawn. Let's try this out with select mode, and you'll see a similar progression of the stroke IDs.

Figure 8. Experimenting with selection's effect on a stroke.

Switch to select mode, select a Stroke, and then go back to ink mode and add a new Stroke.

Figure 9. Sequence of stroke ID jumps.

Now I am back in ink mode and I draw another stroke, which is labeled with an ID of 7. ID 6 was used for the selection process.

Finally, when we go into point-erase mode and run the eraser across all the strokes, splitting each one into two, you can see the same phenomenon. The act of erasing actually wound up creating new Stroke objects to represent the pieces that remained.

Figure 10. The effect of erasing on Stroke ID.

Notice the error in my code; when I went from select mode back to ink mode I did not set the attribute of the selected stroke, so it's still showing up as selected. Yes, I could have fixed this, but I figured it would be a good challenge for you, our readers, to figure out the fix. As a matter of fact, the first 100 folks who send me the fix to my sources resolving this problem will receive a copy of Building Tablet PC Applications, by Jarrett and Su. (Qualification—send me your source file(s) with the changes pointed out; I don't want your entire project, just files that need to change.) Send your answers to tabdvctr@microsoft.com, with a subject line of "I fixed your source—Tab101 Column 3." If it works and you're one of the first 100 folks, I'll send you the book.

Let's take a look at the code I wrote to build the StrokeViewer; it's pretty straightforward.

First, in my C# Windows Forms application (remember to add a reference to the Microsoft Windows XP Tablet PC Edition Development Kit 1.7 and a using directive for the Microsoft.Ink namespace), I wire everything up in the form's constructor.

public Form1()
{
   //
   // Required for Windows Form Designer support.
   //
   InitializeComponent();

   // Instantiate the InkOverlay.
   m_InkOverlay = new Microsoft.Ink.InkOverlay(this.Handle);
   m_InkOverlay.Enabled = true;

   // Create the event handler to respond to the Stroke event.
   m_InkOverlay.Stroke +=
      new InkCollectorStrokeEventHandler(InkStrokeAdded);

   // Hook up to the InkOverlay's Painted event.
   m_InkOverlay.Painted +=
      new InkOverlayPaintedEventHandler(InkPainted);
}

The InkStrokeAdded event handler is called when a Stroke is added; in it, I invalidate the form to force a repaint.

// Delegate called when a Stroke is added.
private void InkStrokeAdded(object sender, InkCollectorStrokeEventArgs e)
{
   // Invalidate the form so we can force a repaint.
   this.Invalidate();
}

The InkPainted handler is called when the WM_PAINT message is sent to my form; because I want ownership of this process, I intercept the call and make a subsequent call into a helper function that does the real work.

// Delegate to respond to a Paint request.
private void InkPainted(object sender, PaintEventArgs e)
{
   // Call the helper function that redraws the form.
   RendererEx.DrawStrokeIds(e.Graphics, Font, m_InkOverlay.Ink);
}
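Note that the handler above passes (Graphics, Font, Ink) while the worker method below takes a Renderer and a Strokes collection; in the sample, a small overload bridges the two. Here is a sketch of what that bridge looks like (using a default Renderer is an assumption on my part):

// Bridging overload: supply a default Renderer and forward the
// Ink object's Strokes collection to the main worker method.
public static void DrawStrokeIds(Graphics g, Font font, Ink ink)
{
   DrawStrokeIds(new Renderer(), g, font, ink.Strokes);
}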

As mentioned, the real work is done in the following method, which is called to render each Stroke's ID over the ink.

// Draw the Stroke IDs for a Strokes collection.
public static void DrawStrokeIds(
   Renderer renderer, Graphics g, Font font, Strokes strokes)
{
   // Iterate through every Stroke referenced by the collection.
   foreach (Stroke s in strokes)
   {
      // Make sure the Stroke has not been deleted.
      if (!s.Deleted)
      {
         // Draw the Stroke's ID at its starting point: white copies
         // offset around the black text keep the ID legible on top
         // of the ink.
         string str = s.Id.ToString();
         Point pt = s.GetPoint(0);
         renderer.InkSpaceToPixel(g, ref pt);
         g.DrawString(
            str, font, Brushes.White, pt.X-1, pt.Y-1);
         g.DrawString(
            str, font, Brushes.White, pt.X+1, pt.Y+1);
         g.DrawString(
            str, font, Brushes.Black, pt.X, pt.Y);
      }
   }
}

There are a lot of great white papers on our Developer Center that dig into stroke handling a bit more; I've listed several at the end of this section.

On a side note—if a Stroke object has been deleted by the Ink object it resides in, any Strokes collection that references it will still hold a reference to it. This is why we check for this state in the previous code snippet. We'll dig into this a bit more in upcoming columns, but if you play with strokes now you might run across this behavior and be stumped. All pen actions actually result in a Stroke object being created and the InkCollectorStrokeEventHandler event being fired. This is true for the InkOverlay and InkPicture objects, but not always the case when working with the InkCollector or our new RealTimeStylus.

Ok, we've learned a lot so far; let's review what we've discussed. First, we know ink is collected as packets of (x, y) data points, which are indexed relative to an upper-left (0,0) origin. These points are collected in HIMETRIC units (a metric mapping mode in which each unit equals 0.01 mm) and stored within a Stroke object. A Stroke is a pen-down, pen-move, pen-up sequence of steps. Signing your name takes multiple strokes; hence the Tablet PC puts this collection of Stroke objects into a Strokes collection. The Ink object is the primary "Document" object that contains all the ink relative to its scope. We also learned that InkOverlay and InkCollector objects instantiate an Ink object to manage their ink.

To finish this column, we'll touch on persistence of ink and the Ink Serialized Format (ISF). Ink is saved at the Ink object level by using the Ink.Save method, which produces a byte array in one of several formats. ISF is the default and outputs the data in a highly compressed native binary format. The Save method takes an optional parameter, a member of the PersistenceFormat enumeration, which enables you to save ink as:

  • Ink Serialized Format—the default format mentioned previously.
  • Base64 encoded fortified Graphics Interchange Format (GIF)—a Base64-encoded GIF, typically used for viewing ink. It is often referred to as fortified GIF because the ISF is persisted along with the image in its header, which provides round-tripping of the data in the Ink object.
  • Base64 Ink Serialized Format—the ISF is Base64-encoded; used in scenarios such as XML storage.
  • Graphics Interchange Format (GIF)—the standard GIF format, usually used for simple Web viewing.

You call the Ink object's Save method to build the byte array representing the ink, which can then be persisted. In the following code snippet, we take some ink and send it to a FileStream to generate the physical file. In this example the persistence format is GIF.

private void button3_Click(object sender, System.EventArgs e)
{
   // Serialize the ink as a fortified GIF and write the bytes to disk.
   using (FileStream fs = new FileStream(@"..\myink.gif", FileMode.Create))
   {
      byte[] inkBytes = inkOverlay.Ink.Save(PersistenceFormat.Gif);
      fs.Write(inkBytes, 0, inkBytes.Length);
   }
}
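Loading persisted ink is the mirror image. Here is a minimal sketch, assuming myink.gif was saved as above; Ink.Load accepts ISF, Base64 ISF, and fortified GIF byte arrays:

// Read the bytes back and hydrate a new Ink object from them.
using (FileStream fs = new FileStream(@"..\myink.gif", FileMode.Open))
{
   byte[] savedBytes = new byte[fs.Length];
   fs.Read(savedBytes, 0, savedBytes.Length);
   Ink loadedInk = new Ink();
   loadedInk.Load(savedBytes); // the GIF round-trips because ISF is embedded
}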

It's a fairly straightforward process, as you would expect. There are probably many ideas popping into your head right now: What about database support? What about ink interoperability across platforms? We'll examine these more in future articles. However, if you want some good side reading, I suggest the following articles on our Developer Center:

  • Building Ink Chat—describes an architecture for building ink support into instant messaging clients, so that ink can be collected on the Windows XP Tablet PC Edition operating system, then broadcast to and rendered on other operating systems.
  • Persisting Ink with Attached Recognition Data—explains how to store and persist recognition data with ink by using the Microsoft Tablet PC Platform SDK.
  • Storing Ink in a Database—shows how to create applications that can store ink in a Microsoft SQL Server database and retrieve it, using the Microsoft Tablet PC Platform SDK version 1.7 API. Includes samples in Visual C# and Visual Basic .NET.
  • Tablet PC Platform Independence—covers a number of strategies for deploying an application across Microsoft Windows XP and Windows XP Tablet PC Edition, with details and code examples illustrating single application deployment for multiple platforms.

Rendering Ink

The Renderer object controls the actual drawing of Ink data to a hardware device context (HDC), be it a window, printer, or other HDC. When rendering ink, keep in mind the two coordinate systems the Tablet PC supports: device coordinates and ink coordinates; ink coordinates are the default. The Renderer exposes methods to convert ink coordinates to device coordinates and vice versa, including the InkSpaceToPixel and PixelToInkSpace methods. It is also responsible for the actual rendering of the ink to the HDC, which it accomplishes with the Draw and DrawStroke methods. Finally, the Renderer provides support for manipulating ink data, including transforming, scaling, repositioning, and resizing Strokes.
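As a quick illustration, here is a minimal sketch, assuming an InkOverlay named m_InkOverlay and a Graphics object g from a Paint event, that converts a pixel location into ink space—the reverse of the InkSpaceToPixel call used earlier:

// Convert a pixel coordinate into ink-space (HIMETRIC) coordinates.
Renderer renderer = m_InkOverlay.Renderer;
Point pt = new Point(100, 100);      // a pixel location on the form
renderer.PixelToInkSpace(g, ref pt); // pt is now in ink-space units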

Ink Recognition

The following high-level overview shows the process from capture to render to recognition.

Figure 11. High-level architecture of the recognition process.

While recognition itself is rocket science, as an application developer you can rely on the work our recognition and platform developers have done to abstract and simplify the process. Recognition is, simply stated, the process of interpreting pen movements and/or strokes. The most common type of recognition we tend to think about is the interpretation of strokes as text, known as handwriting recognition.

There are, however, many other types of recognition, such as Gesture recognition, which interprets pen movements and strokes as gestures, such as Arrow-left, Circle, or Check. Examples of vertical applications using other types of recognition can be seen in the excellent products coming out of our ISV community. For instance, check out MathJournal from Xthink (https://www.xthink.com), and Fraction Practice or Math Practice from Jumping Minds software (https://www.jumpingminds.com).

There's a whole lot to write about recognition, most of which is outside the scope of our 101 articles. If you're interested in more details, I suggest reading the article by Larry O'Brien on shape recognition, which you can find on the DevX portal, or the upcoming article about gestures by Mark Hopkins of the Tablet PC team.

Handwriting Recognition

Tablet PC has robust support for handwriting recognition, both in terms of implemented features and the API. Version 1.0 provided text recognizers for English (United States), English (United Kingdom), Japanese, German, French, Simplified and Traditional Chinese, as well as Korean. The 1.5 version of the SDK added Spanish and Italian recognition, as well.

Note   It is possible for you to write your own recognizer, but this is a somewhat specialized, and, usually, a rather difficult, task. Creating a Recognizer, in the Tablet PC Platform SDK documentation, has more information about writing custom recognizers.

The Tablet PC Platform provides support for ink recognition using Recognizer objects—code that computes the textual or object representation of ink strokes for one or more languages. A language is any set of words or objects that is represented by writing. English, Chinese, German, and ink-based gestures are all considered languages. Each recognizer therefore includes a property denoting the languages for which it is capable of interpreting ink strokes.
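You can see which recognizers are installed on a machine by enumerating the Recognizers collection; a minimal sketch:

// List each installed recognizer and how many languages it supports.
Recognizers recognizers = new Recognizers();
foreach (Recognizer r in recognizers)
{
   short[] langIds = r.Languages; // the LCIDs this recognizer understands
   Console.WriteLine(r.Name + ": " + langIds.Length + " language(s)");
}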

When text-based ink segments—typically a word for western languages, or a character for East Asian languages—are recognized, the Tablet PC Platform is able to provide a full data structure of the recognition results consisting of the interpreted text as well as a list of alternates in case the word or character recognition was not entirely accurate.

You can programmatically interact with these results through the API, and end-users can choose alternates by means of Graphical User Interface (GUI) components such as Tablet PC Input Panel. The architecture for the Recognizer is extensible, and a rich object model is available for the development of third-party recognizers.

There are also two ways to interact with the recognition process: synchronous and asynchronous. In synchronous recognition, the thread requesting recognition results blocks until computation is complete. In asynchronous recognition, a new thread works on the recognition while the user continues working; when the recognition is complete, an event notifies the application. Because ink recognition is a computationally intensive task, you might choose the asynchronous model to let the user keep working while the recognition engines do their job.
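For comparison, here is a minimal sketch of the synchronous path, assuming ink has already been collected into m_InkOverlay; the call blocks until recognition completes:

// Synchronously recognize everything collected so far.
RecognizerContext rc = new RecognizerContext();
rc.Strokes = m_InkOverlay.Ink.Strokes;
RecognitionStatus status;
RecognitionResult result = rc.Recognize(out status);
if (status == RecognitionStatus.NoError && result != null)
{
   string bestGuess = result.TopString; // highest-probability result
}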

It's time to write some more code. (I love writing code; it's a passion of mine. However, I must tell you it's nice not to be responsible for real production code anymore; this Business Development gig is pretty cool.) So, let's recognize some ink as it's entered into an application.

The easiest way to recognize ink as a text string is to use the Strokes.ToString method. The Stroke objects referenced in the collection are sent to the default recognizer, the recognition results are calculated, and the highest-probability result is returned as a string. Assuming you have an InkOverlay handy, the call looks like this:

        // Show the results through the Strokes.ToString method.
        MessageBox.Show(this,
            m_InkOverlay.Ink.Strokes.ToString(), "Reco Results");

I'll let you write the supporting code here to test out the ToString() call, but it really is this simple. We'll get a little creative now and write some code that explores the recognition results and allows us to see that list of other possible choices—not just the most probable, which is what ToString returns.

We're going to create a RecognizerContext object, which is the object that allows you to instantiate and work with the recognizer engines installed on the machine. As we mentioned, Windows XP Tablet PC Edition ships with many recognizers; we will be using the default recognizer in this sample.

private RecognizerContext recoContext;
private Strokes strokesToRecognize;

// Holds the latest result; assigned in the recognition event handler.
private RecognitionResult recognitionResult;

In the constructor of our form we add code that instantiates the RecognizerContext class, initializes the strokesToRecognize variable with an empty Strokes collection, and then assigns it to the Strokes property of the RecognizerContext object. This assignment provides the main input to the recognizer engine. Then we wire it all up with an event handler, which is called each time the recognition engines successfully recognize some Stroke objects (asynchronous recognition, for those of you still awake).

recoContext = new RecognizerContext();
strokesToRecognize = inkOverlay.Ink.CreateStrokes();
recoContext.Strokes = strokesToRecognize;
recoContext.RecognitionWithAlternates +=
   new RecognizerContextRecognitionWithAlternatesEventHandler(reco_RWA);

The event handler for the InkOverlay's Stroke event adds each Stroke to the strokesToRecognize collection as it is drawn. Then we call the BackgroundRecognizeWithAlternates method, which causes the recognizer to recognize the text as it is entered, stroke by stroke.

void InkOverlay_Stroke(object sender, InkCollectorStrokeEventArgs e) 
{
   strokesToRecognize.Add(e.Stroke);
   recoContext.BackgroundRecognizeWithAlternates();
}

Because BackgroundRecognizeWithAlternates is asynchronous, it returns immediately and starts the background recognition as soon as a Stroke is added. Once the Recognizer's recognition (wow, that's a mouthful) is complete, it raises the event we hooked up earlier, and our reco_RWA handler is called.

Note   If the text is not recognized, no error is raised. The event simply doesn't fire.

void reco_RWA(object sender,
   RecognizerContextRecognitionWithAlternatesEventArgs e)
{
   textBox1.Text = e.Result.TopString;
   // The handler receives a subclassed EventArgs object that 
   // exposes the RecognitionResult object through a Result property. 
   // The RecognitionResult object has four properties: 
   // Strokes, TopAlternate, TopConfidence, and TopString. 
   // The TopString property merely returns the ToString() value 
   // of the TopAlternate object.  Finally, assign to the previously
   // declared class-level variable the RecognitionResult object
   // passed to the recognition event handler. 
   recognitionResult = e.Result;
}

Let's see this code in action (the project is called InkReco in the sample code for this article). Note that as I ink on the form and the recognition happens, the results are shown.

Figure 12. Recognition in the sample code.

You can see in the screen shot in Figure 12 that I wrote "tablet Ink" onto the form and the recognizers correctly converted that ink to text. You've seen the code snippet and I've included this sample in the associated download for this column. Very straightforward... right, sure it is.

Now we'll make this a little more complete and add a ListBox that lists all the possible results the Recognizer has come up with. When you see this, I think the previous article on Context Tagging will make more sense: the list of alternates makes up all the probable results, so if we can hint to the recognizer through context, we help it determine the "most probable" result.

To let the application show us the full list of possible results that the recognizers return, we'll add a list box and a new event handler that responds to the MouseUp event on the TextBox, which allows you to select a single word from the recognized results; we'll then populate the list box with the full list of alternates. Here's the bit of code that pulls the recognition alternates from the recognition results:

// ListBox code to place all the possible results into the list.
private void textBox1_MouseUp(object sender, MouseEventArgs e)
{
   listBox1.Items.Clear();
   if (textBox1.SelectionLength > 0) 
   {
      RecognitionAlternates alternates =
         recognitionResult.GetAlternatesFromSelection(
            textBox1.SelectionStart,
            textBox1.SelectionLength);
      foreach (RecognitionAlternate alternate in alternates) 
      {
         listBox1.Items.Add(alternate);
      }
      inkOverlay.Selection = alternates.Strokes;
   }
}

From this code you can see that the Recognizer holds onto the collection of possible results, which we can enumerate. This is how Input Panel shows the user alternates when they click words in the user interface, as shown in Figure 13.

Figure 13. Correction user interface in Windows XP Tablet PC Edition 2005.

Let's run this new sample, which I have called InkRecoFull.

Figure 14. Recognition alternates in the sample application.

As you can see, when you highlight a word in the results TextBox, the MouseUp event fires and the recognition alternates are placed into the ListBox.

Conclusion

Well, that's about all we have time for in this article. I hope this broad view of the more common touch points with the SDK proves helpful. All the samples are included, and I encourage you to try things out and let us know if you run into issues or have questions by posting to our newsgroups. More information about accessing the Tablet PC newsgroups can be found on the Developer Center at https://msdn.microsoft.com/tabletpc.

As always remember to