Design Considerations for Gestures

Overview of design considerations for gestures.

Microsoft® recommends that you consider the following concepts when designing gesture support in your applications. Microsoft has tested these concepts in some Tablet PC applications, and you are encouraged to explore and develop them further in the design of your own application.

Application Gestures and Recognition

A glyph is a series of ink segments used to symbolize an application gesture. Glyphs must be distinguished from other pen actions, such as mouse emulation and the laying down of ink. Sometimes an application gesture is recognized inadvertently, without the user's explicit intent to invoke it. This is especially likely in tasks where pen actions are highly variable and rich, such as laying down ink on a surface or dragging objects within the Microsoft Windows® shell. Therefore, you need to be careful when creating applications that use gestures.

The following illustration shows this overlap.

(Illustration: the overlap among pen actions, ink, and application gestures)

The illustration shows that pen actions, ink, and application gestures are not inherently mutually exclusive activities. User actions have different meanings in different user modes, depending on whether the pen acts as a mouse, applies ink, or invokes a gesture.

To avoid confusion, you must choose the mode that you want your application to operate in. There are three possible modes:

  • Ink-only
  • Gesture-only
  • Mixed

Ink-only mode treats all incoming ink as content. Gesture-only mode treats all incoming ink as gestures. Mixed mode treats incoming ink selectively, based on its shape and on what the recognizers indicate; a user model that employs mixed mode may therefore increase misrecognition. It is still possible to create a user model that enables mixed mode, but the actions available in that mode need to be chosen carefully. For example, a back-and-forth application gesture that deletes ink could be used in an ink-enabled note-taking application. The same gesture may not be appropriate for a drawing application, because it might be confused with a motion intended to lay down ink, such as shading an illustration.
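
As a rough sketch of how these modes map onto the Tablet PC platform, the following fragment attaches an InkOverlay to a window, selects a collection mode, and registers interest in a single gesture. It assumes the COM Automation interfaces declared in msinkaut.h from the Tablet PC Platform SDK; error handling and COM initialization are omitted for brevity.

```cpp
#include <windows.h>
#include <msinkaut.h>     // Tablet PC Automation interfaces (Tablet PC Platform SDK)
#include <msinkaut_i.c>   // CLSID/IID definitions for those interfaces

// Sketch: attach an InkOverlay to a window and choose a collection mode.
void AttachOverlay(HWND hwnd, IInkOverlay **ppOverlay)
{
    CoCreateInstance(CLSID_InkOverlay, NULL, CLSCTX_INPROC_SERVER,
                     IID_IInkOverlay, reinterpret_cast<void **>(ppOverlay));

    (*ppOverlay)->put_hWnd((long)(LONG_PTR)hwnd);

    // Ink-only:     ICM_InkOnly       -- every stroke becomes ink content.
    // Gesture-only: ICM_GestureOnly   -- every stroke goes to gesture recognition.
    // Mixed:        ICM_InkAndGesture -- strokes are interpreted selectively.
    (*ppOverlay)->put_CollectionMode(ICM_InkAndGesture);

    // In mixed mode, listen only for gestures that are unlikely to be confused
    // with ordinary ink in this application -- for example, scratch-out to delete.
    (*ppOverlay)->SetGestureStatus(IAG_Scratchout, VARIANT_TRUE);

    (*ppOverlay)->put_Enabled(VARIANT_TRUE);
}
```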

Gesture-only mode allows the full power of gestures to be used. In this mode, gestures are drawn freely, without any risk of being confused with ink or mouse actions, which may help increase user productivity. A gesture-only mode can be as subtle as reserving some space in an application window or document for gestures; for example, the margin can serve as a gesture-only area.
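
One possible way to sketch such a margin area, under the same assumptions as the previous fragment, is to attach separate overlays to the page window and to a margin child window. The windows hwndPage and hwndMargin are hypothetical; both overlays are assumed to have been created as shown earlier.

```cpp
// Sketch: an ink-only overlay on the page and a gesture-only overlay on the
// margin. hwndPage and hwndMargin are hypothetical windows created by the
// application; error handling is omitted.
void ConfigureSurfaces(IInkOverlay *pPage, IInkOverlay *pMargin,
                       HWND hwndPage, HWND hwndMargin)
{
    pPage->put_hWnd((long)(LONG_PTR)hwndPage);
    pPage->put_CollectionMode(ICM_InkOnly);        // note-taking area: ink only
    pPage->put_Enabled(VARIANT_TRUE);

    pMargin->put_hWnd((long)(LONG_PTR)hwndMargin);
    pMargin->put_CollectionMode(ICM_GestureOnly);  // margin: gestures only
    pMargin->SetGestureStatus(IAG_AllGestures, VARIANT_TRUE);
    pMargin->put_Enabled(VARIANT_TRUE);
}
```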

In addition to the overlap between ink and gesture modes in an application, the number of strokes needed to form an application gesture affects how reliably it is recognized, because gestures can be confused with ink. The problem is especially acute for multiple-stroke gestures: there may be a time delay between the strokes, and the individual strokes may be mistaken for ink. Choose the gestures that you enable in mixed mode carefully, and be aware of the context within your application. Because of this potential for misrecognition, the gesture application programming interfaces (APIs) that are part of the Tablet PC platform do not recognize multiple-stroke gestures in mixed mode. You may, however, write your own implementation that accounts for the time delay between strokes and uses the richer context available to your application.
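
If you do write your own multiple-stroke handling, one illustrative approach is to buffer strokes and attempt recognition only after the pen has been idle for a short interval, so that the strokes of one glyph are grouped together. The StrokeData type, the timeout value, and the recognize callback below are hypothetical placeholders, not part of the Tablet PC APIs.

```cpp
#include <chrono>
#include <functional>
#include <vector>

// Hypothetical stroke record; a real implementation would hold the packet
// data collected from the application's stroke-collected handler.
struct StrokeData { /* points, timestamps, ... */ };

// Buffers strokes and fires a recognition callback only after the pen has
// been idle for idleTimeout, so the strokes of one multiple-stroke glyph are
// handled as a group instead of being treated as ink one by one.
class MultiStrokeGestureBuffer
{
public:
    using Clock = std::chrono::steady_clock;

    MultiStrokeGestureBuffer(std::chrono::milliseconds idleTimeout,
                             std::function<void(const std::vector<StrokeData>&)> recognize)
        : idleTimeout_(idleTimeout), recognize_(std::move(recognize)) {}

    // Call from your stroke-collected handler.
    void AddStroke(const StrokeData &stroke)
    {
        strokes_.push_back(stroke);
        lastStrokeTime_ = Clock::now();
    }

    // Call periodically (for example, from a WM_TIMER handler).
    void OnTimerTick()
    {
        if (!strokes_.empty() &&
            Clock::now() - lastStrokeTime_ >= idleTimeout_)
        {
            recognize_(strokes_);   // hypothetical: hand the group to a recognizer
            strokes_.clear();
        }
    }

private:
    std::chrono::milliseconds idleTimeout_;
    std::function<void(const std::vector<StrokeData>&)> recognize_;
    std::vector<StrokeData> strokes_;
    Clock::time_point lastStrokeTime_{};
};
```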

For lists of all of the gestures that the Microsoft gesture recognizer supports and some that Microsoft plans to support in the future, see System Gestures, Application Gestures, and Unimplemented Glyphs. These lists include both single-stroke and multiple-stroke gestures. Single-stroke gestures are easily recognized in both mixed and gesture-only modes. Multiple-stroke gestures are best recognized in gesture-only mode.

In addition, when designing for gestures, Microsoft recommends that you do not rely on gestures as the only way to invoke an action. This is because:

  • Gestures may be difficult to discover.
  • People with disabilities who use a Tablet PC by means of speech recognition and single taps may have difficulty drawing the lines and glyphs needed to invoke gestures.

To overcome these challenges, Microsoft continues to focus on increasing the accuracy of gesture glyph recognition and on providing guidance about how to implement gestures, and intends to keep improving recognition accuracy on a best-effort basis.

Recommendations for Different Application Types

The following are specific recommendations for various application categories.

Legacy Applications

Legacy applications are built without reference to Microsoft Windows XP Tablet PC Edition and are therefore pen-unaware. These applications can still use gestures through Tablet PC Input Panel, which supports common editing gestures, and they do not need to enforce or use gestures within their own application context. Additionally, these applications receive all system gestures as mouse messages. For more information about system gestures, see System Gestures.
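
For example, a pen-unaware window procedure simply receives the mouse messages that the system gestures generate, so no Tablet PC-specific code is involved. The handlers in the following sketch are the ordinary ones such an application already has.

```cpp
#include <windows.h>

// Sketch: a pen-unaware window procedure. System gestures arrive as the
// ordinary mouse messages it already handles -- tap as a left click,
// double-tap as a double-click (if the window class uses CS_DBLCLKS), and
// press-and-hold as a right click.
LRESULT CALLBACK LegacyWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_LBUTTONDOWN:     // pen tap (or start of a drag)
        // ... existing left-click handling ...
        return 0;
    case WM_LBUTTONDBLCLK:   // pen double-tap
        // ... existing double-click handling ...
        return 0;
    case WM_RBUTTONDOWN:     // pen press-and-hold
        // ... existing right-click handling ...
        return 0;
    default:
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }
}
```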

Pen-Aware Applications

Pen-aware applications enable ink and recognize system-level pen actions and events, such as tap and double-tap. These applications can use system gestures for system events. For application gestures, pen-aware applications can choose one of the following modes of usage:

  • Using gestures supported in the Input Panel. The application can treat the gestures supported in the Input Panel as the only application gestures it supports, or it can support them in addition to other gestures. However, the application should make clear how the Input Panel gestures relate to any other gestures it supports. Applications cannot modify, in any way, the gestures supported in the Input Panel or the availability of these gestures in the Input Panel.
  • Enforcing gesture-only mode. The application can enforce gesture-only mode, either as a state in the application or by providing a dedicated user interface (UI) or area in the application for gestures only. This mode and the gestures it supports should be independent of any gestures supported in the Input Panel. A minimal sketch of such a state toggle follows this list.
  • Allowing a mixed mode. The application can allow a mixed ink-and-gesture mode. Generally, this comes from enabling gestures while ink is enabled; no dedicated UI or mode outside of the ink area is required. Gestures in this mode should be independent of any gestures supported in the Input Panel.
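
The following is a minimal sketch of the gesture-only state toggle mentioned above, under the same assumptions as the earlier fragments (the msinkaut.h Automation interfaces; error handling omitted). Because the CollectionMode property of an attached overlay generally cannot be changed while it is enabled, the sketch disables the overlay around the change.

```cpp
// Sketch: toggle an overlay between the normal mixed mode and an enforced
// gesture-only state (for example, while a toolbar toggle is pressed).
void SetGestureOnlyState(IInkOverlay *pOverlay, bool gestureOnly)
{
    pOverlay->put_Enabled(VARIANT_FALSE);
    pOverlay->put_CollectionMode(gestureOnly ? ICM_GestureOnly
                                             : ICM_InkAndGesture);
    pOverlay->put_Enabled(VARIANT_TRUE);
}
```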

For details about the implementation of gestures and gesture recognizers, see Using Gestures. For more information about Tablet PC Input Panel, see Using Tablet PC Input Panel.