Of Tools, modes and bars

A rationale for the organization of tools, modes, toolbars and status bars in GUI component development tools
Thomas Baudel ~ written around 1999.


This document presents a brief description of what tools and interactors are, along with a taxonomy of tools and their purposes.

I What is a Tool? a Mode? an Interactor?

Because computers are multipurpose tools, the scope of user interactions is not predetermined. This means that the available input devices must serve different purposes at different times, in different application contexts, or even within a single application context. A mode is a state of the user interface that determines the meaning of a user action at a given point in time, within the context of an application. One can also present a mode as a multiplexer of user input that gives one precise meaning to raw base events such as mouse moves and key presses. (Note: a user action is here assimilated to an event, not to a user-perceived action, which is represented by the Action class in an application.)

One should first distinguish between spatial modes and temporal modes (spatial multiplexing of input vs. temporal multiplexing). Keys on the keyboard, selection handles in a drawing editor and toolglasses are examples of spatial modes. A spatial mode uses the location of a user's action to determine the meaning of an event. A temporal mode uses the ordering of events in time to determine their meaning; more precisely, it relies on certain key sequences to determine the meaning of the following events. Drawing tools in an editor are examples of temporal modes: one selects a tool, such as a selection tool, and further button presses are interpreted as selection and move actions.
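To make the temporal-mode idea concrete, here is a minimal sketch (with hypothetical names: Tool, Editor, onPress are illustrative, not from any real toolkit) showing how the currently selected tool multiplexes the meaning of one and the same raw event:

```cpp
#include <string>

// Hypothetical sketch: the current tool acts as a temporal mode that
// multiplexes the meaning of an identical raw event (a button press).
enum class Tool { Select, DrawRectangle };

struct Editor {
    Tool current = Tool::Select;   // the temporal mode

    // The same raw press means different things depending on the mode.
    std::string onPress() const {
        switch (current) {
            case Tool::Select:        return "start selection";
            case Tool::DrawRectangle: return "start rectangle";
        }
        return "";
    }
};
```

Nothing in the raw event itself distinguishes the two interpretations; only the mode does, which is exactly why feedback about the current mode matters.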

One can immediately see a drawback of temporal modes: since they have no intrinsic feedback, users must remember which mode they are in to predict the outcome of their actions. While it seems trivial to remember that we just selected a tool, mode errors have proved to be one of the most frequent sources of user confusion in an application, and a major slowdown factor in using and learning it. Of course, this does not mean that we can avoid using temporal modes, just that temporal modes must compensate for this lack of intrinsic feedback.

An interactor is the application's representation of a temporal mode. It presents itself as a finite state automaton:

class Interactor {
public:
    // all member functions are virtual
    virtual void start(Context&);
    virtual void end(Context&);
    virtual void handleEvent(Event&, Context&);
    // possibly sub-methods such as:
    virtual void handleKey(Key&);
    virtual void handlePress(Button&);
    virtual void handleMove(Point&);
    virtual void handleRelease(Button&); // and so on...
protected:
    enum State { /* the list of existing states for the given interactor */ } _state;
};
When installed, the interactor takes over user input and transforms it into Actions. One of these Actions can be the termination of the given interaction, by installing a new interactor.
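As a concrete illustration of such a finite state automaton, here is a minimal, self-contained sketch of a hypothetical rubber-band selection interactor (the names RubberBand and Context are illustrative assumptions, not part of the text above):

```cpp
#include <string>

// Illustrative stand-in for the application context the interactor
// acts upon; a real context would carry the document, selection, etc.
struct Context { std::string lastAction; };

// A rubber-band selection interactor as a two-state automaton:
// Idle -> (press) -> Dragging -> (release) -> Idle.
class RubberBand {
    enum State { Idle, Dragging } _state = Idle;
public:
    void handlePress(Context& c)   { _state = Dragging; c.lastAction = "anchor set"; }
    void handleMove(Context& c)    { if (_state == Dragging) c.lastAction = "band resized"; }
    void handleRelease(Context& c) {
        if (_state == Dragging) { _state = Idle; c.lastAction = "selection made"; }
    }
};
```

On release, a real implementation would emit a selection Action and possibly reinstall itself or install a successor interactor, as described above.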

Of course, the model of interactors can be made recursive, defining submodes within a global mode. This is typically what happens when embedding spatial modes (such as selection handles) into a temporal mode.

A tool is an interactor handle. It is the element of the user interface that gives access to the given interactor (it allows starting the interactor's initial instance) and provides feedback for it.
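A minimal sketch of this handle role, under assumed names (ToolButton, installInteractor are hypothetical): the tool installs the interactor and reflects the active mode back to the user, here as a simple 'pressed' flag.

```cpp
#include <functional>

// A tool as an interactor handle: it starts the interactor's initial
// instance and provides feedback for the mode it represents.
struct ToolButton {
    bool pressed = false;                     // visual feedback for the mode
    std::function<void()> installInteractor;  // hook that installs the interactor

    void click() {
        pressed = true;
        if (installInteractor) installInteractor();
    }
    void interactorDone() { pressed = false; }  // feedback cleared when the mode ends
};
```

The feedback half of the contract is what mitigates the mode-error problem discussed earlier: the tool stays visibly 'pressed' for as long as its interactor is installed.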

Taxonomy of tools for direct manipulation

Direct manipulation, a term coined by Ben Shneiderman, describes the fundamental structure and properties that account for the superior usability of GUIs over other interaction paradigms such as command-line or purely menu-based and form-filling interfaces. A direct manipulation interface has to feature:
  1. Permanent representation of the objects of interest.
  2. Fast, incremental, reversible user interaction.
  3. Representation of the available actions on the objects of interest.
While it looks simple, this definition implies many requirements on the structure of the application. 1) means that one should use an object-oriented data structure of graphic objects that matches the data structure of the application, and keep the two in sync. 3) means a menu bar or another visual representation of the available actions. 2) means the requirement of a command-history handling architecture, plus some constraints on the design of commands and actions.

Essentially, 2) implies in current GUIs that commands must have one of the following structures:

[arguments] objects, verb; or: objects, verb [arguments].

Studies on command syntax have shown that, for direct manipulation, this order is preferable to [verb, arguments, objects] (as used in command-line interfaces) and [verb, objects, arguments] (as found in many CAD programs whose workflow was designed in the 80's). The location of the arguments remains problematic and depends on the representability of said arguments. Note that distinguishing a 'lead' selected object within the selection greatly simplifies the argument description problem.

All actions that match this syntax can therefore be implemented by constructing the interaction around the notion of a selection tool and a palette of actions that can be performed on the selection. This implies there is only a need for two types of tools: creation and selection. And indeed, creation tools are not even absolutely needed (Maya has gotten rid of most of them, for instance).
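The objects-then-verb structure can be sketched as follows (a minimal illustration with assumed names; Object, Selection and deleteVerb are not from the original text): the selection is built first, and a verb from the palette is then applied to whatever the selection holds.

```cpp
#include <string>
#include <vector>

// Objects-then-verb: the selection accumulates objects of interest...
struct Object { std::string name; bool deleted = false; };

struct Selection {
    std::vector<Object*> objects;
};

// ...and a palette verb operates on whichever selection it is given.
// Any verb with this shape works on any selection, which is what makes
// the selection tool + action palette decomposition sufficient.
inline void deleteVerb(Selection& s) {
    for (Object* o : s.objects) o->deleted = true;
}
```

Because verbs only see the selection, adding a new action to the palette never requires a new tool, which is the economy the paragraph above points at.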

Selection, however, can be quite complex: one may want to select objects, object components (which depend on the nature of the object: they can be points, characters, or gadget items), and object relationships (such as links and connections).

An alternative way to present the various selection tools is to consider that one has only one tool, which allows selection and the specification of 2D parameters (serving as arguments to commands that take points as arguments: move, resize...). View options enable one to view specific components of objects. Those view options drive a filter that allows selecting components versus objects.
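A minimal sketch of that filter, under assumed names (ViewOptions and pick are illustrative): the same pick resolves to a component or to the whole object depending on what the view currently displays.

```cpp
#include <string>

// View options drive the selection filter: what is visible is what
// is selectable.
struct ViewOptions { bool showPoints = false; };

// With point display off, a pick selects the whole object; with it on,
// the very same pick selects the point component under the cursor.
inline std::string pick(const ViewOptions& v) {
    return v.showPoints ? "point component" : "whole object";
}
```

This keeps a single selection tool while still reaching every component type, trading tool proliferation for display options.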

A tool can be transient (it reverts to another tool after completion of the interactor) or permanent (it reinstantiates the interactor after each completion). Better yet, one should provide a means (shift-clicking on a tool, for instance) to make transient tools permanent.
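The transient/permanent distinction amounts to one flag checked when the interactor completes; a minimal sketch, with hypothetical names (ToolId, ToolState):

```cpp
// Transient vs. permanent tools: after the interactor completes, a
// transient tool reverts to the default (selection) tool, while a
// permanent one stays current and would reinstantiate its interactor.
enum class ToolId { Select, DrawRect };

struct ToolState {
    ToolId current = ToolId::Select;
    bool permanent = false;   // e.g. set by shift-clicking the tool

    void activate(ToolId t, bool perm) { current = t; permanent = perm; }
    void interactionCompleted() {
        if (!permanent) current = ToolId::Select;  // transient: revert
        // permanent: keep 'current' and restart its interactor
    }
};
```

The shift-click convention mentioned above then simply toggles the `permanent` flag at activation time.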

Types of tools

From the above considerations, one can identify the need for the following tools in Studio:

- selection/edition tools
    object selection/edition
    component selection: one for each type of component. In Views, this means
        - point selection/edition
        - text selection/edition -> may enter a full text editor mode, remapping key shortcuts and so on;
            this requires thinking about providing very clear feedback.
    possibly other types of objects, such as:
        - guides
        - focus arrows
        - connections (in prototypes)
        - links (in the grapher).
        - other types of components (gadgets items...)
    these special types of objects can be seen as driven by view options.

- creation tools
    - rectangular objects (all gadgets)
    - lines
    - polylines
    - bezier and other splines

- navigation tools
    - zoom in/out
    - translation

- active mode

- misc tools, or articulated tools

    In complex CAD programs, there is a justification for such tools: gradient tools, extrusion... although most of the time, they can be nicely replaced with spatial modes and display options.

Menubars, Toolbars & Status bars

There is no rationale for the 'action bars' that provide access to 'save', 'undo' and other immediate actions that do not hold status information. They consume valuable screen real estate and are in fact slower to hit than menus or keyboard shortcuts. Yet we acknowledge that, since they have become commonplace, we can keep the existing common toolbars.