Summary and discussion of papers in the field of Sketch Recognition.

SUMMARY
In this paper the author presents a sketch-recognition-based tool for creating PowerPoint diagrams. The author also evaluates the prototype using several techniques and establishes design guidelines for creating sketch recognition user interfaces (SkRUIs). In addition, the author assesses how well several techniques from the iterative design of traditional user interfaces carry over to the development of SkRUIs.
The prototype lets the user draw naturally in a separate sketching window. The diagrams are recognized by the SketchREAD recognizer, and the recognized shapes are then imported into the PowerPoint slide. Recognition runs only when the user indicates that sketching is complete or when focus shifts away from the sketching window; the system cannot determine on its own that the user has finished and relies on this explicit signal. The system also supports editing operations such as move and delete. An explicit modal switch between edit and ink gestures confused users, who often forgot to change modes, so a hover-based edit mode was developed instead: hovering the pen over the drawn ink changes the cursor to indicate that the system is in edit mode. During formative evaluation, users expressed a desire to add annotation symbols without having them recognized, so a combo box was provided to indicate whether recognition was on.
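To make this interaction flow concrete, below is a minimal sketch of the behaviour described above: recognition deferred until an explicit signal or loss of focus, a hover-based edit mode, and a toggle for leaving annotations unrecognized. This is an assumed illustration, not the paper's implementation; all names (SketchPad, on_focus_lost, the recognizer interface) are hypothetical.

```python
# Minimal, hypothetical sketch of the interaction logic described above.
class SketchPad:
    def __init__(self, recognizer):
        self.recognizer = recognizer   # e.g. a SketchREAD-style recognizer object
        self.strokes = []              # raw ink collected so far
        self.recognition_on = True     # toggled by the "recognition" combo box
        self.mode = "ink"              # "ink" or "edit"

    def add_stroke(self, stroke):
        # Ink is only collected here; nothing is recognized mid-sketch.
        self.strokes.append(stroke)

    def on_hover(self, point):
        # Hovering the pen over existing ink switches to edit mode; the cursor
        # change is the user's only mode feedback, so no mode button is needed.
        self.mode = "edit" if self._hits_ink(point) else "ink"

    def _hits_ink(self, point):
        return any(stroke.contains(point) for stroke in self.strokes)

    def on_focus_lost(self):
        # The system cannot tell by itself that the user has finished, so
        # recognition runs only on this explicit signal (or a "done" action).
        return self.finish_sketch()

    def finish_sketch(self):
        if not self.recognition_on:
            return self.strokes        # annotations are kept as unrecognized ink
        return self.recognizer.recognize(self.strokes)  # shapes go to the slide
```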
The system was evaluated in a user study in which the participants, all graduate students, were asked to perform three prototypical diagram-creation tasks. Feedback collected after these tasks was used to establish several design guidelines:
(1) Recognition results should be displayed only after sketching is done.
(2) Provide an explicit indication of whether the system is in free-sketching or recognition mode.
(3) Multiple domains should be supported only when recognition is robust enough.
(4) Pen-based editing should be used, with sketching and editing gestures that are clearly distinguishable.
(5) Large buttons should be used in pen-based interfaces.
(6) The pen should respond in real time.
SUMMARY
Structural shape descriptions, whether provided explicitly by the user or generated automatically by the computer, are often over- or under-constrained. This paper describes a method for debugging over- and under-constrained shapes in LADDER descriptions using a novel active-learning technique that generates its own near-miss example shapes.
LADDER-based systems require the domain designer to provide shape descriptions. An intuitive way to provide a description would be to draw the shape and have the computer infer the description automatically; however, these descriptions are often imperfect because the computer cannot fully capture the user's intent. The authors developed a visual debugger that first asks the user to draw a positive example of the shape. The system then generates near-miss examples (each differing by a single added or removed constraint) for the user to classify as positive or negative. On the basis of these classifications it removes unintended constraints and adds required ones. To do this, the system first needs to generate near-miss examples: an initial set of constraints true of the positive example is captured and kept small and relevant using a set of heuristics. Each time a positive classification is encountered, the system removes from this list any constraint that is not true of that example. To handle under-constrained descriptions, the system determines a set of relevant constraints that are not in the description and, one by one, generates examples in which the negation of each constraint holds; if the user classifies such an example as negative, the constraint is added. In this way the shape description is incrementally refined.
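The loop below is a toy sketch of this near-miss procedure under a strong simplification: a generated example is represented only by the set of constraints it satisfies (negations written as not(...)), the over- and under-constrained cases are folded into a single test per constraint, and the constraint names, the refine function, and the user oracle are all invented for illustration rather than taken from the paper.

```python
def negate(constraint):
    return f"not({constraint})"

def refine(description, candidates, is_positive):
    """Refine a LADDER-style constraint set using one-constraint near misses.

    description  -- constraints currently in the (possibly imperfect) description
    candidates   -- heuristically pruned list of relevant constraints to test
    is_positive(example) -- the user's classification of a generated near miss
    """
    description = set(description)
    for c in sorted(set(candidates) | description):
        # Near miss: keep every other constraint but explicitly violate c.
        near_miss = (description - {c}) | {negate(c)}
        if is_positive(near_miss):
            description.discard(c)   # violation accepted: c was unintended
        else:
            description.add(c)       # violation rejected: c is required
    return description


# Example: the intended shape is an arrow whose head touches the end of the
# shaft and is shorter than it; the initial description wrongly requires a
# horizontal shaft (over-constrained) and omits the length constraint
# (under-constrained).
required = {"coincident(head, shaft.end)", "shorter(head, shaft)"}
initial  = {"coincident(head, shaft.end)", "horizontal(shaft)"}

# Assume a generated example satisfies whatever it does not explicitly negate.
oracle = lambda example: not any(negate(r) in example for r in required)

print(sorted(refine(initial, required, oracle)))
# -> ['coincident(head, shaft.end)', 'shorter(head, shaft)']
```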
DISCUSSION
Describing shapes by drawing them is very important from an HCI perspective, and this paper provides a method for enabling users to do so accurately. I was initially concerned about the size of the candidate constraint list, but the authors describe a way to prune it so that it includes only the relevant constraints.
The system also omits disjunctive constraints. A complex shape could easily require a Boolean combination of constraints rather than a single conjunction of individual constraints. For example, two shapes that are mirror images of each other and laterally asymmetric might need a disjunctive constraint to be covered by one description (see the sketch below).
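As a hypothetical illustration of this point (the constraint names are invented and this is not LADDER syntax): a stroke that curves one way at the top and the other way at the bottom, and its mirror image, cannot be captured by a single conjunction of primitive constraints, but an OR over two conjunctions covers both.

```python
def conjunction(constraints):
    # A conjunctive description: every constraint must hold.
    return lambda example: all(c in example for c in constraints)

def disjunction(*descriptions):
    # The missing piece discussed above: any one alternative may hold.
    return lambda example: any(d(example) for d in descriptions)

s_curve = conjunction({"curves_right(top)", "curves_left(bottom)"})
mirror  = conjunction({"curves_left(top)", "curves_right(bottom)"})
either  = disjunction(s_curve, mirror)

mirrored_example = {"curves_left(top)", "curves_right(bottom)"}
print(s_curve(mirrored_example))   # False: the single conjunction rejects it
print(either(mirrored_example))    # True: the disjunctive description accepts it
```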
Purely from a UI perspective, would it be better to present a group of shapes (say 10-15) for the user to classify at once, rather than presenting them one by one?