This page has been archived by the Computer Arts Society.

Gibson’s tool functions applied to the computer


The entirety of the interface, including the mouse, icons, tools and cursors, combines to form a visual means of manipulating the computer. McCullough looks back to MacPaint and MacDraw, early ancestors of Photoshop and Illustrator respectively, and notes that: “Here the graphical objects first developed grips and intrinsic operations, such as selecting, stretching, and replicating” [p23]. Of course, this was over twenty years later than Ivan Sutherland’s Sketchpad, which integrated similar features from the very start of real-time drawing and manipulation. The important point is that the objects and tools presented in computer graphics applications are represented by graphical attributes – icons and objects – that associate each tool with its function. Moreover, each tool has a range of options associated with it; in Photoshop the Lasso tool for selecting irregular areas alternates with a straight-edged version for geometric shapes. These “intrinsic operations” – tool functions suggested by their graphical attributes alone – call to mind J.J. Gibson’s thoughts on the role of tools in an environment.

Gibson saw tools as “a sort of an extension of the hand, almost an attachment to it or a part of the user’s own body”. As an attachment, the tool was no longer an external feature of its user’s environment. However, when not in use it resumed its status as a feature of the environment, though one with recognised “affordances” that marked it out as a potential tool. Gibson held that this capacity to be attached to the human body and extend its functions “suggests that the boundary between the animal and the environment is not fixed at the surface of the skin but can shift.”[1]

This concept of attaching and affording applies to computer tools in their role of extending, or even representing, the user in an environment they cannot physically touch. The sensation of using the mouse gives a basic and approximate physical aspect to this act. The physical point of interaction is the mouse, stylus or trackball; its representative on the screen is the cursor. Stephen Ellis argues that this relationship makes the cursor itself a kind of “self”, a literal extension of the artist located at a distance from them: “…the cursor is a kind of self in that it specifies the locus of immediate interaction with the user”.[2] This has important consequences for the use of real-time tools in Computer Art.

Gibson’s idea of “affordances” is also relevant to the computer’s role in art. Briefly, Gibson thought that each tool afforded certain possibilities of usage that humans recognised in its external form, and associated with its properties. The curious situation with a computer’s hardware interfaces is that their external form remains the same, but their properties and abilities vary greatly according to the program used. So the user’s concept of their “affordances” is determined partly by the program’s flexibility and partly by the user’s familiarity with it.

In physical reality, a tool can be used in many ways, yet we progress from general towards specialised tools, and multipurpose tools may have many attachments or configurations to change their application. Although in principle any tool’s uses are infinite, in practice it has a limited range of applications and situations: it is constructed with specific aims in mind. The artist, too, may turn any tool to an artistic end, but artistic implements usually serve specific aims and are made – or adapted – to fit them.

By contrast, with the GUI we have a standardised piece of physical hardware (the mouse) controlling a multiplicity of tool-functions. These may indicate their changing status through a variety of pointer icons, especially in drawing programs, where such icons are of vital importance in indicating which function is currently active.

McCullough sees digital “craft” stemming from the introduction of mouse-based systems such as the Apple Macintosh. He identifies the pointer as an essential component of this revolution because it allowed “direct manipulation” (a term coined by software designer Ben Shneiderman in 1983 – McC, p23) of the pixels onscreen, which led to more than simply dragging files around a desktop:

[...] the expression referred to the combination of three fundamental activities: (1) continuous visibility of the object of interest; (2) rapid, incremental, reversible, physical actions on the object; and (3) immediately visible results. The slogan “What you see is what you get” popularized the essence of this technical combination, but hand-eye coordination meant more than just visual fidelity. [p23]

The three activities listed are all essential to real-time interaction with a computer display, but most germane to any discussion of Computer Art is point (2). “Rapid”, “incremental” and “physical” actions were all impossible in the days when an image had to be programmed into the computer, and all are necessary to the direct interaction one requires when building an image on-screen. “Physical” actions are perhaps of most importance to the creation of Computer Art, since it is in the transmission of image-creating movements through the hand, via the mouse/stylus and into the computer that most digital artists create their work today.

[1] J.J. Gibson, The Ecological Approach to Visual Perception (1979), p41

[2] Stephen R. Ellis, Pictorial Communication in Virtual and Real Environments (1991), p9