We present an efficient and geometrically intuitive algorithm for reliably interpreting the image velocities of objects moving in 3D. It is well known that under weak perspective the image motion of points on a plane can be characterised by an affine transformation. We show that the relative image motion of a nearby non-coplanar point and its projection on the plane is equivalent to motion parallax and, because it is independent of viewer rotations, is a reliable geometric cue to 3D shape and viewer/object motion.

In particular, we show how to interpret the motion parallax vector of non-coplanar points (and contours) and the curl, divergence and deformation components of the affine transformation (defined by three points or a closed contour on the plane) in order to recover: the projection of the axis of rotation of a moving object; the change in relative position of the object; the rotation about the ray; the tilt of the surface; and a one-parameter family of solutions for the slant as a function of the magnitude of the object's rotation. The latter is a manifestation of the bas-relief ambiguity. Although these measurements represent an incomplete solution to structure from motion, they are the only subset of structure and motion parameters that can be reliably extracted from two views when perspective effects are small.

We present a real-time example in which the 3D visual interpretation of hand gestures or a hand-held object is used as part of a man-machine interface. This is an alternative to the Polhemus coil instrumented Dataglove commonly used in sensing manual gestures.
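The affine decomposition referred to above can be made concrete with a short numerical sketch. The following is not the paper's implementation, only a minimal illustration under standard assumptions: the inter-frame image motion of three or more coplanar points is fitted by a least-squares affine transformation, and the curl, divergence and deformation components are read off from the velocity-gradient part using the usual differential-invariant formulae. Function names (`affine_from_points`, `decompose`) are illustrative.

```python
import numpy as np

def affine_from_points(p, q):
    """Least-squares affine motion (A, t) mapping image points p to q,
    so that q ~ p @ A.T + t.  p and q are (N, 2) arrays with N >= 3."""
    n = p.shape[0]
    # Design matrix for the parameter vector [a11, a12, tx, a21, a22, ty].
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = p          # x' equations
    M[0::2, 2] = 1.0
    M[1::2, 3:5] = p          # y' equations
    M[1::2, 5] = 1.0
    params, *_ = np.linalg.lstsq(M, q.reshape(-1), rcond=None)
    A = np.array([[params[0], params[1]],
                  [params[3], params[4]]])
    t = np.array([params[2], params[5]])
    return A, t

def decompose(A):
    """Curl, divergence and deformation of the velocity-gradient part.

    For a unit time step the velocity gradient is B = A - I; the standard
    differential invariants of the 2D motion field are then:
      curl    = B21 - B12          (rotation in the image plane)
      div     = B11 + B22          (isotropic expansion)
      def_mag = |(B11 - B22, B12 + B21)|   (shear magnitude)
      def_axis = half the angle of that shear vector (axis of expansion)
    """
    B = A - np.eye(2)
    curl = B[1, 0] - B[0, 1]
    div = B[0, 0] + B[1, 1]
    def_mag = np.hypot(B[0, 0] - B[1, 1], B[0, 1] + B[1, 0])
    def_axis = 0.5 * np.arctan2(B[0, 1] + B[1, 0], B[0, 0] - B[1, 1])
    return curl, div, def_mag, def_axis
```

As a sanity check, a pure image-plane rotation by a small angle combined with a uniform scaling yields curl ≈ 2·s·sin θ, divergence 2(s·cos θ − 1) and zero deformation, which is the behaviour the decomposition should exhibit.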