In robotics, grasp affordances represent the different ways an object can be grasped, involving factors ranging from vision to hand control. To provide robots with advanced manipulation skills, a model of grasp affordances is needed that scales across different objects, features and domains. Existing frameworks, however, are difficult to extend towards a more general, domain-independent approach. This work is a first step towards a modular implementation of grasp affordances, separated into two stages: approach to grasp and grasp execution. In this study, human approach-to-grasp experiments are analysed, and object-independent patterns of motion are defined and modelled analytically from the data. Human subjects performed a specific action (hammering) using objects of different geometry, size and weight. Motion capture data describing the hand-object distance during the approach was used for the analysis. The results show that the approach to grasp can be structured into four distinct phases, each best represented by a non-linear model, independently of the object being handled. This suggests that approach-to-grasp patterns follow an intentionally planned control strategy rather than a reactive execution.
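The abstract does not include code; as a rough illustration of the kind of analysis it describes, the sketch below fits a non-linear (logistic) model to a hand-object approach-distance profile, compares it against a linear baseline, and segments the approach into four phases. Everything here is an assumption for illustration only: the data are synthetic, and the logistic model form and speed thresholds are hypothetical stand-ins for the models and phase boundaries derived in the study.

```python
"""Illustrative sketch (not the study's code): fit a non-linear model
to a hand-object approach-distance profile, compare it to a linear
baseline, and segment the approach into four assumed phases."""
import numpy as np
from scipy.optimize import curve_fit

# Synthetic approach profile: distance decays smoothly from ~40 cm
# to near zero as the hand closes in on the object.
t = np.linspace(0.0, 1.0, 200)                      # normalised time
rng = np.random.default_rng(0)
true_d = 40.0 / (1.0 + np.exp(10.0 * (t - 0.5)))    # sigmoidal decay
d = true_d + rng.normal(0.0, 0.5, t.size)           # add sensor noise

def sigmoid(t, a, k, t0):
    """Non-linear candidate: logistic decay of hand-object distance."""
    return a / (1.0 + np.exp(k * (t - t0)))

def linear(t, m, c):
    """Linear baseline for comparison."""
    return m * t + c

# Fit both candidates and compare residual error (RMSE): a markedly
# lower error for the non-linear fit is the kind of evidence the
# abstract refers to.
p_sig, _ = curve_fit(sigmoid, t, d, p0=(40.0, 10.0, 0.5))
p_lin, _ = curve_fit(linear, t, d)
rmse = lambda pred: np.sqrt(np.mean((d - pred) ** 2))
print(f"sigmoid RMSE: {rmse(sigmoid(t, *p_sig)):.3f} cm")
print(f"linear  RMSE: {rmse(linear(t, *p_lin)):.3f} cm")

# Assumed four-phase segmentation: boundaries where the fitted approach
# speed first rises above, peaks, and finally falls below a fraction of
# its maximum (onset, transport, slow-down, final adjustment).
speed = np.abs(np.gradient(sigmoid(t, *p_sig), t))
peak = speed.max()
rise = t[np.argmax(speed > 0.2 * peak)]      # onset of rapid transport
apex = t[np.argmax(speed)]                   # peak approach speed
fall = t[speed.size - 1 - np.argmax(speed[::-1] > 0.2 * peak)]
bounds = [t[0], rise, apex, fall, t[-1]]
for k in range(4):
    print(f"phase {k + 1}: t = {bounds[k]:.2f} .. {bounds[k + 1]:.2f}")
```

With real motion capture data, the fitting step would be applied per trial and per object, and object independence would be assessed by comparing the fitted parameters and phase boundaries across conditions.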