Object affordances have recently moved into the focus of computer vision research. Affordances describe how an object can be used by a specific agent. This additional information on the purpose of an object can be used to augment the classification process. With the approach proposed herein, we aim to bring affordances and object classification closer together by introducing fine-grained affordances. We present an algorithm that detects fine-grained sitting affordances in point clouds by iteratively transforming a human model into the scene. This enables us to distinguish object functionality at a finer granularity, thus more closely reflecting the different purposes of similar objects. For instance, traditional methods suggest that a stool, a chair, and an armchair all afford sitting. This also holds for our approach, but we additionally distinguish sitting without a backrest, with a backrest, and with armrests. This fine-grained affordance definition captures individual types of sitting and better reflects the purposes of different chairs. We experimentally evaluate our approach and provide fine-grained affordance annotations for a dataset from our lab.
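
To make the core idea concrete, the following is a minimal sketch, not the authors' implementation: a crude box-based human model is placed at candidate poses in a point cloud, and the presence of supporting geometry for the seat, backrest, and armrest regions determines the fine-grained sitting label. All function names, region offsets, box sizes, and thresholds are illustrative assumptions.

```python
# Hypothetical sketch of fine-grained sitting-affordance detection.
# All region offsets, box extents, and thresholds are invented for
# illustration and are not taken from the paper.
import numpy as np

def points_in_box(cloud, center, half_extents):
    """Count points of an (N, 3) cloud inside an axis-aligned box."""
    lo, hi = center - half_extents, center + half_extents
    return int(np.all((cloud >= lo) & (cloud <= hi), axis=1).sum())

def classify_sitting(cloud, seat_pos, min_support=50):
    """Return the fine-grained sitting label at one candidate pose, or None."""
    # Regions of a simplified seated human model, relative to the seat center.
    seat = points_in_box(cloud, seat_pos, np.array([0.20, 0.20, 0.05]))
    backrest = points_in_box(cloud, seat_pos + np.array([0.0, -0.25, 0.30]),
                             np.array([0.20, 0.05, 0.25]))
    armrests = sum(
        points_in_box(cloud, seat_pos + np.array([side * 0.28, 0.0, 0.15]),
                      np.array([0.05, 0.15, 0.05]))
        for side in (-1.0, 1.0))
    if seat < min_support:
        return None                       # no supporting surface: no sitting
    if backrest >= min_support and armrests >= min_support:
        return "sitting with armrests"
    if backrest >= min_support:
        return "sitting with backrest"
    return "sitting without backrest"

def detect_affordances(cloud, candidate_poses):
    """Iterate candidate seat poses and keep those that afford sitting."""
    hits = []
    for pos in candidate_poses:
        label = classify_sitting(cloud, np.asarray(pos, dtype=float))
        if label is not None:
            hits.append((pos, label))
    return hits
```

Note that the paper's formulation iteratively transforms (translates and rotates) a full human model into the scene; the sketch abstracts this to a loop over candidate seat positions for brevity.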