Multi-instance multi-label learning (MIML) is a new machine learning framework in which one data object is described by multiple instances and associated with multiple class labels. During the past few years, many MIML algorithms have been developed and many applications have been described; however, the learnability of MIML has received little theoretical exploration. In this paper, by proving a generalization bound for multi-instance single-label learners and viewing MIML as a number of multi-instance single-label learning subtasks linked by the correlation among the labels, we show that the MIML hypothesis class constructed from a multi-instance single-label hypothesis class is PAC-learnable.

Multi-instance multi-label learning (MIML) [1, 2] is a new machine learning framework. In contrast to traditional supervised learning, where one data object is represented by a single instance and associated with a single class label, in MIML one object is described by multiple instances and associated with multiple class labels (Figure 1). Such a framework is particularly useful for handling complicated data objects with multiple semantic meanings. For example, in image annotation, an image contains many patches, each of which can be represented by an instance, while the image can be assigned multiple annotation terms simultaneously; in text categorization, one document contains multiple sections, each of which can be represented by an instance, while the document can be classified into multiple categories simultaneously.

Formally, let $\mathcal{X}$ and $\mathcal{Y}$ denote the instance space and the set of class labels, respectively. The task of MIML is to learn a function $f : 2^{\mathcal{X}} \to 2^{\mathcal{Y}}$ from a given data set $\{(X_1, Y_1), (X_2, Y_2), \cdots, (X_m, Y_m)\}$, where $X_i \subseteq \mathcal{X}$ is a set of instances $\{x_{i1}, x_{i2}, \cdots, x_{i,n_i}\}$ and $Y_i$ is a set of labels $\{y_{i1}, y_{i2}, \cdots, y_{i,l}\}$. Here $n_i$ denotes the number of instances in $X_i$, $l$ denotes the number of candidate labels, and $y_{ik} \in \{-1, +1\}$ $(k = 1, 2, \cdots, l)$. $X_i$ is also called a bag (of instances), and we let $\mathcal{D}$ denote the distribution over the bags.
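To make the data representation concrete, the following is a minimal sketch (in Python, with hypothetical names such as `miml_to_misl`; the paper itself prescribes no code) of how an MIML data set of bags and label vectors might be stored, and of the decomposition of MIML into $l$ multi-instance single-label binary subtasks that underlies the paper's view of MIML:

```python
import numpy as np

# A bag X_i is a set of instances: an (n_i, d) array of feature vectors.
# Its label set Y_i is encoded as a vector in {-1, +1}^l, where
# y_ik = +1 iff the k-th candidate label is relevant to the bag.

d = 5   # instance feature dimension (illustrative)
l = 3   # number of candidate labels (illustrative)

rng = np.random.default_rng(0)

# A toy data set {(X_1, Y_1), ..., (X_m, Y_m)}; bags may differ in size n_i.
bags = [rng.normal(size=(n_i, d)) for n_i in (2, 4, 3)]
labels = np.array([[+1, -1, +1],
                   [-1, -1, +1],
                   [+1, +1, -1]])   # shape (m, l), entries in {-1, +1}

def miml_to_misl(bags, labels):
    """Decompose an MIML data set into l multi-instance single-label
    (binary) subtasks, one per candidate label: subtask k pairs every
    bag X_i with the single binary label y_ik."""
    m, num_labels = labels.shape
    return [[(bags[i], int(labels[i, k])) for i in range(m)]
            for k in range(num_labels)]

subtasks = miml_to_misl(bags, labels)
for k, task in enumerate(subtasks):
    print(f"subtask {k}: {len(task)} bags, labels = {[y for _, y in task]}")
```

Note that this decomposition is purely representational: the paper's analysis treats the $l$ subtasks jointly, exploiting the correlation among labels rather than learning each subtask in isolation.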