We consider a centralized caching network, where a server serves several groups of users, each group having a common shared homogeneous fixed-size cache and requesting arbitrary multiple files. An existing coded prefetching scheme is employed, where each file is broken into multiple fragments and each cache stores multiple coded packets, each formed by XORing fragments from different files. For such a system, we propose an efficient file delivery scheme with explicit constructions by the server to meet the arbitrary multi-requests of all user groups. Specifically, the stored coded packets of each cache are classified into four types based on the composition of the encoded file fragments. A delivery strategy is developed, which first separately delivers part of each packet type and then combinatorially delivers the remaining packets of different types in the last stage. The rate as well as the worst-case rate of the proposed delivery scheme are analyzed. We show that our caching model and delivery scheme can incorporate some existing coded caching schemes as special cases. Moreover, for the special case of uniform requests and uncoded prefetching, we make a comparison with existing results and show that our approach can achieve a lower delivery rate. We also provide numerical results on the delivery rate for the proposed scheme.

..., r = 1, 2, ..., K, which lie in the regime where each cache size is not greater than the total source-file size, i.e., 0 ≤ C ≤ N, with N denoting the total number of files. Later, [12] showed that the codes used in [11] can simply be replaced by XOR codes.
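The XOR-based coded prefetching described above can be illustrated with a minimal sketch. This is not the paper's exact construction; it only shows the basic mechanism, assuming a cache stores a single packet formed by XORing two equal-size fragments from different (hypothetical) files, so that delivering one fragment lets the user decode the other locally.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two same-size fragments from different files (hypothetical contents).
frag_A = b"\x01\x02\x03\x04"   # fragment of file A
frag_B = b"\x10\x20\x30\x40"   # fragment of file B

# Prefetching: the cache stores the XOR-coded packet, not the raw fragments.
cached = xor_bytes(frag_A, frag_B)

# Delivery: the server transmits frag_A; the user recovers frag_B by XORing.
decoded_B = xor_bytes(cached, frag_A)
assert decoded_B == frag_B
```

The point of the coded packet is that one cached unit serves requests for either file, at the cost of one transmitted fragment.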
It is shown that the rate-memory pair of [10] can be viewed as a special case of [11], and according to [12] the scheme in [11] can outperform that of [8] in the small cache-size regime where the total cache size of the network is less than the total source-file size, i.e., 0 ≤ C < N/K. Here [8] proposed an uncoded prefetching scheme for K cache-size points at C = tN/K, t = 1, 2, ..., K, over 0 ≤ C ≤ N based on [1], and it is shown to be optimal in the regime N/K ≤ C ≤ N. Since coded prefetching can achieve better performance in the small cache-size regime, [5] proposed a coded prefetching scheme for a cache-size point at C = (N−1)/K, and [14] proposed a coded prefetching scheme for N more cache-size points at C = N/(Kα), α = 1, 2, ..., N, over 0 ≤ C ≤ N/K. It is shown that [14] can include the coded prefetching of [10], [11] at C = 1/K and the uncoded prefetching of [8] at C = N/K as special cases, and can further improve coded-prefetching performance over this small cache-size regime [11], [15]. However, all the aforementioned coded prefetching schemes are only applicable to the one-user-per-cache network, where each user can only make a single request. To the best of our knowledge, there are few works considering coded prefetching for multiple requests. Note that [16], [17] investigated single-layer coded caching with multiple requests and [18], [19] investigated hierarchical coded caching with multiple requests, all of which address uniform requests for uncoded...
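The family of cache-size points attributed to [14] above can be checked numerically. A small sketch, assuming hypothetical values of N and K, confirms that C = N/(Kα) for α = 1, 2, ..., N spans from the uncoded-prefetching point C = N/K (α = 1) down to the coded-prefetching point C = 1/K (α = N), as stated in the text:

```python
from fractions import Fraction

N, K = 4, 3  # hypothetical numbers of files and caches

# Cache-size points of the scheme in [14]: C = N/(K*alpha), alpha = 1..N.
points = [Fraction(N, K * alpha) for alpha in range(1, N + 1)]

assert points[0] == Fraction(N, K)    # alpha = 1: the point of [8]
assert points[-1] == Fraction(1, K)   # alpha = N: the point of [10], [11]
assert all(p <= Fraction(N, K) for p in points)  # all lie in 0 ≤ C ≤ N/K
```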