Monitoring the evolution of the state of a network is essential to ensure that the applications it supports provide the required quality of service. The first step in a network-monitoring system consists of capturing packets; that is, packets arrive at the system through a network interface card and are placed into system memory. In this first stage, usually with the involvement of the operating system, packets are processed and transferred from the capture buffer to higher-layer processing, for instance, to be analyzed in the next stage of the system. In this work, we focus on the capture stage, and in particular on a Linux packet-capturing system, which we model as a single-server queue. Taking into account that the server may be in charge not only of the capture process but also of other tasks, we consider a queue with vacations, i.e., periods during which the capture process cannot be carried out. We also assume that the queue has a finite buffer. We consider three different models and present a rigorous analysis of the Markov chain derived for each of them, providing standard performance metrics in all cases. Finally, we evaluate the performance of these models on a real packet-capture probe.
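To make the modeling setup concrete, the sketch below analyzes one illustrative instance of a finite-buffer single-server queue with server vacations: an M/M/1/K queue with multiple exponential vacations. The Poisson arrivals, exponential service and vacation times, and the multiple-vacation discipline are assumptions for this example only; the abstract does not specify the distributions or vacation policies of the three models studied in the paper. The function name and parameters are likewise hypothetical. The code builds the generator of the continuous-time Markov chain, solves for its stationary distribution, and reports two standard performance metrics: the packet-loss probability and the mean buffer occupancy.

```python
import numpy as np

def mm1k_multiple_vacations(lam, mu, theta, K):
    """Stationary analysis of an M/M/1/K queue with multiple server
    vacations (illustrative model; distributions are assumptions, not
    taken from the paper).

    States are (n, phase): n packets in the buffer (0..K), phase 0 =
    server on vacation, phase 1 = server busy. State (0, busy) is
    unreachable: the server starts a new vacation whenever the buffer
    empties.
    """
    # Index map: vacation states (n, 0) for n = 0..K,
    # busy states (n, 1) for n = 1..K.
    idx = {(n, 0): n for n in range(K + 1)}
    idx.update({(n, 1): K + n for n in range(1, K + 1)})
    S = len(idx)

    Q = np.zeros((S, S))
    for n in range(K + 1):
        v = idx[(n, 0)]
        if n < K:                      # arrival during a vacation
            Q[v, idx[(n + 1, 0)]] += lam
        if n >= 1:                     # vacation ends, service resumes
            Q[v, idx[(n, 1)]] += theta
    for n in range(1, K + 1):
        b = idx[(n, 1)]
        if n < K:                      # arrival while serving
            Q[b, idx[(n + 1, 1)]] += lam
        if n >= 2:                     # service completion
            Q[b, idx[(n - 1, 1)]] += mu
        else:                          # buffer empties -> new vacation
            Q[b, idx[(0, 0)]] += mu
    np.fill_diagonal(Q, -Q.sum(axis=1))

    # Solve pi Q = 0 subject to sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(S)])
    rhs = np.zeros(S + 1)
    rhs[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, rhs, rcond=None)

    # By PASTA, an arriving packet is lost iff the buffer is full.
    loss = pi[idx[(K, 0)]] + pi[idx[(K, 1)]]
    mean_n = sum(n * pi[i] for (n, _), i in idx.items())
    return loss, mean_n

# Example: load 0.8, buffer of 32 packets, vacations ten times
# shorter than services on average (all values illustrative).
loss, mean_n = mm1k_multiple_vacations(lam=0.8, mu=1.0, theta=10.0, K=32)
print(f"packet-loss probability: {loss:.3e}, mean occupancy: {mean_n:.2f}")
```

The same construction (enumerate the state space, assemble the generator, solve the balance equations) carries over to other vacation disciplines by changing only the transition rules, which is the sense in which the three models of the paper can share a common analysis framework.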