Federated Learning (FL) is a distributed machine learning paradigm that enables multiple data holders to collaboratively build machine learning models while preserving the privacy of their data. FL can be categorized as horizontal or vertical, depending on how the data are distributed: horizontal FL uses data partitioned in the sample space, whereas vertical FL uses data partitioned in the feature space. Traditional vertical FL methods aim to facilitate collaboration among clients to infer a single global target. However, these methods can be impractical because each client often has its own target to infer. In this paper, we propose a novel vertical FL method, called personalized vertical federated learning (pvFed), which addresses this limitation by allowing each client to perform inference specific to its individual task. To the best of our knowledge, no existing method addresses this limitation. The objective of pvFed is to construct a global model that generates representation vectors to support client inference. The global model, constructed using distillation and dimensionality reduction, takes a sample ID common to all clients as input and outputs a sample-specific representation vector. For inference, each client combines the intermediate representation of its own model with the representation vectors output by the global model. Because these vectors do not depend on client-specific tasks, clients can repurpose them for any additional tasks. Our experiments on two distinct data types (image and tabular data sets), conducted under a vertical partitioning in which each client had its own specific task, demonstrated the efficacy of the vectors generated by the global model in pvFed for client inference.
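
To make the described inference pattern concrete, the following Python (PyTorch) sketch illustrates one possible realization: a global module maps a shared sample ID to a representation vector, and a client fuses that vector with the intermediate representation of its own local model to produce a prediction for its own task. This is a minimal sketch, not the paper's implementation; the module names, dimensions, the embedding-table realization of the ID-to-vector map, and the concatenation-based fusion are all assumptions, and the distillation and dimensionality-reduction steps used to construct the global model are not shown.

```python
# Illustrative sketch only; names, sizes, and fusion strategy are assumptions.
import torch
import torch.nn as nn


class GlobalRepresentation(nn.Module):
    """Maps a sample ID shared by all clients to a sample-specific representation vector."""

    def __init__(self, num_samples: int, rep_dim: int):
        super().__init__()
        # An embedding table is one simple way to realize an ID -> vector map.
        self.table = nn.Embedding(num_samples, rep_dim)

    def forward(self, sample_ids: torch.Tensor) -> torch.Tensor:
        return self.table(sample_ids)


class ClientModel(nn.Module):
    """A client's local model: an encoder over its own feature slice plus a
    task head that also consumes the task-agnostic global representation."""

    def __init__(self, in_dim: int, hid_dim: int, rep_dim: int, num_classes: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        # The head fuses the client's intermediate representation with the
        # global representation vector (fusion by concatenation is an assumption).
        self.head = nn.Linear(hid_dim + rep_dim, num_classes)

    def forward(self, x: torch.Tensor, global_rep: torch.Tensor) -> torch.Tensor:
        h = self.encoder(x)                      # client-side intermediate representation
        z = torch.cat([h, global_rep], dim=-1)   # combine with the global vector
        return self.head(z)                      # prediction for this client's own task


if __name__ == "__main__":
    num_samples, feat_dim = 100, 16
    global_model = GlobalRepresentation(num_samples, rep_dim=8)
    client = ClientModel(in_dim=feat_dim, hid_dim=32, rep_dim=8, num_classes=3)

    ids = torch.tensor([0, 1, 2])           # sample IDs common to all clients
    x = torch.randn(3, feat_dim)            # this client's vertical feature slice
    logits = client(x, global_model(ids))   # inference for the client's own task
    print(logits.shape)                     # torch.Size([3, 3])
```

Because the global representation vectors are not tied to any particular client task, a client could attach a different head to the same fused representation to serve an additional task, which is the repurposing property noted above.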