Vehicular Edge Computing (VEC) technology holds great promise, but it also poses significant challenges due to the limited computing power of in-vehicle devices and the limited capacity of Roadside Units (RSUs). At the same time, the high mobility of vehicles and the frequent changes in the content they request make it critical to offload applications to edge servers and to effectively predict the most popular content, so that it can be cached in the RSUs in advance. Considering the need to protect the privacy of vehicular users (VUs), traditional data-sharing methods may be unsuitable for this task, so we adopt an asynchronous Federated Learning (FL) approach that updates the global model in a timely manner while protecting the personal privacy of VUs. Unlike traditional synchronous FL, asynchronous FL no longer waits for all vehicles to finish training and uploading their local models before updating the global model, which avoids long training times. In this paper, we propose a vehicular edge computing caching scheme based on asynchronous federated learning and deep reinforcement learning (AFLR), which prefetches likely popular content and caches it at edge nodes or vehicle nodes according to each vehicle's location and direction of movement, thereby reducing content-request latency. Extensive experimental comparisons show that the AFLR scheme outperforms other benchmark caching schemes.

INDEX TERMS caching, asynchronous federated learning, vehicular edge computing, deep reinforcement learning.
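The asynchronous update rule mentioned above (the server refreshes the global model as soon as any one vehicle uploads, instead of waiting for all of them) can be sketched as a staleness-weighted running average. The function name, the mixing rule, and the staleness decay below are illustrative assumptions for intuition, not the paper's exact aggregation algorithm:

```python
import numpy as np

def async_update(global_w, local_w, staleness, base_lr=0.5):
    """Illustrative asynchronous FL aggregation: mix one client's local
    weights into the global model immediately, down-weighting stale updates
    (those trained against an older version of the global model)."""
    alpha = base_lr / (1.0 + staleness)  # assumed decay: older update, smaller weight
    return (1.0 - alpha) * global_w + alpha * local_w

# Toy demo: three vehicles finish local training at different times;
# the server applies each upload on arrival and never blocks on stragglers.
global_w = np.zeros(4)
arrivals = [  # (local weights, staleness measured in global versions missed)
    (np.full(4, 1.0), 0),
    (np.full(4, 2.0), 1),
    (np.full(4, 0.5), 3),
]
for local_w, staleness in arrivals:
    global_w = async_update(global_w, local_w, staleness)
```

Each arrival updates the global model independently, so a slow vehicle delays only its own contribution rather than the whole training round, which is the latency advantage the abstract attributes to asynchronous FL.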