Query Auto-Completion (QAC) is a widely used feature in many domains, including web and eCommerce search. This feature suggests full queries based on a prefix of a few characters typed by the user. QAC has been extensively studied in the literature in recent years, and it has been consistently shown that adding personalization features can significantly improve the performance of the QAC model. In this work we propose a novel method for personalized QAC that uses lightweight embeddings learnt through fastText [2, 14]. We construct an embedding for the user context queries, which are the last few queries issued by the user. We also use the same model to get the embedding for the candidate queries to be ranked. We introduce ranking features that compute the distance between the candidate queries and the context queries in the embedding space. These features are then combined with other commonly used QAC ranking features to learn a ranking model using the state-of-the-art LambdaMART ranker [3]. We apply our method to a large eCommerce search engine (eBay) and show that the ranker with our proposed features significantly outperforms the baselines on all of the offline metrics measured, which include Mean Reciprocal Rank (MRR), Success Rate (SR), Mean Average Precision (MAP), and Normalized Discounted Cumulative Gain (NDCG). Our baselines include the Most Popular Completion (MPC) model, which is a commonly used baseline in the QAC literature, as well as a ranking model without our proposed features. The ranking model with the proposed features results in a 20-30% improvement over the MPC model on all metrics. We obtain up to a 5% improvement over the baseline ranking model across all sessions, which rises to about 10% when we restrict to sessions that contain the user context. Moreover, our proposed features also significantly outperform text-based personalization features studied previously in the literature, and adding text-based features on top of our proposed embedding-based features yields only minor improvements.
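The sketch below illustrates, under stated assumptions, the kind of embedding-based personalization feature described above: each candidate completion and each of the user's recent context queries is embedded with a fastText model, and cosine similarities between them become ranking features that could be fed to a ranker such as LambdaMART. This is not the authors' implementation; the model path, feature names, and aggregation choices (max, mean, most-recent) are illustrative assumptions.

import numpy as np
import fasttext

# Assumption: a fastText model trained on historical query logs is available on disk.
model = fasttext.load_model("query_fasttext.bin")

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity, guarding against zero vectors.
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def context_similarity_features(candidate: str, context_queries: list[str]) -> dict:
    # Embed the candidate completion and each recent context query, then
    # aggregate pairwise similarities into a small set of ranking features.
    cand_vec = model.get_sentence_vector(candidate)
    sims = [cosine(cand_vec, model.get_sentence_vector(q)) for q in context_queries]
    if not sims:
        return {"ctx_sim_max": 0.0, "ctx_sim_mean": 0.0, "ctx_sim_last": 0.0}
    return {
        "ctx_sim_max": max(sims),
        "ctx_sim_mean": float(np.mean(sims)),
        "ctx_sim_last": sims[-1],  # similarity to the most recent query
    }

# Example: score candidates for the prefix "ip" against a user's recent queries.
# In practice these features would be combined with standard QAC features
# (e.g. historical popularity) inside a learning-to-rank model.
context = ["iphone 13 case", "apple charger"]
candidates = ["ipad mini", "iphone 13 pro max", "ipod classic"]
ranked = sorted(candidates,
                key=lambda c: context_similarity_features(c, context)["ctx_sim_max"],
                reverse=True)
print(ranked)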