Implicit motives, non-conscious needs that influence individuals' behavior and shape their emotions, have been part of personality research for nearly a century, yet are distinct from personality traits. Assessing implicit motives is highly resource-intensive, requiring experts to code individuals' written stories about ambiguous pictures, which has hampered implicit motive research. Using large language models and machine learning techniques, we aimed to create high-quality implicit motive models that are easy for researchers to use. We trained models to code the needs for power, achievement, and affiliation on 85,028 sentences. The cross-validated person-level predictions converged strongly with the human codings in the training set (ICC = .83, .86, and .88 for achievement, power, and affiliation, respectively) and generalized well to the test set (ICC = .85, .87, and .89, respectively). We demonstrated causal validity by replicating two classic experimental studies that aroused implicit motives. Finally, we had three new coders re-code sentences on which our implicit motive models and the original coders strongly disagreed; the new coders agreed with our models in 83% of cases (p < .001, φ = .67). Models of this quality can be used to complement, or substitute for, human coders. We provide a free, user-friendly framework in the established R package text, together with a tutorial for applying the models to new data, which reduces coding time by over 99% and eliminates the cognitive effort of manual coding. We hope this automated coding will facilitate a renaissance in implicit motive research.
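As a brief, hedged sketch of the intended workflow, applying a pretrained motive model with the text package might look like the following. The textPredict() and setup functions belong to the package's API, but the model identifier shown is a hypothetical placeholder, and exact argument names may differ between package versions.

# Minimal sketch, assuming the text package (https://r-text.org) and its
# Python backend are installed; the model identifier below is a
# HYPOTHETICAL placeholder, not a confirmed model name.
library(text)

# One-time setup of the underlying Python/transformers environment:
# textrpp_install()
# textrpp_initialize()

# Example picture-story sentences to be coded.
stories <- c(
  "She rehearsed all week so she could outperform everyone on stage.",
  "He just wanted to sit down and talk with his old friends again."
)

# Apply a pretrained implicit motive model: textPredict() can fetch a
# hosted model by name and return predicted motive scores per text.
power_scores <- textPredict(
  model_info = "implicit_power_model_v1",  # hypothetical placeholder name
  texts = stories
)

power_scores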