In this study, we apply transition density function expansion methods, inspired by Yang et al. (J Econom. 2019;209(2):256–288), to stochastic control problems arising in utility maximization, without restricting the class of asset price models or utility functions. Using Bellman's dynamic programming principle, we first recast the conditional expectation in terms of the transition density function of the underlying diffusion process. We then apply Itô‐Taylor and Delta expansion techniques to the transition density of the multivariate diffusion process, facilitated by a quasi‐Lamperti transformation, to derive explicit recursive expressions for the expansion coefficient functions. Our main contribution is a set of detailed algorithms built on backward recursive formulations of the value function and optimal strategies via discretization, together with a rigorous proof of expansion convergence in portfolio optimization. Both theoretical and numerical results validate the convergence of these approximation techniques for stochastic control problems. To demonstrate the efficiency and accuracy of the proposed methods, we apply them to portfolio selection problems under several benchmark models and highlight their reduced computational complexity relative to existing approaches.