Recent years have witnessed the great success of embedding-based methods in recommender systems. Despite their decent performance, we argue that these methods suffer from a potential limitation: the embedding magnitude is not explicitly modulated, which may aggravate popularity bias and training instability, hindering the model from making good recommendations. This motivates us to leverage embedding normalization in recommendation. By normalizing user/item embeddings to a specific value, we empirically observe impressive performance gains (9% on average) on four real-world datasets. Although encouraging, we also reveal a serious limitation when applying normalization in recommendation: the performance is highly sensitive to the choice of the temperature τ, which controls the scale of the normalized embeddings. To fully foster the merits of normalization while circumventing its limitation, this work studies how to adaptively set a proper τ. Towards this end, we first conduct a comprehensive analysis of τ to fully understand its role in recommendation. We then accordingly develop an adaptive fine-grained temperature strategy, Adap-τ, which satisfies four desirable properties: adaptivity, personalization, efficiency, and model-agnosticism. Extensive experiments have been conducted to validate the effectiveness of the proposal.
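For concreteness, a minimal sketch of embedding normalization with a temperature is given below. The cosine-similarity-with-temperature form, the function name `normalized_score`, and the default `tau=0.1` are illustrative assumptions rather than the paper's exact formulation: both embeddings are projected onto the unit sphere, fixing their magnitude, and the resulting cosine similarity is rescaled by 1/τ, so a smaller τ stretches the score range.

```python
import numpy as np

def normalized_score(user_emb: np.ndarray, item_emb: np.ndarray, tau: float = 0.1) -> np.ndarray:
    """Score user-item pairs with L2-normalized embeddings rescaled by 1/tau.

    Assumed illustrative form: cos(u, i) / tau. Normalizing removes the
    magnitude component of the raw dot product; tau sets the score scale.
    """
    u = user_emb / np.linalg.norm(user_emb, axis=-1, keepdims=True)
    v = item_emb / np.linalg.norm(item_emb, axis=-1, keepdims=True)
    return (u * v).sum(axis=-1) / tau

# Usage: scores for a batch of 4 user-item pairs with 64-dim embeddings.
rng = np.random.default_rng(0)
users, items = rng.normal(size=(4, 64)), rng.normal(size=(4, 64))
print(normalized_score(users, items))           # tau = 0.1
print(normalized_score(users, items, tau=1.0))  # same pairs, 10x smaller scores
```

Because the cosine term is bounded in [-1, 1], the choice of τ alone determines the effective score range, which is consistent with the sensitivity to τ noted in the abstract.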