Background
Skin tone and pigmented regions, associated with melanin and hemoglobin, are critical indicators of skin condition. While most prior research focuses on pigment analysis, the capability to simulate diverse pigmentation conditions could greatly broaden the range of applications. However, current methodologies offer limited numerical control and versatility.

Methods
We introduce a hybrid technique that integrates optical methods with deep learning to produce skin-tone- and pigmented-region-modified images under numerical control. A pigment discrimination model produces melanin, hemoglobin, and shading maps from skin images. These outputs are reconstructed into skin images using a forward problem-solving approach, and the model is trained to minimize the discrepancy between the reconstructed and input images. By adjusting the melanin and hemoglobin maps, we create pigment-modified images with precise control over changes in melanin and hemoglobin levels. Changes in pigmentation are quantified using the individual typology angle (ITA) for skin tone and the melanin and erythema indices for pigmented regions, validating the intended modifications.

Results
The pigment discrimination model achieved correlation coefficients with clinical equipment of 0.915 for melanin and 0.931 for hemoglobin. The alterations in the melanin and hemoglobin maps are proportionally correlated with the ITA and pigment indices in both quantitative and qualitative assessments. Additionally, regions where melanin and hemoglobin overlap are examined to verify that each pigment can be adjusted independently.

Conclusion
The proposed method generates modified images of skin tone and pigmented regions. Potential applications include visualizing alterations for clinical assessments, simulating the effects of skincare products, and generating datasets for deep learning.
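For reference, the ITA used above to quantify skin tone follows the standard CIELAB formulation, ITA = arctan((L* − 50)/b*) × 180/π. A minimal sketch (the function name is illustrative, not from the paper; b* is assumed nonzero, as is typical for skin measurements):

```python
import math

def individual_typology_angle(L_star: float, b_star: float) -> float:
    """Compute the individual typology angle (ITA) in degrees.

    L_star, b_star: CIELAB lightness and yellow-blue coordinates of the
    measured skin region (b_star assumed nonzero). Higher ITA values
    correspond to lighter skin tones.
    """
    return math.degrees(math.atan((L_star - 50.0) / b_star))

# Example: a light skin measurement with L* = 70, b* = 15
# yields ITA = arctan(20/15) * 180/pi ~ 53.13 degrees.
ita = individual_typology_angle(70.0, 15.0)
print(round(ita, 2))
```

A shift in the melanin map that darkens the rendered skin would lower L* and hence lower the computed ITA, which is the proportional relationship the Results section reports.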