Digital twins, computational representations of individuals or systems, offer promising applications in the intensive care unit (ICU) by enhancing decision-making and reducing clinician cognitive load. We developed digital twins using a large language model (LLM), LLaMA-3, fine-tuned with Low-Rank Adaptation (LoRA) on physician notes from different ICU specialties in the MIMIC-III dataset. We hypothesized that specialty-specific training improves treatment-recommendation accuracy relative to training on notes from other ICU specialties. We also evaluated a zero-shot baseline that relied solely on contextual instructions, with no fine-tuning. Discharge summaries were analyzed, and medication mentions were masked to create training and test datasets. A medical ICU dataset of 1,000 notes was used for evaluation, with performance measured by BERTScore and ROUGE-L. Models trained on medical ICU notes achieved the highest BERTScore (0.842), outperforming models trained on other specialties or on mixed datasets, while the zero-shot baseline performed worst. These results underscore the value of context-specific training for digital twins and offer foundational insights into LLM-based personalized clinical decision support.
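The abstract does not specify how medications were masked; one plausible approach, sketched below under the assumption of a known drug lexicon, replaces each medication mention with a placeholder token so the model must recover the treatment from the surrounding clinical context. The lexicon and function names here are illustrative, not the study's actual implementation.

```python
import re

# Hypothetical drug lexicon; the study's actual medication list is not given.
DRUGS = {"metoprolol", "heparin", "furosemide"}

def mask_medications(note: str, drugs=DRUGS) -> str:
    # Build one alternation pattern over the lexicon and replace every
    # case-insensitive whole-word match with a [MASKED] placeholder.
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, sorted(drugs))) + r")\b",
        re.IGNORECASE,
    )
    return pattern.sub("[MASKED]", note)
```

For example, `mask_medications("Started heparin drip overnight")` yields `"Started [MASKED] drip overnight"`, producing a masked-note/original-note pair suitable for training and testing.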
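Of the two reported metrics, ROUGE-L scores a generated recommendation against the reference text by the length of their longest common subsequence (LCS). A minimal self-contained sketch of the F-measure variant (the study itself would presumably use a standard package; whitespace tokenization here is a simplification):

```python
def lcs_length(a, b):
    # Standard dynamic program: dp[i][j] holds the LCS length of a[:i], b[:j].
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l_f1(candidate: str, reference: str) -> float:
    # Token-level LCS precision/recall combined into an F1 score.
    cand, ref = candidate.split(), reference.split()
    lcs = lcs_length(cand, ref)
    if lcs == 0:
        return 0.0
    p, r = lcs / len(cand), lcs / len(ref)
    return 2 * p * r / (p + r)
```

For instance, `rouge_l_f1("start metoprolol 25 mg", "start metoprolol 50 mg")` returns 0.75, since three of the four tokens form a common subsequence.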