The increasing adoption of large language models (LLMs) in healthcare presents both opportunities and challenges. While LLM-powered applications are being used for a range of medical tasks, concerns persist about their accuracy and reliability, particularly when they have not been trained on medical data. Deploying open-source models without domain-specific fine-tuning can lead to inaccurate or potentially harmful advice, underscoring the need for domain adaptation. This study addresses these issues by developing PharmaLLM, a fine-tuned version of the open-source Llama 2 model designed to provide accurate medicine prescription information. PharmaLLM incorporates a multi-modal input/output mechanism supporting both text and speech to enhance accessibility. Fine-tuning used LoRA (Low-Rank Adaptation) with a rank of 16 for parameter efficiency, a learning rate of 2e-4 for stable updates, and a batch size of 12 to balance computational efficiency and learning effectiveness. The system demonstrated strong performance, achieving 87% accuracy, a 92.16% F1 score, 94% sensitivity, 66% specificity, and 90% precision. A usability study with 33 participants evaluated the system using the Chatbot Usability Questionnaire, focusing on error handling, response generation, navigation, and personality; participants found the system easy to navigate and its responses useful and relevant. PharmaLLM aims to improve patient-physician interaction, particularly in areas with limited healthcare resources and low literacy rates. This research contributes to medical informatics by offering a reliable, accessible web-based tool for both patients and healthcare providers.
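The abstract does not include the training code; the following is a minimal sketch of how a LoRA fine-tune with the reported hyperparameters (rank 16, learning rate 2e-4, batch size 12) could be set up with the Hugging Face transformers and peft libraries. The base checkpoint, dataset file, target modules, LoRA alpha/dropout, and epoch count are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): parameter-efficient LoRA fine-tuning of
# Llama 2 with rank 16, learning rate 2e-4, and batch size 12, as reported above.
# Base model ID, dataset path, and target modules are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, TaskType, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"          # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token        # Llama 2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA adapter: rank 16 as reported; alpha, dropout, and target modules assumed.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()               # only the low-rank adapters train

# Hypothetical question-answer corpus of medicine prescription information.
dataset = load_dataset("json", data_files="pharma_qa.json")["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=dataset.column_names,
)

args = TrainingArguments(
    output_dir="pharmallm-lora",
    learning_rate=2e-4,                          # as reported
    per_device_train_batch_size=12,              # as reported
    num_train_epochs=3,                          # assumed
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("pharmallm-lora")          # saves only the adapter weights
```

Because only the rank-16 adapter matrices are trained, the update footprint stays small relative to full fine-tuning, which is what makes the reported configuration feasible on modest hardware.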