The landscape of human-machine interaction is being transformed by the integration of conversational technologies. Across many domains, chatbots based on Large Language Models (LLMs) are progressively taking on roles traditionally handled by human agents, such as executing tasks, answering queries, offering guidance, and delivering social and emotional support. Consequently, enhancing user satisfaction with these technologies is crucial for their effective adoption. Emotions play a substantial role in shaping the responses generated by reinforcement-learning-based chatbots. In text-based prompts, emotions can be signaled through visual cues (emojis, emoticons) and linguistic cues (misspellings, tone of voice, word choice, sentence length, similes). Researchers are therefore harnessing Artificial Intelligence (AI) and Natural Language Processing (NLP) techniques to imbue chatbots with emotional intelligence capabilities. This research explores the impact of feeding contradictory emotional cues to LLMs through different prompting techniques, evaluating the stated instructions against the emotional signals provided. Each prompting technique is scrutinized by inducing a variety of emotions in two widely used LLMs, ChatGPT 3.5 and Gemini. Rather than automating the prompting process, the prompts are composed manually under cognitive load to better reflect realistic Human-Computer Interaction (HCI). The responses are evaluated using human-provided qualitative insights. The results indicate that simile-based cues have the highest impact on both ChatGPT and Gemini; the results also show that Gemini is more sensitive to emotional cues. The findings of this research can benefit multiple fields, including HCI, AI development, NLP, prompt engineering, psychology, and emotion analysis.