This paper presents the Overall Prompting Effectiveness (OPE) framework, designed to enhance interactions with systems employing generative AI (genAI) through prompt engineering. Leveraging the Overall Equipment Effectiveness (OEE) principle from Total Productive Maintenance (TPM), the OPE framework provides a systematic tool for the iterative refinement of prompts, which is crucial for optimising human-AI interactions in cyber-physical-social-human systems. By evaluating prompting outcomes in three actionable categories, Availability, Performance, and Quality, OPE aims to elicit more accurate, relevant, and effective AI responses. Preliminary applications and workshops with prompt engineers underscore the potential of OPE to significantly improve the efficiency and effectiveness of AI engagements. Integrating OPE with Large Language Models (LLMs) or other genAI systems promises to expand automation capabilities, enhancing the intellectualisation and personalisation of cyber-physical systems (CPS). The framework fosters continuous improvement in prompt quality and contributes to the synthetic knowledge management and smart reasoning mechanisms essential for advanced CPS. Our findings suggest that OPE could play a significant role in cognitive engineering, offering a novel methodology for enhancing decision-making and problem-solving in intelligent systems. This aligns with current research directions in CPS, which emphasise the need for tools that support the socialisation and personalisation of technology in complex integrated environments.
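As a point of reference, classical OEE in TPM is computed as the product of its three factors. The sketch below assumes, purely for illustration, that OPE aggregates its Availability, Performance, and Quality scores in the same multiplicative way; the paper itself does not prescribe this aggregation here.

\[
\mathrm{OPE} = \mathrm{Availability} \times \mathrm{Performance} \times \mathrm{Quality}
\]

Under this assumed form, each factor would be normalised to the interval \([0,1]\), so a shortfall in any one category proportionally reduces the overall effectiveness score.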