The recent release of ChatGPT has attracted enormous attention and discussion worldwide, with responsible AI emerging as a crucial topic. One key question is how AI systems like ChatGPT can be developed and adopted responsibly. To tackle responsible AI challenges, governments, organisations, and companies have released various sets of ethical principles. However, these principles are highly abstract and offer little practical guidance. Moreover, significant effort has been directed at algorithm-level solutions that address only a narrow set of principles, such as fairness and privacy. To fill this gap, we adopt a pattern-oriented responsible AI engineering approach and build a Responsible AI Pattern Catalogue to operationalise responsible AI from a system perspective. In this article, we first summarise the major challenges in operationalising responsible AI at scale and describe how the Responsible AI Pattern Catalogue addresses them. We then examine the risks at each stage of the chatbot development process and recommend pattern-driven mitigations, thereby evaluating the usefulness of the Responsible AI Pattern Catalogue in a real-world setting.