ChatGPT has shown promise in assisting qualitative researchers with coding. Previous efforts have primarily focused on datasets derived from interviews and observations, leaving document analysis, another crucial data source, relatively unexplored. In this project, we address the rapidly emerging topic of disinformation regulatory policy as a pilot to investigate ChatGPT's potential for document analysis. We adapted our existing qualitative research framework, which identifies five key components of disinformation policy (context, actors, issue, instrument, and channel), to characterize policy documents. We then designed a two-stage experiment with a multi-layer workflow, using a dataset of highly relevant policy documents from US federal government departments. Through iteratively developing and refining six different prompt strategies, we identified an effective few-shot learning strategy that achieved 72.0% accuracy and a 70.8% F-score with the optimal prompt. Our experimental process and outcomes demonstrate the feasibility of using ChatGPT to support manual coding of policy documents and suggest a coding approach for conducting explicit document analysis through an interactive process between researchers and ChatGPT. Furthermore, our results initiate a wider debate on how to integrate human logic with ChatGPT logic, and on the evolving relationship between researchers and AI tools.
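To make the few-shot coding setup concrete, the sketch below shows one way such a workflow could be wired up in Python with the OpenAI client. It is a minimal illustration, not the paper's actual pipeline: the model name, system prompt wording, and example excerpts are all assumptions introduced here, and the paper's six prompt strategies are not reproduced.

```python
# Minimal sketch of few-shot coding of policy excerpts into the five
# framework components. All prompt text and examples are illustrative
# assumptions, not the study's actual prompts or data.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

COMPONENTS = ["context", "actors", "issue", "instrument", "channel"]

# Hypothetical few-shot examples: (document excerpt, human-assigned code).
FEW_SHOT_EXAMPLES = [
    ("The Department shall establish a task force on foreign influence...",
     "actors"),
    ("Disinformation spreading via social media platforms has intensified...",
     "channel"),
]

def code_excerpt(excerpt: str) -> str:
    """Ask the model to assign one framework component to a policy excerpt."""
    messages = [
        {"role": "system",
         "content": "You are a qualitative coder. Label each policy excerpt "
                    f"with exactly one of: {', '.join(COMPONENTS)}."}
    ]
    # Few-shot demonstrations are supplied as prior user/assistant turns.
    for text, label in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": excerpt})

    response = client.chat.completions.create(
        model="gpt-4",   # model choice is an assumption; the paper says only "ChatGPT"
        messages=messages,
        temperature=0,   # deterministic output to make coding runs reproducible
    )
    return response.choices[0].message.content.strip()

print(code_excerpt("Federal agencies must report on disinformation campaigns..."))
```

In a human-in-the-loop workflow of the kind the abstract describes, a researcher would compare such model-assigned labels against manual codes (yielding the reported accuracy and F-score) and refine the prompt iteratively.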