This article argues that the difficulty of governing AI makes it essential to develop measures that are implemented early in the AI research process. The goal of dual use considerations is to create robust strategies that uphold AI’s integrity while protecting societal interests. The article examines the challenges of applying dual use frameworks to AI research, defining dual use and dual use research of concern (DURC) and highlighting the difficulty of balancing the technology’s benefits against its risks. It discusses AI’s dual use potential, particularly in areas such as natural language processing (NLP) and large language models (LLMs), and underscores the need to consider dual use risks early to ensure ethical and secure development. The section on shared responsibilities in AI research and avenues for mitigation emphasizes early-stage risk assessments and ethical guidelines to curb misuse, accentuating self-governance within scientific communities and structured measures such as checklists and pre-registration that promote responsible research practices. The final section argues that research ethics committees play a crucial role in evaluating the dual use implications of AI technologies within the research pipeline, and articulates the need for tailored ethics review processes, drawing parallels with medical research ethics committees.