The use of artificial intelligence (AI) in research offers many important benefits for science and society but also raises novel and complex ethical issues. While these issues will not necessitate a radical change in the established ethical norms of science, they will require the scientific community to develop new guidance for the appropriate use of AI. In this article, we provide a brief introduction to AI and how it can be used in research, examine some of the ethical issues raised by using AI in research, and offer recommendations for the appropriate use of this technology. We recommend that: 1) researchers and software developers should take responsibility for identifying, describing, reducing, and controlling AI-related biases and random errors; 2) researchers should disclose and explain their use of AI in language that non-experts can understand; 3) where appropriate, researchers should engage with affected communities, populations, and other stakeholders about the use of AI in research, to obtain their advice and assistance and to address their interests and concerns; 4) researchers may be liable for misconduct if they intentionally, knowingly, or recklessly use AI to fabricate or falsify data or commit plagiarism; 5) AI systems should not be named as authors, inventors, or copyright holders, but their contributions to research should be disclosed and described; 6) AI systems should not be used in situations that may involve unauthorized disclosure of confidential information related to human research subjects, unpublished research, potential intellectual property claims, or proprietary or classified research; and 7) education and mentoring in the responsible conduct of research should include discussion of the ethical use of AI.