Systematic reviews (SRs) are the highest standard of evidence, shaping clinical practice guidelines, policy decisions, and research priorities. However, their labor-intensive nature, including an initial rigorous article screen by at least two investigators, delays access to reliable information synthesis. Here, we demonstrate that large language models (LLMs) with intentional prompting can match human screening performance. We introduce Framework Chain-of-Thought, a novel prompting approach that directs LLMs to systematically reason against predefined frameworks. We evaluated our prompts across ten SRs covering four common types of SR questions (i.e., prevalence, intervention benefits, diagnostic test accuracy, prognosis), achieving a mean accuracy of 93.6% (range: 83.3-99.6%) and mean sensitivity of 97.5% (range: 89.7-100%) in full-text screening. Compared with experienced reviewers (mean accuracy 92.4% [range: 76.8-97.8%], mean sensitivity 75.1% [range: 44.1-100%]), our full-text prompt demonstrated significantly higher sensitivity in four of five reviews (p<0.05), significantly higher accuracy in one review (p<0.05), and comparable accuracy in two reviews (p>0.05). While traditional human screening for an SR of 7,000 articles required 530 hours and US$10,000, our approach completed screening in one day for US$430. Our results establish that LLMs can perform SR screening with performance matching human experts, setting the foundation for end-to-end automated SRs.