Content moderation is a critical aspect of platform governance on social media and is of particular relevance to addressing the belief in and spread of misinformation. However, current content moderation practices have been criticized as unjust. This raises an important question: who do Americans want deciding whether online content is harmfully misleading? We conducted a nationally representative conjoint survey experiment (N = 3,000) in which U.S. participants evaluated the legitimacy of hypothetical content moderation juries tasked with determining whether online content was harmfully misleading. These juries varied in whether they were described as consisting of experts (e.g., domain experts), laypeople (e.g., social media users), or non-juries (e.g., a computer algorithm). We also randomized features of jury composition (size, necessary qualifications) and whether juries engaged in discussion during content evaluation. Overall, participants rated expert juries as more legitimate than layperson juries or a computer algorithm. However, modifying layperson jury features increased legitimacy perceptions: nationally representative or politically balanced composition enhanced legitimacy, as did larger size, individual juror knowledge qualifications, and allowing juror discussion. Maximally legitimate layperson juries were perceived as comparably legitimate to expert juries. Republicans perceived expert juries as less legitimate than Democrats did, but still as more legitimate than baseline layperson juries. In contrast, larger lay juries with news knowledge qualifications that engaged in discussion were perceived as more legitimate across the political spectrum. Our findings shed light on the foundations of procedural legitimacy in content moderation and have implications for the design of online moderation systems.