To reduce the spread of misinformation, social media platforms may take enforcement actions against offending content, such as adding informational warning labels, reducing distribution, or removing content entirely. However, both their actions and their inactions have been controversial and plagued by allegations of partisan bias. The controversy can in part be explained by a lack of clarity around which actions should be taken, as these decisions may not neatly reduce to questions of factual accuracy. When decisions are contested, the legitimacy of the decision-making process becomes crucial to public acceptance. Platforms have tried to legitimize their decisions by following well-defined procedures encoded in rules and codebooks. In this paper, we consider an alternative source of legitimacy: the will of the people. Surprisingly little is known about what ordinary people want platforms to do about specific content. We provide empirical evidence about lay raters' preferences for platform actions on 368 news articles. Our results confirm that, for many items, there is no clear consensus on which actions to take. There is no partisan difference in how many items deserve platform actions, but liberals prefer somewhat more action on content from conservative sources, and vice versa. We find a clear hierarchy of perceived severity, with inform being the least severe action, followed by reduce, and then remove. We also find that judgments about two holistic properties, misleadingness and harm, can serve as an effective proxy for determining which actions a majority of raters would approve. We conclude by discussing the promise of the will of the people while acknowledging the practical details that would have to be worked out.