Ambiguity is inherent to open-domain question answering; especially when exploring new topics, it can be difficult to ask questions that have a single, unambiguous answer. In this paper, we introduce AMBIGQA, a new open-domain question answering task which involves finding every plausible answer, and then rewriting the question for each one to resolve the ambiguity. To study this task, we construct AMBIGNQ, a dataset covering 14,042 questions from NQ-OPEN, an existing open-domain QA benchmark. We find that over half of the questions in NQ-OPEN are ambiguous, with diverse sources of ambiguity such as event and entity references. We also present strong baseline models for AMBIGQA which we show benefit from weakly supervised learning that incorporates NQ-OPEN, strongly suggesting our new task and data will support significant future research effort. Our data and baselines are available at https://nlp.cs.washington.edu/ambigqa.