Speech recognition has the potential to make technology more accessible to users. However, recognition accuracy remains limited for users with disabilities, including people with Down syndrome, and the types and frequencies of recognition errors are poorly understood. This paper characterizes these problems, focusing on the errors that occur when recognizing Down syndrome speech. We analyze transcripts produced by six speech recognition systems (Google, IBM, Otter.ai, Microsoft, AssemblyAI, OpenAI) from audio recordings of 15 individuals with Down syndrome (331 dialogues; 3,428 words). Our analysis shows (1) a significant difference in speech recognition accuracy for people with Down syndrome compared to neurotypical users; (2) OpenAI's system best recognizes Down syndrome speech (Word Accuracy = 67\%; F1-score = 0.944); and (3) deletion errors predominate, followed by substitutions and insertions. These findings have implications for improving speech recognition in next-generation voice assistants to meet the needs of users with Down syndrome.
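For context, error counts of this kind are conventionally obtained by aligning each reference transcript against the ASR hypothesis. The sketch below is not from the paper; the function name is hypothetical, and it assumes the common word-accuracy definition $(N - S - D - I)/N$ over $N$ reference words, which may differ from the paper's exact metric. It shows one standard way to tally substitutions, deletions, and insertions with a word-level Levenshtein alignment.

```python
# Hedged sketch: word-level Levenshtein alignment to count substitutions (S),
# deletions (D), and insertions (I), plus word accuracy (N - S - D - I) / N.
# Not the paper's implementation; an illustrative, commonly used formulation.

def word_error_counts(reference: str, hypothesis: str) -> dict:
    ref, hyp = reference.split(), hypothesis.split()
    n, m = len(ref), len(hyp)
    # dp[i][j] = minimum edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i  # delete all remaining reference words
    for j in range(m + 1):
        dp[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j - 1] + cost,  # match / substitution
                           dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1)         # insertion
    # Backtrace to attribute each edit to an error type.
    subs = dels = ins = 0
    i, j = n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]):
            subs += ref[i - 1] != hyp[j - 1]  # diagonal move; +1 only on mismatch
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            dels += 1
            i -= 1
        else:
            ins += 1
            j -= 1
    word_accuracy = (n - subs - dels - ins) / n if n else 0.0
    return {"S": subs, "D": dels, "I": ins, "word_accuracy": word_accuracy}

print(word_error_counts("the cat sat on the mat", "the cat on that mat"))
# -> {'S': 1, 'D': 1, 'I': 0, 'word_accuracy': 0.666...}
```

Tallying error types this way is what allows error profiles (e.g., the prevalence of deletions reported above) to be compared across recognition systems.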