This study investigates the feasibility of employing artificial intelligence and large language models (LLMs) to customize closed captions/subtitles to the personal needs of deaf and hard of hearing viewers. Drawing on recorded live TV samples, it compares user ratings of caption quality, speed, and understandability across five experimental conditions: unaltered verbatim captions, slowed-down verbatim captions, captions moderately and heavily edited by ChatGPT, and captions lightly edited by an LLM from AppTek, LLC that is optimized for TV content. Results from 16 deaf and hard of hearing participants show a significant preference for verbatim captions, both at the original speed and slowed down, over those edited by ChatGPT. However, a small number of participants rated the AI-edited captions as best. Despite this overall poor showing for AI, the results suggest that LLM-driven customization of captions on a per-user and per-video basis remains an important avenue for future research.