Movie and TV subtitles are frequently used in natural language processing (NLP) applications, but few Japanese-Chinese bilingual corpora are available as datasets for training neural machine translation (NMT) models. In our previous study, we constructed a sizable Japanese-Chinese bilingual corpus by collecting subtitle text data from websites that host movies and television series. The unsatisfactory translation performance of that initial corpus, the Web-Crawled Corpus of Japanese and Chinese (WCC-JC 1.0), was caused predominantly by its limited number of sentence pairs. To address this shortcoming, we thoroughly analyzed the issues in the construction of WCC-JC 1.0 and built the WCC-JC 2.0 corpus by first collecting subtitle data from movie and TV series websites and then manually aligning a large number of high-quality sentence pairs. These efforts yielded a new corpus of about 1.4 million sentence pairs, an 87% increase over WCC-JC 1.0, making WCC-JC 2.0 one of the largest publicly available Japanese-Chinese bilingual corpora in the world. To assess WCC-JC 2.0, we computed BLEU scores against other comparative corpora and manually evaluated the translation results generated by translation models trained on WCC-JC 2.0. We provide WCC-JC 2.0 as a free download for research purposes only.
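As an illustration of the BLEU-based evaluation mentioned above, the following is a minimal sketch of corpus-level scoring with the sacrebleu library; the file names and the use of sacrebleu here are assumptions for illustration, not necessarily the exact evaluation pipeline used in this study.

```python
# Minimal sketch (assumed setup, not the authors' exact pipeline):
# corpus-level BLEU for Japanese-to-Chinese model output using sacrebleu.
import sacrebleu

# Hypothetical file names; one sentence per line.
with open("hypotheses.zh", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]
with open("references.zh", encoding="utf-8") as f:
    references = [line.strip() for line in f]

# sacrebleu takes a list of hypothesis strings and a list of reference lists.
# The "zh" tokenizer applies character-level tokenization suitable for Chinese.
bleu = sacrebleu.corpus_bleu(hypotheses, [references], tokenize="zh")
print(f"BLEU = {bleu.score:.2f}")
```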