Political campaigns circulate manipulative opinions in online communities to implant false beliefs and, ultimately, to win elections. Not only is this type of manipulation unfair, but it also has long-lasting negative impacts on people's lives. Existing tools detect political manipulation with a supervised classifier, which is accurate only when trained on a large labeled dataset. However, preparing such data is an excessive burden, and the effort must be repeated often to keep up with changing manipulation tactics. We propose a practical detection system that requires only moderate groundwork to achieve a sufficient level of accuracy. The proposed system groups opinions with similar properties into clusters and then labels a few opinions from each cluster to build a classifier. It also models each opinion with features derived directly from raw data, with no additional processing. To validate the system, we collected over a million opinions during three nationwide campaigns in South Korea. The system reduced the groundwork from 200K labeling tasks to nearly 200 and correctly identified over 90% of manipulative opinions. It also effectively identified transitions in manipulation tactics over time. We suggest that online communities perform periodic audits using the proposed system to highlight manipulative opinions and emerging tactics.
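
A rough, hypothetical sketch of the cluster-then-label workflow summarized above is given below; the feature extractor (TF-IDF), the clustering algorithm (k-means), the classifier (logistic regression), and all parameter values are placeholder assumptions for illustration, not the system's actual implementation.

```python
# Minimal sketch of a cluster-then-label pipeline (assumed implementation,
# not the paper's actual code): cluster unlabeled opinions, hand-label a
# few representatives per cluster, then train a classifier on that small set.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def build_detector(opinions, label_fn, n_clusters=50, per_cluster=4):
    """opinions: list of raw opinion texts; label_fn: human labeling oracle."""
    # Placeholder features; the paper uses features derived from raw data.
    X = TfidfVectorizer(max_features=5000).fit_transform(opinions)

    # Group opinions with similar properties into clusters.
    clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)

    # Label only a handful of opinions from each cluster,
    # i.e., roughly n_clusters * per_cluster labeling tasks in total.
    sample_idx = []
    for c in range(n_clusters):
        members = [i for i, cl in enumerate(clusters) if cl == c]
        sample_idx.extend(members[:per_cluster])
    y = [label_fn(opinions[i]) for i in sample_idx]  # 0 = organic, 1 = manipulative

    # Train a supervised classifier on the small labeled subset
    # and use it to label every collected opinion.
    clf = LogisticRegression(max_iter=1000).fit(X[sample_idx], y)
    return clf.predict(X)
```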