Unsupervised domain adaptation (UDA) aims to transfer knowledge across domains when no labeled data are available in the target domain. To this end, UDA methods typically utilize pseudo-labeled target samples to align the distributions of the source and target domains. To mitigate the negative impact of inaccurate pseudo-labeling on knowledge transfer, we propose a novel UDA method, visual transductive learning via iterative label correction (VTL), which benefits from a new label correction paradigm. Specifically, we learn a domain-invariant and class-discriminative latent feature representation for samples of both the source and target domains. Simultaneously, VTL locally aligns samples by learning their geometric structure. In the resulting shared feature space, the label correction paradigm maximizes the reliability of pseudo-labels by correcting inaccurately pseudo-labeled target samples, which significantly improves the classification performance of VTL. Experimental results on several object recognition tasks verify the superiority of the proposed VTL over other state-of-the-art methods.
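The abstract does not specify the exact correction rule, so the following is only a minimal sketch of one plausible instantiation: iterative, prototype-based pseudo-label correction in a shared feature space, where target samples are repeatedly reassigned to the nearest class centroid and the centroids are recomputed from source samples plus the corrected target samples. The function name and the centroid-based scheme are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def correct_pseudo_labels(source_feats, source_labels, target_feats,
                          num_classes, num_iters=10):
    """Iteratively refine target pseudo-labels in a shared feature space.

    Hypothetical sketch: assigns each target sample to its nearest class
    centroid, then recomputes centroids from source samples plus the newly
    pseudo-labeled target samples, until the labels stop changing.
    """
    # Initialize class centroids from the labeled source samples only.
    centroids = np.stack([source_feats[source_labels == c].mean(axis=0)
                          for c in range(num_classes)])
    pseudo_labels = None
    for _ in range(num_iters):
        # Assign each target sample to the nearest centroid (squared L2 distance).
        dists = ((target_feats[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        new_labels = dists.argmin(axis=1)
        if pseudo_labels is not None and np.array_equal(new_labels, pseudo_labels):
            break  # pseudo-labels are stable: correction has converged
        pseudo_labels = new_labels
        # Recompute each centroid from source labels and corrected pseudo-labels.
        for c in range(num_classes):
            members = np.concatenate([source_feats[source_labels == c],
                                      target_feats[pseudo_labels == c]], axis=0)
            if len(members) > 0:
                centroids[c] = members.mean(axis=0)
    return pseudo_labels
```

In this kind of scheme, using class prototypes rather than a classifier's raw predictions makes the pseudo-labels depend on the global geometry of the shared feature space, which is one way a transductive method can suppress individually noisy predictions.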