As social media has become more widely used, fake news has become an increasingly serious problem. The representative countermeasures against fake news are fake news detection and automated fact-checking. However, these countermeasures are not sufficient because people on social media tend to ignore facts that contradict their current beliefs. Developing effective countermeasures therefore requires understanding the nature of fake news dissemination. Previous models have been proposed for describing and analyzing how opinions spread among people. However, these models are inadequate because they rest on assumptions that ignore the presence of fake news: they assume that people trust all of their friends equally and without doubt, and that the reliability people assign to one another never changes. In this paper, we propose a model that better describes opinion dissemination in the presence of fake news. In our model, each person updates the reliability of, and doubt about, his or her friends and exchanges opinions with them. Applying the proposed model to artificial and real-world social networks, we obtained three findings about the nature of fake news dissemination: 1) people perceive fake news as fake less accurately than they perceive real news as real; 2) people take much longer to recognize fake news as fake than to recognize real news as real; 3) findings 1 and 2 arise because the presence of fake news makes people skeptical of their friends, so they update their opinions less.
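The abstract only sketches the mechanism (each person weights friends' opinions by the reliability assigned to them and adjusts that reliability over time), so the snippet below is a minimal illustrative sketch under assumed update rules, not the paper's actual model. The function name `update_opinions`, the parameters `alpha` and `beta`, the opinion range, and the agreement-based trust adjustment are all assumptions introduced for illustration.

```python
def update_opinions(opinions, reliability, neighbors, alpha=0.5, beta=0.1):
    """One synchronous update step (illustrative assumptions, not the paper's exact rule).

    opinions[i]         -- opinion of person i, assumed to lie in [-1, 1]
    reliability[(i, j)] -- how much i currently trusts friend j, in [0, 1]
    neighbors[i]        -- list of i's friends
    alpha               -- assumed weight given to friends' opinions
    beta                -- assumed learning rate for reliability updates
    """
    new_opinions = {}
    for i, x_i in opinions.items():
        # Weight each friend's opinion by the reliability i currently assigns to them.
        total_w = sum(reliability[(i, j)] for j in neighbors[i])
        if total_w > 0:
            social = sum(reliability[(i, j)] * opinions[j] for j in neighbors[i]) / total_w
            new_opinions[i] = (1 - alpha) * x_i + alpha * social
        else:
            new_opinions[i] = x_i  # a fully skeptical person keeps their opinion
    # Update reliability: agreement raises trust, disagreement raises doubt (assumed rule).
    for i in opinions:
        for j in neighbors[i]:
            agreement = 1.0 - abs(opinions[i] - opinions[j]) / 2.0  # maps to [0, 1]
            r = reliability[(i, j)] + beta * (agreement - reliability[(i, j)])
            reliability[(i, j)] = min(1.0, max(0.0, r))
    return new_opinions


# Hypothetical usage: a triangle of friends, one of whom initially believes a fake story.
opinions = {"a": 1.0, "b": -1.0, "c": 0.2}
neighbors = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
reliability = {(i, j): 0.8 for i in neighbors for j in neighbors[i]}
for _ in range(10):
    opinions = update_opinions(opinions, reliability, neighbors)
```

Under these assumed rules, persistent disagreement lowers the reliability a person assigns to a friend, which in turn shrinks that friend's influence in later rounds; this is one way the skepticism-driven slowdown described in finding 3 could be reproduced in simulation.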