In crowd counting datasets, each person is annotated with a point, usually the center of the head, and the task is to estimate the total count in a crowd scene. Most state-of-the-art methods are based on density map estimation: they convert the sparse point annotations into a "ground truth" density map through a Gaussian kernel and then use it as the learning target to train a density map estimator. However, such a "ground truth" density map is imperfect due to occlusions, perspective effects, variations in object shape, etc. Instead, we propose Bayesian loss, a novel loss function that constructs a density contribution probability model from the point annotations. Rather than constraining the value at every pixel in the density map, the proposed training loss adopts a more reliable supervision on the count expectation at each annotated point. Without bells and whistles, the loss function substantially outperforms the baseline training loss on the UCF-QNRF [16], ShanghaiTech [57], and UCF_CC_50 [15] benchmark datasets. Moreover, our loss function equipped with the standard VGG-19 network [39] as backbone, without using any external detectors or multi-scale architectures, performs favorably against the state of the art, outperforming the previous best approaches by a large margin on the latest and largest UCF-QNRF dataset. The source code is available at https://github.com/ZhihengCV/Baysian-Crowd-Counting.
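To make the idea concrete, the following is a minimal sketch of such a count-expectation objective, not the authors' released implementation. It assumes a Gaussian likelihood between pixel locations and annotated points; the function name `bayesian_loss`, the flattened density representation, and the bandwidth `sigma` are illustrative choices, not prescribed by the abstract.

```python
# Minimal sketch of a count-expectation loss over point annotations.
# Assumption: each pixel's predicted density is softly assigned to the
# annotated points via a Gaussian likelihood, and each person should
# contribute a total count of one.
import torch
import torch.nn.functional as F

def bayesian_loss(pred_density, pixel_coords, point_coords, sigma=8.0):
    """pred_density: (M,) predicted density, one value per pixel.
    pixel_coords: (M, 2) pixel coordinates.
    point_coords: (N, 2) annotated head points.
    sigma: assumed std of the Gaussian likelihood (hyperparameter)."""
    # Squared distances between every pixel and every annotated point: (M, N).
    d2 = torch.cdist(pixel_coords, point_coords).pow(2)
    # Posterior probability that pixel m's density is contributed by
    # point n, normalizing the Gaussian likelihoods over all points.
    posterior = F.softmax(-d2 / (2.0 * sigma ** 2), dim=1)  # (M, N)
    # Expected count at each annotated point: E[c_n] = sum_m p(n|m) D(m).
    expected_counts = posterior.t() @ pred_density  # (N,)
    # Supervise each expected count toward one person.
    return torch.abs(1.0 - expected_counts).sum()
```

In this formulation the supervision target is the per-person count expectation rather than a per-pixel density value, which is why no "ground truth" density map needs to be synthesized.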