Past research has found that people treat advice differently depending on its source. In many cases, people seem to prefer human advice to algorithmic advice, but in others this preference reverses, and people prefer algorithmic advice. Across two studies, we examine the persuasiveness of, and judges' preferences for, advice from different sources when forecasting geopolitical events. We find that judges report domain-specific preferences, preferring human advice in the domain of politics and algorithmic advice in the domain of economics. In Study 2, participants report a preference for hybrid advice, which combines human and algorithmic sources, over either one on its own, regardless of domain. More importantly, we find that these preferences did not affect the persuasiveness of advice from the different sources, regardless of domain. When deciding whether to revise their initial beliefs in light of advice, judges were primarily sensitive to quantitative features pertaining to the similarity between their initial beliefs and the advice they were offered, such as the distance between the two and the relative confidence of the advisor, rather than to the source that generated the advice.