Recently, politicians and media companies have identified a growing number of offensive statements directed against foreigners and refugees in Europe. In Germany, for example, the political movement "Pegida" drew international attention by frequently publishing offensive content concerning the religion of Islam. In response, the German government and the social network Facebook cooperated to address this problem by creating a task force that manually detects offensive statements towards refugees and foreigners. In this work, we propose an approach to automatically detect such statements, aiding personnel in this labor-intensive task. In contrast to existing work, we assign severity scores to offensive statements and identify the targets they refer to. This way, we are able to selectively detect hostility towards foreigners. To evaluate our approach, we develop a dataset of offensive statements annotated with their targets. As a result, a substantial proportion of the offensive statements and a moderate proportion of the referenced targets were detected correctly.