Research into autonomous vehicles is making rapid progress. While implementation advances through machine learning and efficient sensor technology, one key challenge remains: dealing with moral disputes. Traffic situations generally demand moral decisions that may even determine the life or death of participants. While humans decide intuitively in the moment of an accident, an autonomous vehicle's decision is made already at the programming stage. A concrete approach to implementing such decisions is therefore needed; due to a lack of legislation, it is still missing, which keeps car manufacturers from a practical solution. The paper at hand addresses this problem by presenting a consensus mechanism that combines moral convictions, legislation, and programming guidelines. Based on a study of dilemma situations, moral principles for the 'correct action' of autonomous vehicles are derived. Of four principles, we confirm one, reject two, and propose one for further investigation, in order to form a basis for jurisdictions.