Assessing the risks of software vulnerabilities is a key process in software development and security management. This assessment requires considering multiple factors (technical features, operational environment, involved assets, status of the vulnerability lifecycle, etc.) and may depend on the assessor's knowledge and skills. In this work, we tackle an important part of this problem by measuring the accuracy of technical vulnerability assessments performed by assessors with different levels and types of knowledge. We report an experiment comparing how accurately students with different technical education and security professionals assess the severity of software vulnerabilities using the Common Vulnerability Scoring System (v3) industry methodology. Our results could be useful for increasing awareness of the intrinsic subtleties of vulnerability risk assessment and possibly for improving compliance with regulations. With respect to academic education, professional training, and human resources selection, our work suggests that measuring the effects of knowledge and expertise on the accuracy of software security assessments is feasible, albeit not easy.