Bias is neither new nor unique to AI, and it is not possible to achieve zero risk of bias in an AI system. NIST therefore intends to develop methods for increasing assurance, along with governance and practice improvements, for identifying, understanding, measuring, managing, and reducing bias. Reaching this goal requires techniques that are flexible, applicable across contexts regardless of industry, and easily communicated to different stakeholder groups. To contribute to the growth of this burgeoning topic area, NIST will continue its work in measuring and evaluating computational biases, and it seeks to create a hub for evaluating socio-technical factors. This work will include developing formal guidance and standards, supporting standards development activities such as workshops and public comment periods for draft documents, and continuing discussion of these topics with the stakeholder community.