Abstract. Validation is one of the most important stages of a model's development. By comparing outputs to observations, we can estimate how well the model is
able to simulate reality, which is the ultimate aim of many models. During development, validation may be iterated to improve the simulation and to compare it against similar existing models or previous versions of the same configuration. As models become more complex, data storage requirements grow, and analyses improve, scientific communities must develop standardised validation workflows for efficient
and accurate analysis, with the ultimate goal of complete, automated validation. We describe how the Coastal Ocean Assessment Toolbox (COAsT) Python package has been used to develop a standardised and partially automated validation system. This is discussed
alongside five principles that are fundamental to our system: system scalability, independence from data source, reproducible workflows, an expandable
code base, and objective scoring. We also describe the current version of our own validation workflow and discuss how it adheres to the above
principles. COAsT provides a set of standardised oceanographic data objects ideal for representing both modelled and observed data. We use the package
to compare two model configurations of the Northwest European Shelf against tide gauge and profile observations.