Abstract: The revision of the standard Los Alamos opacities in the 1980s and 1990s by a group from the Lawrence Livermore National Laboratory (OPAL) and by the Opacity Project (OP) consortium was an early example of collaborative big-data science, leading to reliable data deliverables (atomic databases, monochromatic opacities, mean opacities, and radiative accelerations) that have been widely used ever since to solve a variety of important astrophysical problems. Nowadays, the precision of the OPAL and OP opacities, and even of the new tables (OPLIB) by Los Alamos, is a recurrent topic in a hot debate involving stringent comparisons between theory, laboratory experiments, and solar and stellar observations in sophisticated research fields: the standard solar model (SSM), helio- and asteroseismology, non-LTE 3D hydrodynamic photospheric modeling, nuclear reaction rates, solar neutrino observations, computational atomic physics, and plasma experiments. In this context, an unexpected downward revision of the solar photospheric metal abundances in 2005 spoiled a very precise agreement between the helioseismic indicators (the radius of the convection-zone boundary, the sound-speed profile, and the surface helium abundance) and SSM benchmarks, which could be reestablished to some degree by a substantial opacity increase. Recent laboratory measurements of the iron opacity at physical conditions similar to those at the boundary of the solar convection zone have indeed indicated significant increases (30-400%), although new systematic improvements of and comparisons among the computed tables have not yet been able to reproduce them. We give an overview of this controversy and, within the OP approach, discuss some of the theoretical shortcomings that could be impairing a more complete and accurate opacity accounting.