In experiments such as the public goods game and the prisoner's dilemma, humans sometimes cooperate, deviating from the predictions of individual rationality. Despite numerous experiments, human cooperative behaviour remains inconsistently aligned with game-theoretic predictions. Although understanding human cooperation through experimentation is essential, large-scale experiments with human subjects are difficult to conduct, leaving insufficient data on cooperative behaviour across diverse populations. Here, we present a new approach that places Deep Q-Learning agents, a form of artificial intelligence, in a public goods game, and it reveals an intriguing trend: the agents increasingly opt for cooperation as the number of participants rises. This finding challenges prevailing research paradigms by underscoring the importance of group size, an aspect that has so far been underexplored. The approach also reduces experimental costs, improves scalability, and removes the influence of experimental design details. Anticipated outcomes include demonstrating sustained cooperation even in scenarios where established game theory predicts non-cooperation, thereby narrowing the gap between theoretical predictions and experimental observations. These findings have potential applications, such as attracting substantial investment for large-scale public projects. Furthermore, integrating Deep Q-Learning agents into communities on social networking systems may mitigate fragmentation by promoting cooperative behaviour amongst diverse participants.
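To make the setup concrete, the following is a minimal sketch, not the authors' implementation, of Deep Q-Learning agents playing a repeated linear public goods game. The group size N_AGENTS, multiplication factor R_FACTOR, binary all-or-nothing contributions, network architecture, epsilon-greedy rate, and the one-step update without a replay buffer are all illustrative assumptions.

```python
# Illustrative sketch only: hyperparameters and game details are assumptions,
# not the configuration used in the study described above.
import random
import torch
import torch.nn as nn

N_AGENTS = 10        # group size (the study varies this)
R_FACTOR = 3.0       # public-good multiplication factor r
ENDOWMENT = 1.0      # per-round endowment
ACTIONS = [0.0, 1.0] # 0 = defect (keep endowment), 1 = contribute all
GAMMA = 0.9          # discount factor
EPSILON = 0.3        # exploration rate

class QNet(nn.Module):
    """Tiny Q-network mapping the game state to one Q-value per action."""
    def __init__(self, state_dim=1, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 16), nn.ReLU(), nn.Linear(16, n_actions))

    def forward(self, x):
        return self.net(x)

def payoff(contributions, i):
    """Linear public goods payoff: keep what you did not contribute,
    plus an equal share of the multiplied common pool."""
    pool = R_FACTOR * sum(contributions)
    return (ENDOWMENT - contributions[i]) + pool / len(contributions)

agents = [QNet() for _ in range(N_AGENTS)]
optims = [torch.optim.Adam(a.parameters(), lr=1e-2) for a in agents]
loss_fn = nn.MSELoss()

state = torch.zeros(1)  # state: previous round's mean contribution
for episode in range(500):
    # Each agent picks cooperate/defect epsilon-greedily from its Q-network.
    acts = [random.randrange(2) if random.random() < EPSILON
            else int(net(state).argmax()) for net in agents]
    contribs = [ACTIONS[a] for a in acts]
    next_state = torch.tensor([sum(contribs) / N_AGENTS])
    # One-step Q-learning update per agent (no replay buffer, for brevity).
    for i, (net, opt) in enumerate(zip(agents, optims)):
        target = payoff(contribs, i) + GAMMA * net(next_state).max().detach()
        loss = loss_fn(net(state)[acts[i]], target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    state = next_state

print("mean contribution in final round:", float(state))
```

In a study of the kind described, one would rerun such a simulation while varying N_AGENTS and track the long-run contribution rate, since the central claim concerns how cooperation changes with group size.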