A model of cognition suggests that the left vs. right political debate is unsolvable. However, the same model also suggests that a form of collective cognition (General Collective Intelligence, or GCI) can allow education, health care, and other government services to be customized to the individual, so that individuals can choose services anywhere along the spectrum from socialized to private as they prefer, thereby removing any political stalemate that might otherwise prevent progress. Whatever services groups of individuals choose, GCI can significantly increase the quality of outcomes achievable through either socialized or private services today, in part by using information about the fitness of the services actually deployed to improve the fitness of all services that might be deployed.

The emerging field of General Collective Intelligence explores how platforms might increase the general problem-solving ability (intelligence) of groups so that it is significantly higher than that of any individual. Whereas Collective Intelligence (CI) must find the optimal solution to a given problem or group of problems, a GCI, having general problem-solving ability, must also have the capacity to find the optimal problem to solve. In the case of political discussion, a GCI must be able to re-frame discourse away from questions that have not proved resolvable, such as whether left-leaning or right-leaning political opinions are in general more "right" or "wrong", and toward questions such as how to objectively determine whether a left or right bias optimizes outcomes in a specific context, and why.

This paper explores the conjecture that determining whether a left-leaning or right-leaning cognitive bias is "optimal" (i.e., "true") through CI or any other aggregate of individual reasoning that is not a GCI cannot reliably converge on "truth", because each cognitive bias leads to evaluating truth according to a different reasoning type (type 1 or type 2), and those types might give conflicting answers to the same problem. However, by using functional modeling to represent all possible reasoning processes, and to represent the domains in conceptual space in which each reasoning process is optimal, it is possible to systematically categorize an unlimited number of collective reasoning processes along with the contexts in which executing them with a right-leaning or left-leaning bias is optimal for the group. By designing GCI algorithms to incorporate each bias in the context where it is optimal, a GCI can allow individuals to participate in collective reasoning despite their biases, while collective reasoning might still converge on "truth" in the sense of functioning to optimize collective outcomes. And by deploying intelligent agents that incorporate some subset of AGI functionality to interact on each individual's behalf at significantly greater speed and scale, collective reasoning might gain the capacity to consider all reasoning and all "facts" available to any individual in the group, converging on that truth while significantly improving outcomes.
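To make the idea of "incorporating each bias in the context where it is optimal" concrete, the following Python sketch shows one possible shape of such a selection step. It is illustrative only: the names ReasoningProcess and select_process, the context features (time_pressure, complexity), the pairing of biases with reasoning types, and the fitness weights are hypothetical placeholders rather than constructs defined by this paper. The sketch shows only the general pattern of scoring candidate reasoning processes against a context and choosing the one expected to optimize collective outcomes.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# A reasoning process is modeled here simply as a bias label plus a reasoning
# type, together with a fitness function that scores how well collective
# outcomes are expected to be optimized when that process is applied in a
# given context. All of this is a hypothetical simplification for illustration.
@dataclass
class ReasoningProcess:
    bias: str                                      # e.g. "left" or "right"
    reasoning_type: str                            # e.g. "type 1" or "type 2"
    fitness: Callable[[Dict[str, float]], float]   # context -> expected fitness


def select_process(context: Dict[str, float],
                   processes: List[ReasoningProcess]) -> ReasoningProcess:
    """Select the reasoning process (and hence bias) whose expected fitness
    of collective outcomes is highest in the given context."""
    return max(processes, key=lambda p: p.fitness(context))


# Toy fitness functions: here type 1 (fast, intuitive) reasoning is assumed to
# score better under high time pressure, and type 2 (slow, deliberative)
# reasoning to score better on complex problems. The bias/type pairing and the
# weights are arbitrary placeholders, not claims made by the paper.
processes = [
    ReasoningProcess("right", "type 1",
                     lambda c: 0.8 * c["time_pressure"] + 0.2 * (1 - c["complexity"])),
    ReasoningProcess("left", "type 2",
                     lambda c: 0.8 * c["complexity"] + 0.2 * (1 - c["time_pressure"])),
]

if __name__ == "__main__":
    context = {"time_pressure": 0.2, "complexity": 0.9}
    chosen = select_process(context, processes)
    print(f"Bias selected as optimal for this context: {chosen.bias} ({chosen.reasoning_type})")
```

In an actual GCI, the contexts, reasoning processes, and fitness measures would be derived from the functional model of cognition rather than hand-coded weights; the sketch only indicates where such a context-dependent selection would sit in the algorithm.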