State-of-the-art Large Language Models have recently exhibited extraordinary linguistic abilities that, surprisingly, extend to reasoning. However, unreliable, false, or invented responses remain a frequent problem. It has been argued that scaling-up strategies, such as increasing model size or hardware power, might not be enough to resolve the issue. Recent research has introduced prompting strategies, such as Chain-of-Thought and Tree-of-Thought, that mimic Type 2 reasoning from Dual Process Theory when interacting with Large Language Models, yielding improved results. The current paper reviews these strategies in light of the Predicting and Reflecting Framework for understanding Dual Process Theory and suggests what Psychology, drawing on research in executive functions, thinking dispositions, and creativity, can further contribute to implementations that address hallucination and reliability issues.