We analyze perception and memory, using mathematical models for knowledge graphs and tensors, to gain insights into the corresponding functionalities of the human mind. Our discussion is based on the concept of propositional sentences consisting of subject-predicate-object (SPO) triples for expressing elementary facts. SPO sentences are the basis for most natural languages, but they might also be important for explicit perception and declarative memories, as well as for intra-brain communication and the ability to argue and reason. Due to its compositional nature, a set of sentences can describe a scene in great detail, avoiding the explosion in complexity that flat representations incur. A set of SPO sentences can be described by a knowledge graph, which can be transformed into an adjacency tensor. We introduce tensor models in which concepts have dual representations as indices and associated embeddings, two constructs we believe are essential for understanding implicit and explicit perception and memory in the brain. We argue that a biological realization of perception and memory imposes constraints on information processing. In particular, we propose that explicit perception and declarative memories require a complex semantic decoder, which, in a basic realization, has four layers: first, a sensory memory layer that buffers sensory input; second, a memoryless representation layer for the broadcasting of information (the "blackboard", or "canvas", of the brain); third, an index layer representing concepts; and fourth, a working memory layer serving as a processing center and data buffer. We discuss the operations of the four layers and relate them to the global workspace theory. Whereas simple semantic decoding might already be performed by higher animals, the generation of triple statements, which requires working memory as part of a complex semantic decoder, is a layered sequential process likely performed only by humans. In the resulting chatterbox decoding, semantic consistency is encouraged on the representation level. Both semantic and episodic memory contribute context and thus complement sensory input with non-perceptual information: agents have memory systems for a purpose, i.e., to make better decisions! In a Bayesian brain interpretation, semantic memory defines the prior distribution for observable triple statements. We propose that, in evolution and during development, semantic memory, episodic memory, and natural language evolved as emergent properties of agents' efforts to gain a deeper understanding of sensory information. Our mathematical model provides fresh perspectives on much-debated issues concerning the relationship between perception, semantic memory, and episodic memory. We present a concrete model implementation and validate some aspects of our proposed model on benchmark data, where we demonstrate state-of-the-art performance.
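To make the mapping from SPO sentences to an adjacency tensor concrete, the following minimal sketch builds a binary tensor from a handful of triples. It is an illustration only, not the paper's implementation; the entity and predicate names are hypothetical.

```python
# Illustrative sketch: a small set of SPO triples mapped to a binary
# adjacency tensor X, with X[s, p, o] = 1 iff (s, p, o) is asserted.
import numpy as np

triples = [
    ("Anna", "likes", "Bob"),
    ("Bob", "knows", "Anna"),
    ("Anna", "knows", "Carl"),
]

# Assign each concept and predicate an integer index (the "index" half
# of the dual representation discussed in the abstract).
entities = sorted({s for s, _, _ in triples} | {o for _, _, o in triples})
predicates = sorted({p for _, p, _ in triples})
e_idx = {e: i for i, e in enumerate(entities)}
p_idx = {p: i for i, p in enumerate(predicates)}

X = np.zeros((len(entities), len(predicates), len(entities)))
for s, p, o in triples:
    X[e_idx[s], p_idx[p], e_idx[o]] = 1.0

print(X.shape)  # (3, 2, 3): subjects x predicates x objects
```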
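The dual representation pairs each index with a learned embedding, and a tensor model scores triples from those embeddings. The bilinear (RESCAL-style) score used below is one standard choice for factorizing an adjacency tensor; the paper's concrete model may differ, so treat this as a sketch under that assumption.

```python
# Minimal sketch of the dual representation: index i <-> embedding A[i],
# with a bilinear triple score a_s^T R_p a_o (one common tensor model).
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_predicates, rank = 3, 2, 4

A = rng.normal(size=(n_entities, rank))          # entity embeddings
R = rng.normal(size=(n_predicates, rank, rank))  # one matrix per predicate

def score(s: int, p: int, o: int) -> float:
    """Plausibility of the triple (s, p, o), computed from embeddings alone."""
    return float(A[s] @ R[p] @ A[o])

# A sigmoid turns the score into the probability that the triple is true.
prob = 1.0 / (1.0 + np.exp(-score(0, 1, 2)))
```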
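The Bayesian brain reading, in which semantic memory defines the prior for observable triple statements, can be illustrated with a single application of Bayes' rule: a prior from semantic memory is fused with a perceptual likelihood. The numbers below are made up purely for illustration.

```python
# Hedged sketch: semantic memory supplies P(triple true); noisy perception
# supplies a likelihood; Bayes' rule combines them into a posterior.
prior = 0.8          # semantic memory: the triple is a priori likely
lik_true = 0.6       # P(sensory evidence | triple true)
lik_false = 0.3      # P(sensory evidence | triple false)

posterior = (lik_true * prior) / (lik_true * prior + lik_false * (1 - prior))
print(round(posterior, 3))  # 0.889: perception sharpened by the semantic prior
```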