This paper presents a novel multiagent reinforcement learning algorithm, State Elimination in Accelerated Multiagent Reinforcement Learning (SEA-MRL), that achieves faster learning without incorporating internal knowledge or human intervention, such as reward shaping, transfer learning, parameter tuning, or even heuristics, into the learning system. Since learning speed is determined, among other factors, by the size of the state space, and the larger the state space the slower learning can become, reducing the state space can lead to faster convergence. SEA-MRL distinguishes insignificant states from significant ones and eliminates the former in early learning episodes, which aggressively reduces the scale of the state space in subsequent episodes. Applied to gridworld multi-robot navigation, SEA-MRL reaches learning convergence 1.62 times faster. The algorithm is generally applicable to other multiagent tasks and to general multiagent learning with large state spaces, and it applies without modification to single-agent learning.
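
To make the mechanism concrete, the following is a minimal sketch of the state-elimination idea applied to tabular Q-learning in a gridworld. It is an illustration under stated assumptions, not the paper's reference implementation: the significance criterion (a visit-count threshold), the single elimination episode, and all parameter values are hypothetical choices for the sketch.

```python
# Minimal sketch of state elimination in tabular Q-learning.
# Assumed details (not from the paper): significance is judged by a
# visit-count threshold, and elimination happens once at a fixed episode.
import random
from collections import defaultdict

GRID = 5                        # 5x5 gridworld
GOAL = (GRID - 1, GRID - 1)     # reward is given only at this cell
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def step(state, action):
    """Move within grid bounds; reward 1.0 at the goal, 0.0 elsewhere."""
    x, y = state
    dx, dy = action
    nxt = (min(max(x + dx, 0), GRID - 1), min(max(y + dy, 0), GRID - 1))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

Q = defaultdict(float)          # Q[(state, action)] -> value estimate
visits = defaultdict(int)       # per-state visit counts
eliminated = set()              # states pruned from the state space

alpha, gamma, eps = 0.5, 0.95, 0.2
ELIM_EPISODE, VISIT_THRESHOLD = 50, 3   # assumed elimination schedule

for episode in range(300):
    state = (0, 0)
    for _ in range(100):        # step limit per episode
        visits[state] += 1
        # Epsilon-greedy over actions that avoid eliminated states;
        # fall back to all actions if every successor was pruned.
        candidates = [a for a in ACTIONS
                      if step(state, a)[0] not in eliminated] or ACTIONS
        if random.random() < eps:
            action = random.choice(candidates)
        else:
            action = max(candidates, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next
                                       - Q[(state, action)])
        state = nxt
        if done:
            break
    if episode == ELIM_EPISODE:
        # Prune rarely visited states (assumed insignificance criterion):
        # later episodes search a smaller effective state space.
        eliminated = {s for s in visits
                      if visits[s] < VISIT_THRESHOLD and s != GOAL}
```

In this sketch, elimination takes effect through action selection: after the elimination episode, transitions into pruned states are avoided, so exploration in the remaining episodes concentrates on the surviving portion of the state space.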