Assessment of answer scripts is an integral part of any examination and education system. Fair, consistent, unbiased, and correct evaluation safeguards the integrity of an examination system and is important for all educational institutions. Since manual evaluation is cumbersome and can be biased or influenced by the perception or mood of the evaluator, automatic grading of answer scripts has become highly relevant. Automatic short answer grading (ASAG) techniques have been widely researched over the last decade and have assumed increased relevance with the shift to online teaching and examinations during the COVID-19 pandemic. This review paper focuses on recent work in automatic answer grading, comparing the techniques and methodologies employed and their reported results in order to evaluate their effectiveness. It discusses the advantages and limitations of these techniques by systematically categorizing questions along two dimensions, long versus short and open-ended versus close-ended, and suggests a new model for improving grading outcomes.