As massive graphs become more prevalent, there is a rapidly growing need for scalable algorithms that solve classical graph problems, such as maximum matching and minimum vertex cover, on large datasets. For massive inputs, several different computational models have been introduced, including the streaming model, the distributed communication model, and the massively parallel computation (MPC) model that is a common abstraction of MapReduce-style computation. In each model, algorithms are analyzed in terms of resources such as the space used or the rounds of communication needed, in addition to the more traditional approximation ratio.

In this paper, we give a single unified approach that yields better approximation algorithms for matching and vertex cover in all these models. The highlights include:

• The first one-pass, significantly-better-than-2 approximation for matching in random-arrival streams that uses subquadratic space, namely a (1.5 + ε)-approximation streaming algorithm that uses O(n^1.5) space for constant ε > 0.

• The first two-round, better-than-2 approximation for matching in the MPC model that uses subquadratic space per machine, namely a (1.5 + ε)-approximation algorithm with O(√(mn) + n) memory per machine for constant ε > 0.

By building on our unified approach, we further develop parallel algorithms in the MPC model that give a (1 + ε)-approximation to matching and an O(1)-approximation to vertex cover in only O(log log n) MPC rounds and O(n/polylog(n)) memory per machine. These results settle multiple open questions posed in the recent paper of Czumaj et al. [STOC 2018].

We obtain our results by a novel combination of two previously disjoint sets of techniques, namely randomized composable coresets and edge degree constrained subgraphs (EDCS). We significantly extend the power of these techniques and prove several new structural results. For example, we show that an EDCS is a sparse certificate for large matchings and small vertex covers that is quite robust to sampling and composition.

1 Introduction

As massive graphs become more prevalent, there is a rapidly growing need for scalable algorithms that solve classical graph problems on large datasets. When dealing with massive data, the entire input graph is orders of magnitude larger than the amount of storage on one processor, and hence any algorithm needs to explicitly address this issue. For massive inputs, several different computational models have been introduced, each focusing on certain additional resources needed to solve large-scale problems. Some examples include the streaming model, the distributed communication model, and the massively parallel computation (MPC) model that is a common abstraction of MapReduce-style computation (see Section 2 for a definition of MPC). The target resources in these models are the number of rounds of communication and the local storage on each machine.

Given the variety of relevant models, there has been a lot of attention on designing general algorithmic techniques that can be applied across a wide range...
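To make the EDCS notion above concrete, the following is a minimal sketch, assuming the standard definition of Bernstein and Stein: a subgraph H of G is a (β, β⁻)-EDCS if (i) every edge (u, v) ∈ H satisfies deg_H(u) + deg_H(v) ≤ β, and (ii) every edge (u, v) ∈ G \ H satisfies deg_H(u) + deg_H(v) ≥ β⁻. The sketch builds an EDCS by repeatedly fixing violated constraints and then illustrates composition by unioning EDCSs computed on a random edge partition. The function names (construct_edcs, greedy_matching) and all parameter values are illustrative, not taken from the paper.

```python
import random
from collections import defaultdict

def construct_edcs(edges, beta, beta_minus):
    """Return a (beta, beta_minus)-EDCS H of the graph given by `edges`,
    built by local fixing: drop H-edges violating (i), add non-H edges
    violating (ii). For beta_minus < beta this terminates (a standard
    potential argument bounds the number of fixes)."""
    edges = [tuple(sorted(e)) for e in edges]  # canonical edge orientation
    H, deg = set(), defaultdict(int)

    def dsum(u, v):
        return deg[u] + deg[v]

    changed = True
    while changed:
        changed = False
        for (u, v) in list(H):          # rule (i): remove overfull H-edges
            if dsum(u, v) > beta:
                H.discard((u, v)); deg[u] -= 1; deg[v] -= 1; changed = True
        for (u, v) in edges:            # rule (ii): add underfull non-H edges
            if (u, v) not in H and dsum(u, v) < beta_minus:
                H.add((u, v)); deg[u] += 1; deg[v] += 1; changed = True
    return H

def greedy_matching(edges):
    """Greedy maximal matching, used here only to probe the coreset."""
    matched, M = set(), []
    for (u, v) in edges:
        if u not in matched and v not in matched:
            M.append((u, v)); matched.update((u, v))
    return M

# Composition experiment: randomly partition the edges across k "machines",
# build an EDCS on each part, and union the results into one coreset.
random.seed(0)
n, k = 200, 4
edges = [(u, v) for u in range(n) for v in range(u + 1, n)
         if random.random() < 0.05]
parts = defaultdict(list)
for e in edges:
    parts[random.randrange(k)].append(e)
coreset = set().union(*(construct_edcs(p, beta=16, beta_minus=15)
                        for p in parts.values()))
print(len(coreset), "coreset edges;",
      len(greedy_matching(sorted(coreset))), "greedily matched pairs")
```

The greedy matching at the end is only a crude lower-bound probe; the point of the structural results described above is that the union of EDCSs of a randomly partitioned edge set itself preserves a large matching, which is what makes the EDCS usable as a randomized composable coreset.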