Today's datacenters have maintained the same architecture for decades. The building block of the datacenter remains the server, which tightly couples the compute, memory, and storage resources needed to run its tasks. This traditional approach, however, suffers from resource under-utilization, often caused by over-provisioning when deploying applications: datacenter operators allocate the worst-case amount of memory each application might require, and that allocation persists for the application's entire lifetime, even when the memory is not actually used. As a result, servers quickly, and artificially, run out of memory long before their CPUs are fully utilized. To address these problems, a shift in how datacenters are built has been gaining traction: memory disaggregation. By decoupling the computational elements from the memory resources, memory disaggregation allows each to be provisioned and utilized independently. While the idea of memory disaggregation is not new, a growing number of memory disaggregation proposals have emerged in recent years. In this paper, we review many of these recent proposals and study their architectures, implementations, and requirements. We also categorize them by their features and identify their strengths and shortcomings, in an effort to highlight possible directions for future work and provide a reference for the research community.