Solid-state drives (SSDs) have undergone a remarkable evolution in recent years, becoming the cornerstone of new information technology paradigms and markets, such as cloud computing and big data centers. So far, the SSD design approach has focused on the optimization of the Flash translation layer, the firmware devoted to ensuring compatibility with traditional hard-disk drives. For hyperscaled SSDs this strategy is no longer valid, since their performance and reliability are strictly linked to those of the NAND Flash memories that constitute the storage medium, in particular when the multilevel-cell paradigm is considered. For this reason, the design flow must follow a bottom-up approach that, starting from an accurate knowledge of the time- and use-dependent reliability of the NAND Flash memories, selects the most appropriate error correction strategy to extend the SSD lifetime while reducing its performance degradation. The design flow then moves to the SSD controller and to the interface toward the host where the application runs. This paper thoroughly discusses this bottom-up approach and, finally, shows how it is possible to leverage new approaches, such as software-defined storage systems that, by exploiting a hardware/software codesign of the SSD controller architecture and of the host application, will be able to revolutionize the traditional computer/memory interaction.