Implementing big data storage at scale is a complex and arduous task that requires advanced infrastructure. With the rise of public cloud computing, various big data management services can be readily leveraged. As a critical part of Twitter's "Project Partly Cloudy", the cold storage data and analytics systems are being moved to the public cloud. This paper showcases our approach to designing a scalable big data storage and analytics management framework using BigQuery in Google Cloud Platform while ensuring security, privacy, and data protection. The paper also discusses the limitations of public cloud resources and how they can be effectively overcome when designing a big data storage and analytics solution at scale. Although the paper discusses the framework implementation in Google Cloud Platform, it can be readily applied to other major cloud providers.
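As a rough illustration of the kind of managed ingestion and querying such a framework builds on, the sketch below batch-loads a partitioned table into BigQuery with a customer-managed encryption key and runs an aggregate query. This is a minimal sketch, not the framework described in the paper; the project, bucket, dataset, and key names are hypothetical placeholders.

```python
# Minimal sketch (not Twitter's implementation): load cold data from Cloud
# Storage into BigQuery and query it, encrypting the destination table with
# a customer-managed key (CMEK) for data protection. All resource names are
# hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

# Encrypt the destination table with a Cloud KMS key.
kms_key = ("projects/example-project/locations/us/"
           "keyRings/example-ring/cryptoKeys/example-key")
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    destination_encryption_configuration=bigquery.EncryptionConfiguration(
        kms_key_name=kms_key
    ),
    time_partitioning=bigquery.TimePartitioning(field="event_date"),
)

# Load Parquet files from Cloud Storage into a date-partitioned table.
load_job = client.load_table_from_uri(
    "gs://example-bucket/cold/events/*.parquet",
    "example-project.analytics.events",
    job_config=job_config,
)
load_job.result()  # Wait for the load job to complete.

# Run an ad-hoc aggregate query over the loaded table.
query = """
    SELECT event_date, COUNT(*) AS events
    FROM `example-project.analytics.events`
    GROUP BY event_date
    ORDER BY event_date
"""
for row in client.query(query).result():
    print(row.event_date, row.events)
```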
With the advent of the Big Data era, it is usually computationally expensive to calculate the resource usage of a SQL query with traditional DBMS approaches. Can we estimate the cost of each query more efficiently without any computation in a SQL engine kernel? Can machine learning techniques help to estimate SQL query resource utilization? The answer to both is yes. We propose a SQL query cost predictor service, which employs machine learning techniques to train models from historical query request logs and rapidly forecasts the CPU and memory resource usage of online queries without any computation in a SQL engine. At Twitter, infrastructure engineers maintain a large-scale SQL federation system across on-premises and cloud data centers for serving ad-hoc queries. The proposed service can help improve query scheduling by mitigating imbalanced online analytical processing (OLAP) workloads in the SQL engine clusters. It can also assist in enabling preemptive scaling. Additionally, the proposed approach uses plain SQL statements for model training and online prediction, making it both hardware- and software-agnostic. The method can be generalized to broader SQL systems and heterogeneous environments. The models achieve 97.9% accuracy for CPU usage prediction and 97% accuracy for memory usage prediction.
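A minimal sketch of the general idea, not the production service described in the paper: treat the plain SQL text as the input, featurize it, and fit models on CPU and memory usage recorded in historical query logs, so an incoming query can be scored before it runs. The tiny in-line "log" and the TF-IDF/linear-model choices below are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch: learn CPU and memory usage from plain SQL text using a
# toy historical query log. Feature and model choices are illustrative
# assumptions, not Twitter's production pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Historical query log entries: (SQL text, CPU seconds, peak memory in GB).
history = [
    ("SELECT user_id FROM tweets WHERE ds = '2021-01-01'", 3.2, 0.8),
    ("SELECT lang, COUNT(*) FROM tweets GROUP BY lang", 45.0, 6.5),
    ("SELECT a.*, b.* FROM tweets a JOIN users b ON a.user_id = b.id", 310.0, 22.0),
    ("SELECT MAX(created_at) FROM users", 1.1, 0.3),
]
sql_texts = [sql for sql, _, _ in history]
cpu_secs = [cpu for _, cpu, _ in history]
mem_gb = [mem for _, _, mem in history]

def make_model():
    # Character n-grams over raw SQL capture keywords, joins, and functions
    # without needing access to the engine's query planner.
    return make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        Ridge(alpha=1.0),
    )

cpu_model = make_model().fit(sql_texts, cpu_secs)
mem_model = make_model().fit(sql_texts, mem_gb)

# Score an incoming query before execution, e.g. to route it to a lightly
# loaded cluster or to trigger preemptive scaling.
incoming = "SELECT lang, COUNT(*) FROM tweets WHERE ds = '2021-01-02' GROUP BY lang"
print("predicted CPU seconds:", cpu_model.predict([incoming])[0])
print("predicted peak memory (GB):", mem_model.predict([incoming])[0])
```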
We have witnessed rapidly growing demand for graph analytics at Twitter in recent years, and graph analytics has become one of the key parts of Twitter's large-scale data analytics and machine learning for driving engagement, serving the most relevant content, and promoting healthier conversations. However, infrastructure for graph analytics has historically not been an area of investment at Twitter, resulting in long timelines and substantial engineering effort for each project that deals with graphs at Twitter scale. How do we build a unified graph analytics user experience to fulfill modern data analytics on graphs spanning from thousands to hundreds of billions of vertices and edges? To bring fast and scalable graph analytics capability into production, we investigate the challenges we face in large-scale graph analytics at Twitter and propose a unified graph analytics platform for efficient, scalable, and reliable graph analytics across on-premises and cloud environments, fulfilling the requirements of diverse graph use cases at challenging scales. We also conduct quantitative benchmarking of popular graph analytics frameworks on Twitter's production-level graph use cases to validate our solution.
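As a rough, single-machine illustration of the kind of workloads such benchmarking exercises, the sketch below times PageRank and connected components on a synthetic graph with NetworkX. The graph size and algorithm choices are assumptions for illustration only; the frameworks benchmarked in the paper and graphs at Twitter scale require distributed execution.

```python
# Minimal single-machine sketch of a graph-analytics benchmark (illustrative
# only; graphs with hundreds of billions of edges need distributed
# frameworks). Runs two common workloads and reports wall-clock time.
import time
import networkx as nx

# Synthetic scale-free graph standing in for a follow/engagement graph.
graph = nx.barabasi_albert_graph(n=100_000, m=5, seed=42)

def timed(name, fn):
    start = time.perf_counter()
    result = fn()
    print(f"{name}: {time.perf_counter() - start:.2f}s")
    return result

ranks = timed("pagerank", lambda: nx.pagerank(graph, alpha=0.85))
components = timed("connected components",
                   lambda: list(nx.connected_components(graph)))

print("top vertex by PageRank:", max(ranks, key=ranks.get))
print("number of components:", len(components))
```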