“…One should contrast this with the recent connectomics system of Roncal et al. [58], which uses a cluster of 100 AMD Opteron cores, 1 terabyte of memory, and 27 GeForce GTX Titan cards to process a terabyte in 4.5 weeks, and the state-of-the-art distributed MapReduce-based system of Plaza and Berg [56], which uses 512 cores and 2.9 terabytes of memory to process a terabyte of data in 140 hours (not including skeletonization). Importantly, the speed of our pipeline does not come at the expense of accuracy, which is on par with or better than that of existing systems in the literature [29,56,58], as measured by the widely accepted variation of information (VI) metric [44]. Our high-level pipeline design builds on prior work [29,42,51,52,55,56].…”
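As a brief aside on the metric cited above: variation of information is the standard information-theoretic distance between two segmentations, and a lower VI indicates closer agreement with ground truth. The following is a minimal statement of the usual definition from the clustering-comparison literature, not a quotation from [44]; the symbols $X$ and $Y$ for the proposed and ground-truth labelings are our notation, not the paper's.

\[
\mathrm{VI}(X, Y) \;=\; H(X \mid Y) + H(Y \mid X) \;=\; H(X) + H(Y) - 2\, I(X; Y),
\]

where $H(\cdot)$ denotes Shannon entropy and $I(X; Y)$ is the mutual information between the segment labels that the two labelings assign to the same voxels.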