In this paper, we describe Bridges, a new HPC resource that will integrate advanced memory technologies with a uniquely flexible, user-focused, data-centric environment to empower new research communities, bring desktop convenience to HPC, connect to campuses, and drive complex workflows. Bridges will differ from traditional HPC systems and support new communities through extensive interactivity, gateways (convenient web interfaces that hide complex functionality and ease access to HPC resources) and tools for gateway building, persistent databases and web servers, high-productivity programming languages, and virtualization. Bridges will feature three tiers of processing nodes with 128 GB, 3 TB, and 12 TB of hardware-enabled coherent shared memory per node to support memory-intensive applications and ease of use, together with persistent database and web nodes and nodes for logins, data transfer, and system management. State-of-the-art Intel® Xeon® CPUs and NVIDIA Tesla GPUs will power Bridges' compute nodes. Multiple filesystems will provide handling optimized for different data needs: a high-performance, parallel, shared filesystem; node-local filesystems; and memory filesystems. Bridges' nodes and parallel filesystem will be interconnected by the Intel Omni-Path Fabric, configured in a topology developed by PSC to be optimal for the anticipated data-centric workload. Bridges will be a resource on XSEDE, the NSF Extreme Science and Engineering Discovery Environment, and will interoperate with other advanced cyberinfrastructure resources. Through a pilot project with Temple University, Bridges will develop infrastructure and processes for campus bridging, offloading jobs to the other site during periods of unusually high load and facilitating cross-site data management. Education, training, and outreach activities will raise awareness of Bridges and data-intensive science across K-12 and university communities, industry, and the general public.
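To make the three-tier memory design concrete, the following minimal Python sketch shows how a workflow script might map a job's estimated memory footprint onto one of the per-node memory tiers described above. The tier labels, selection policy, and function names here are purely illustrative assumptions, not part of Bridges' actual scheduler or configuration; only the per-node capacities (128 GB, 3 TB, 12 TB) come from the text.

```python
# Illustrative sketch only: the per-node capacities mirror the three memory
# tiers described above (128 GB, 3 TB, 12 TB), but the tier labels and the
# selection policy are hypothetical, not Bridges' actual configuration.

TIERS_GB = [
    ("regular-memory", 128),        # hypothetical label for 128 GB/node tier
    ("large-memory",   3 * 1024),   # hypothetical label for 3 TB/node tier
    ("extreme-memory", 12 * 1024),  # hypothetical label for 12 TB/node tier
]

def choose_tier(required_gb: float) -> str:
    """Return the smallest tier whose per-node memory covers the request."""
    for name, capacity_gb in TIERS_GB:
        if required_gb <= capacity_gb:
            return name
    raise ValueError(
        f"No single node offers {required_gb} GB; the job would need to be "
        "distributed across multiple nodes."
    )

if __name__ == "__main__":
    # Example footprints: a desktop-scale job, a large genome assembly-sized
    # job, and a job needing multi-terabyte coherent shared memory.
    for footprint_gb in (96, 900, 8000):
        print(f"{footprint_gb} GB -> {choose_tier(footprint_gb)}")
```

The point of the sketch is simply that hardware-enabled coherent shared memory lets a user pick the smallest tier that fits the whole working set on one node, rather than restructuring the application for distributed memory.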