We present a C++ library for transparent memory and compute abstraction across CPU and GPU architectures. The library combines generic data structures, such as vectors, multi-dimensional arrays, maps, graphs, and sparse grids, with generic basic algorithms, such as arbitrary-dimensional convolutions, copying, merging, sorting, prefix sums, reductions, neighbor search, and filtering. The memory layout of the data structures is adapted at compile time using C++ tuples, with optional double-mapping of memory between host and device and the ability to use memory managed by external libraries without data copying. We combine this transparent memory layout with generic thread-parallel algorithms under two alternative interfaces: a CUDA-like kernel interface and a lambda-function interface. We quantify the memory and compute performance, as well as the portability, of our implementation in micro-benchmarks, showing that the abstractions introduce negligible performance overhead, and we compare against the current state of the art in a real-world scientific application from computational fluid mechanics.
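To give a flavor of the kind of compile-time memory-layout abstraction described above, the following is a minimal, self-contained sketch; it is not the library's actual API, and all names in it (`Storage`, `AoS`, `SoA`, `accelerate`) are hypothetical. It only illustrates how a template parameter can switch a container between array-of-structs and struct-of-arrays storage while user code stays unchanged.

```cpp
#include <array>
#include <cstddef>
#include <iostream>

// Illustrative only: a record with two properties.
struct Particle { float mass; float velocity; };

// Layout tags selected at compile time.
struct AoS {};  // array of structs
struct SoA {};  // struct of arrays

template <typename Layout, std::size_t N> struct Storage;

// AoS specialization: one contiguous array of whole records.
template <std::size_t N>
struct Storage<AoS, N> {
    std::array<Particle, N> data{};
    float& mass(std::size_t i)     { return data[i].mass; }
    float& velocity(std::size_t i) { return data[i].velocity; }
};

// SoA specialization: one contiguous array per property
// (often favorable on GPUs due to coalesced memory access).
template <std::size_t N>
struct Storage<SoA, N> {
    std::array<float, N> mass_{};
    std::array<float, N> velocity_{};
    float& mass(std::size_t i)     { return mass_[i]; }
    float& velocity(std::size_t i) { return velocity_[i]; }
};

// User code is written once against the accessor interface;
// the memory layout is fixed by the compile-time template argument.
template <typename S>
void accelerate(S& s, std::size_t n, float dv) {
    for (std::size_t i = 0; i < n; ++i) s.velocity(i) += dv;
}

int main() {
    Storage<AoS, 4> aos;
    Storage<SoA, 4> soa;
    accelerate(aos, 4, 1.0f);
    accelerate(soa, 4, 1.0f);
    std::cout << aos.velocity(0) << " " << soa.velocity(0) << "\n";  // prints: 1 1
    return 0;
}
```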