Dynamic slicing techniques compute program dependencies to find all statements that affect the value of a variable at a program point for a specific execution. Despite their many potential uses, their applicability is limited by the fact that they typically cannot scale beyond small applications. We believe that at the heart of this limitation is the use of memory references to identify data dependencies. In particular, working with memory references hinders a distinct treatment of the code-to-be-sliced (e.g., classes the user is interested in) from the rest of the code (including libraries and frameworks). The ability to perform a coarser-grained analysis for the code that is not under focus may provide performance gains and could become one avenue toward scalability.

In this paper, we propose a novel approach that completely replaces the registering and processing of memory references with a memory analysis model that works with program symbols (i.e., terms). In fact, this approach enables the alternative of not instrumenting (and thus not generating any trace for) code that is not part of the code-to-be-sliced. We report on an implementation of an abstract dynamic slicer for C#, DynAbs, and an evaluation that shows how large and relevant parts of Roslyn and PowerShell, two of the largest and most modern C# applications that can be found on GitHub, can be sliced for their test case assertions in at most a few minutes. We also show how narrowing the code-to-be-sliced focus can bring important speedups with marginal relative precision loss.

Index terms - dynamic slicing, program slicing.
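To make the notion of a dynamic slice concrete, the following minimal C# sketch (not taken from the paper; all names and values are hypothetical) illustrates a slicing criterion and which statements a dynamic slicer would report for one particular execution:

```csharp
// Illustrative example only: dynamic slicing w.r.t. the value of `result`
// at the assertion, for the execution in which `flag` is true.
using System;
using System.Diagnostics;

class SliceExample
{
    static void Main()
    {
        bool flag = true;          // in slice: decides which branch executes
        int a = 2;                 // in slice: flows into `result`
        int b = 10;                // not in slice for this execution: only reaches the unexecuted branch and the print below
        int result;

        if (flag)                  // in slice: the executed definition of `result` is control-dependent on it
            result = a * 3;        // in slice: defines the criterion value
        else
            result = b + 1;        // not executed in this run, hence not in the dynamic slice

        Console.WriteLine(b);      // uses `b` but does not affect `result`: not in slice

        Debug.Assert(result == 6); // slicing criterion: value of `result` at this point
    }
}
```

In this execution, the dynamic slice for `result` at the assertion contains only the statements marked "in slice"; a static slice would additionally include the `else` branch and the definition of `b`.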