Achieving high performance in task-parallel runtime systems, especially with high degrees of parallelism and fine-grained tasks, requires tuning a large variety of behavioral parameters according to program characteristics. In the current state of the art, this tuning is generally performed in one of two ways: either by a group of experts who derive a single setup which achieves good, but not optimal, performance across a wide variety of use cases, or by monitoring a system's behavior at runtime and responding to it. The former approach invariably fails to achieve optimal performance for programs with highly distinct execution patterns, while the latter induces some overhead and cannot affect parameters which need to be fixed at compile time. In order to mitigate these drawbacks, we propose a set of novel static compiler analyses specifically designed to determine program features which affect the optimal settings for a task-parallel execution environment. These features include the parallel structure of task spawning, the granularity of individual tasks, and an estimate of the stack size required per task. Based on the results of these analyses, various runtime system parameters are then tuned at compile time. We have implemented this approach in the Insieme compiler and runtime system, and evaluated its effectiveness on a set of 12 task-parallel benchmarks running with 1 to 64 hardware threads. Across this entire space of use cases, our implementation achieves a geometric mean performance improvement of 39%.