Performance regressions in an application can be detected by both application benchmarks and microbenchmarks. While application benchmarks stress the system under test with synthetic but realistic requests that, e.g., simulate real user traffic, microbenchmarks evaluate performance at the subroutine level by calling the function under test repeatedly. In this paper, we use a testbed microservice application that includes three performance issues to study the detection capabilities of both approaches. In extensive benchmarking experiments, we increase the severity of each performance issue stepwise, run both an application benchmark and the microbenchmark suite, and check at which severity level each benchmark detects the issue. Our results show that microbenchmarks detect all three issues earlier, some even at the lowest severity level. Application benchmarks, in contrast, raise false alarms, wrongly report performance improvements, and detect the performance issues later.