Profiling is an important tool in the software developer's toolbox, used to identify the hot methods where most computational resources are spent, so that efforts to improve efficiency can be focused there. Profilers are also important in the context of Genetic Improvement (GI) of software. GI applies search-based optimisation to existing software, with many examples of success in a variety of contexts. GI generates variants of the original program, testing each for functionality and for properties such as run time or memory footprint; profiling can be used to target code variations at hot methods, increasing search efficiency. We report on an experimental study comparing two profilers shipped with different versions of the Java Development Kit (JDK), HPROF (JDK 8) and Java Flight Recorder (JFR) (JDK 8, 9, and 17), within the GI toolbox Gin, on six open-source applications, for both run time and memory use. We find that a core set of methods is labelled hot in most runs, with a long tail of methods appearing only rarely. We suggest that five repeated profiling runs are enough to overcome this noise. Perhaps unsurprisingly, changing the profiler and JDK dramatically changes the hot methods identified, so profiling must be rerun when moving to a new JDK. We also show that using profiling for test case subset selection is unwise, as it often misses relevant members of the test suite. Memory profiling shows similar general patterns to run-time profiling, but the identified hot methods are often quite different.