Since the 2015 revision of the ICH E14 questions and answers (Q&A) document, concentration–QTc (C–QTc) analysis has been used to exclude the possibility that a drug has a concerning effect on the QTc interval. This has enabled the replacement of the dedicated thorough QT (TQT) study with serial electrocardiograms (ECGs) collected in routine clinical pharmacology studies, such as the first-in-human (FIH) study. The E14 revision has led to an increased proportion of FIH studies with QT evaluation as an added objective, intended to replace the TQT study.

With the more recent revision of the S7B/E14 Q&A document in February 2022, nonclinical assays and studies can be brought into regulatory decision making at the time of a marketing application. If the hERG (human ether-à-go-go-related gene) assay and the non-rodent in vivo study are conducted according to the described best practices and are negative, the previous requirement to exclude a QTc effect of >10 milliseconds in healthy subjects at plasma concentrations 2-fold above those seen in patients can be reduced to covering the concentrations seen in patients. For drugs that cannot be safely given in high doses to healthy subjects, ECG evaluation is often performed at the therapeutic dose in patients. If a QTc effect of >10 milliseconds can be excluded, an argument can be made that the drug has a low likelihood of proarrhythmic effects due to delayed repolarization, provided this is supported by negative hERG and in vivo studies performed according to best practices. In this article, we describe what clinicians involved in early clinical development need to understand about the hERG and in vivo studies in order to determine whether they meet best practices and can therefore be used in an integrated clinical/nonclinical QT/QTc risk assessment.
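To illustrate the C–QTc exclusion criterion referred to above, the following is a minimal sketch of how a linear mixed-effects C–QTc model can be fit and how the upper bound of the two-sided 90% confidence interval of the predicted QTc effect can be compared against the 10-millisecond threshold. It is not the prespecified regulatory model (which typically also includes placebo correction, baseline adjustment, and time effects); the data are simulated, the 800 ng/mL Cmax is hypothetical, and the example assumes Python with statsmodels.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data (hypothetical): baseline-corrected QTcF change (dQTcF, ms) and
# matched plasma concentration (ng/mL) for 24 subjects at 6 time points each.
rng = np.random.default_rng(seed=1)
n_subj, n_tpt = 24, 6
subject = np.repeat(np.arange(n_subj), n_tpt)
conc = rng.uniform(0.0, 1000.0, n_subj * n_tpt)
subj_effect = np.repeat(rng.normal(0.0, 3.0, n_subj), n_tpt)   # random intercepts
dqtcf = 0.5 + 0.004 * conc + subj_effect + rng.normal(0.0, 5.0, n_subj * n_tpt)
df = pd.DataFrame({"subject": subject, "conc": conc, "dQTcF": dqtcf})

# Linear mixed-effects C-QTc model: fixed intercept and concentration slope,
# random intercept per subject (a simplified form of the prespecified model).
fit = smf.mixedlm("dQTcF ~ conc", df, groups=df["subject"]).fit()

# Model-predicted QTc effect and two-sided 90% CI (normal approximation) at a
# hypothetical high clinical exposure (Cmax = 800 ng/mL). The effect is
# "excluded" in the regulatory sense if the upper bound stays below 10 ms.
cmax = 800.0
x = np.array([1.0, cmax])                      # design vector: intercept + slope*Cmax
beta = fit.fe_params.values                    # fixed-effect estimates
cov_fe = np.asarray(fit.cov_params())[:2, :2]  # covariance of the fixed effects
pred = float(x @ beta)
se_pred = float(np.sqrt(x @ cov_fe @ x))
upper90 = pred + 1.645 * se_pred
print(f"Predicted QTc effect at Cmax: {pred:.1f} ms")
print(f"Upper bound of 90% CI:        {upper90:.1f} ms (threshold: 10 ms)")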