2016
DOI: 10.1038/nrd.2016.88

Failed trials for central nervous system disorders do not necessarily invalidate preclinical models and drug targets

Abstract: A recent article identified five key technical determinants that make substantial contributions to the outcome of drug R&D projects (Lessons learned from the fate of AstraZeneca's drug pipeline: a five-dimensional framework. Nat. Rev. Drug Discov. 13, 419-431 (2014)) [1]. Careful consideration of such determinants might be particularly valuable in the fields of neurology and psychiatry, in which successful drug development has declined precipitously over the past decade. This decline has largely been fuelled …

Cited by 70 publications (43 citation statements)
References 10 publications
“…Therefore, failed clinical trials do not necessarily invalidate animal models but generate important questions about the translational process. Obviously, other issues such as the lack of data robustness, data generalizability, and target engagement when designing and evaluating preclinical studies should also be carefully taken into consideration (Bespalov et al, ). However, the academic community, as well as the industry, should take advantage of preclinical knowledge and thoroughly evaluate and interpret preclinical experiments before implementing clinical trials (Kokras & Dalla, ).…”
Section: Overall Discussion
confidence: 99%
“…Considering the number of animals used in subsequent attempts to replicate findings from other laboratories, however, due care and optimal experimental design is crucial not only to reduce animal use, but to improve the replication of results. Lack of replicability associated with biased publication of positive data is not only a serious issue undermining the value and credibility of industrial and academic preclinical research (Bespalov et al, 2016; Frye et al, 2015; Peers et al, 2012), but also represents a weakness in behavioural modelling in animals as a whole (Jarvis and Williams, 2016; Landis et al, 2012). The lack of reproducibility and robustness has led to the creation of initiatives by scientific publishing companies [ARRIVE, (McGrath et al, 2010; McNutt, 2014)], and endorsed by scientific associations such as the European College of Neuropsychopharmacology, academic and industrial consortia (http://addconsortium.org/) as well as reviewers (http://f1000research.…”
Section: Reproducibility of Preclinical Results
confidence: 99%
“…Bespalov and colleagues have recently focused on factors which may underlie apparent lack of validity of animal models (Bespalov et al, 2016); some of which are reiterated in this review. These factors include the robustness and generalizability of preclinical data by which it is sometimes difficult to replicate preclinical studies, even from within the same laboratory (Lindner, 2007), let alone across different laboratories [e.g., (Crabbe et al, 1999;Scott et al, 2008;Wahlsten et al, 2003)].…”
Section: Introduction
confidence: 97%
“…Moreover, experienced staff members using the Kinoscope can streamline and audit the training of new members, by making use primarily of the visual maps, thus improving the consistency and reproducibility of scoring by novice researchers. Recently several concerns have been raised with regards to the validity of experimental data (Steckler, 2015; Bespalov et al, 2016). Many factors should be taken into account in improving the quality of experimental studies (Kilkenny et al, 2009; McNutt, 2014; Macleod et al, 2015) and perhaps another overlooked factor is the quality of manual scoring of behavioral experiments, which in turn may result in poor inter-rater agreement and inevitably low reproducibility.…”
Section: Results
confidence: 99%