Combining anti-cancer drugs has the potential to increase treatment efficacy. Because patient responses to drug combinations are highly variable, predictive biomarkers of synergy are required to identify which patients are likely to benefit from a drug combination. To aid biomarker identification, the DREAM challenge consortium has recently released data from a screen containing 85 cell lines and 167 drug combinations. The main challenge of these data is the low sample size: per drug combination, a median of only 14 cell lines has been screened. We found that methods widely used in single drug response prediction, such as Elastic Net regression per drug, are not predictive in this setting. Instead, we propose to use multi-task learning: training a single model simultaneously on all drug combinations, which we show results in increased predictive performance. In contrast to other multi-task learning approaches, our approach enables biomarker identification through a modified random forest variable importance score, which we illustrate using artificial data and the DREAM challenge data. Notably, we find that mutations in MYO15A are associated with synergy between ALK/IGFR dual inhibitors and PI3K pathway inhibitors in triple-negative breast cancer.
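The multi-task idea can be illustrated with a minimal sketch (using synthetic data and hypothetical feature names, not the actual DREAM challenge pipeline): rather than fitting one model per drug combination on its handful of screened cell lines, all (cell line, combination) pairs are pooled into a single training set, with the combination identity encoded as extra features, and a single random forest is fit. Permutation importance is used here as a simple stand-in for the modified variable importance score described in the abstract.

```python
# Sketch of multi-task drug synergy prediction with a single random forest.
# All data below is synthetic; the gene/combination structure is an assumption
# made purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n_cell_lines, n_combos, n_genes = 85, 20, 50

# Binary mutation features per cell line (stand-in for real genomic data).
mutations = rng.integers(0, 2, size=(n_cell_lines, n_genes)).astype(float)

rows, y = [], []
for combo in range(n_combos):
    # Each combination was screened on only a small subset of cell lines,
    # mimicking the median of 14 cell lines per combination.
    screened = rng.choice(n_cell_lines, size=14, replace=False)
    for cl in screened:
        combo_onehot = np.zeros(n_combos)
        combo_onehot[combo] = 1.0
        rows.append(np.concatenate([mutations[cl], combo_onehot]))
        # Synthetic ground truth: mutations in gene 3 drive synergy
        # for even-numbered combinations only.
        signal = 2.0 * mutations[cl, 3] if combo % 2 == 0 else 0.0
        y.append(signal + rng.normal(scale=0.2))

X, y = np.array(rows), np.array(y)

# One model trained on all combinations at once = the multi-task setting.
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Permutation importance over the gene features recovers the driver gene.
imp = permutation_importance(forest, X, y, n_repeats=5, random_state=0)
top_gene = int(np.argmax(imp.importances_mean[:n_genes]))
print("most important gene feature:", top_gene)
```

Pooling the combinations gives the model far more training examples than any single combination provides, which is why the per-drug Elastic Net baseline struggles while the joint model does not.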
People with aphasia use gestures not only to communicate relevant content but also to compensate for their verbal limitations. The Sketch Model (De Ruiter, 2000) assumes a flexible relationship between gesture and speech, with the possibility of a compensatory use of the two modalities. In its successor, the AR-Sketch Model (De Ruiter, 2017), the relationship between iconic gestures and speech is no longer assumed to be flexible and compensatory; instead, iconic gestures are assumed to express information that is redundant to speech. In this study, we evaluated the contradictory predictions of the Sketch Model and the AR-Sketch Model using data collected from people with aphasia as well as from a group of people without language impairment. We found compensatory use of gesture only in the people with aphasia, whereas the people without language impairment made very little compensatory use of gestures. Hence, the people with aphasia gestured according to the prediction of the Sketch Model, whereas the people without language impairment did not. We conclude that aphasia fundamentally changes the relationship between gesture and speech.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.