Macroautophagy (autophagy), a process for lysosomal degradation of organelles and long-lived proteins, has been linked to various pathologies, including cancer, and to the cellular response to anticancer therapies. In the human estrogen receptor-positive MCF7 breast adenocarcinoma cell line, treatment with the endocrine therapeutic tamoxifen was previously shown to induce cell cycle arrest, cell death, and autophagy. To investigate specifically the role of autophagy in tamoxifen-treated breast cancer cell lines, we used an siRNA approach targeting three different autophagy genes: Atg5, Beclin-1, and Atg7. We found that knockdown of autophagy in combination with tamoxifen treatment in MCF7 cells results in decreased cell viability concomitant with increased mitochondria-mediated apoptosis. The combination of autophagy knockdown and tamoxifen treatment similarly reduced cell viability in the estrogen receptor-positive T-47D and tamoxifen-resistant MCF7-HER2 breast cancer cell lines. Together, these results indicate that autophagy has a primarily pro-survival role following tamoxifen treatment, and suggest that autophagy knockdown may be useful in a combination therapy setting to sensitize breast cancer cells, including tamoxifen-resistant breast cancer cells, to tamoxifen therapy.
Given a question about a prototypical situation, such as "Name something that people usually do before they leave the house for work," a human can easily answer it via acquired experience. There can be multiple right answers for such questions, with some more common for a situation than others. This paper introduces a new question answering dataset for training and evaluating common sense reasoning capabilities of artificial intelligence systems in such prototypical situations. The training set is gathered from an existing set of questions played in a long-running international game show, FAMILY-FEUD. The hidden evaluation set is created by gathering answers for each question from 100 crowd-workers. We also propose a generative evaluation task where a model has to output a ranked list of answers, ideally covering all prototypical answers for a question. After presenting multiple competitive baseline models, we find that human performance still exceeds model scores on all evaluation metrics with a meaningful gap, supporting the challenging nature of the task.
Germline BRCA mutations (gBRCAm) are diagnosed in approximately 5% of unselected breast cancer patients. Olaparib is a new treatment option for patients with a gBRCAm who have metastatic HER2-negative breast cancer. Areas covered: Olaparib is an oral poly (ADP-ribose) polymerase inhibitor that has been shown in phase I-III clinical trials to have single-agent efficacy in breast cancer patients with gBRCAm. The recent phase III OlympiAD study demonstrated a statistically significant progression-free survival benefit compared with the chemotherapy control arm, although an overall survival benefit has not been demonstrated. The most common adverse events seen with olaparib include nausea, anemia, and vomiting. The most common grade 3 adverse events are anemia and neutropenia. Expert commentary: The US FDA approved olaparib tablets in January 2018 for the treatment of patients with a gBRCAm and metastatic HER2-negative breast cancer. This is a well-tolerated and effective treatment option for this patient population, particularly for patients with triple-negative breast cancer, for whom chemotherapy is the only alternative. More data are needed to understand the role of olaparib in combination with endocrine therapy, other targeted agents, and chemotherapy, as well as sequentially with platinum chemotherapy in the metastatic setting.