BACKGROUND: In 1999, a World Health Organization (WHO) committee published histologic criteria for distinct thymoma entities (labeled as Type A, AB, B1, B2, and B3 thymomas) and for the heterogeneous group of thymic carcinomas, collectively called Type C thymomas. Whether WHO-defined histologic thymoma subtypes are of independent prognostic relevance has yet to be proved.

METHODS: Two hundred thymomas from the Shanghai Chest Hospital with a mean follow-up time of 15 years (range, 1–246 months) were studied for the relevance of WHO histologic subtype and other factors (stage, therapy, and myasthenia gravis [MG]) for survival.

RESULTS: In order of frequency, 68 patients (34.0%) had Type AB, 39 (19.5%) had Type B2, 36 (18.0%) had Type C, 27 (13.5%) had Type B3, 17 (8.5%) had Type B1, and 8 (4.0%) had Type A thymoma. Five cases (2.5%) were rare thymomas not mentioned in the WHO classification. Survival data showed significant differences among the histologic subtypes (log rank test: P < 0.001). Among patients with Type A and AB thymomas, none died of tumor; of the Type B1 thymoma patients, only one (5.9%) died, at 22 months. Type B2, B3, and C thymomas had a significantly worse prognosis, with 5-year survival rates of 75.0%, 70.0%, and 48.0%, respectively. Ninety-six patients (48.0%) were in Masaoka Stage I, 26 (13.0%) were in Stage II, 65 (32.5%) were in Stage III, and 13 (6.5%) were in Stage IV. Stage was highly significant in predicting survival (log rank test: P < 0.001). The association between histologic subtype and invasive behavior (stage) was statistically significant (P < 0.001). However, histology was an independent predictive factor of survival in Stage I and II thymomas: Type B2, B3, and C thymomas had a worse prognosis than Type A, AB, and B1 thymomas (log rank test: P < 0.003). Thirty patients (15.0%) presented with MG. MG was significantly more frequent in Type B2 and B3 than in Type A, AB, and B1 thymomas (P < 0.01). On multivariate analysis, MG had no adverse effect on survival (P = 0.17). Radiation or chemotherapy improved patients' survival at 5 and 10 years in Type B2, B3, and C thymomas (log rank test: P < 0.003).

CONCLUSIONS: Tumor stage is the most important determinant of survival in thymoma patients, but the WHO histologic subtype is an independent prognostic factor in Stage I and II thymomas, among which WHO Type A, AB, and B1 thymomas form a low-risk group. Patients with high-risk thymomas might profit from novel adjuvant radiochemotherapy regimens. Cancer 2002;95:420–9. © 2002 American Cancer Society. DOI 10.1002/cncr.10665
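The survival comparisons reported above (log-rank tests across WHO subtypes and stages, 5-year survival rates, and a multivariate check of MG) can be reproduced in outline with standard survival-analysis tooling. The following is a minimal sketch, assuming a hypothetical per-patient table with placeholder columns ("months", "died", "who_subtype", "stage", "mg", "therapy") and using the Python lifelines library; the abstract does not state which multivariate model was used, so a Cox proportional hazards fit stands in here.

```python
# Hedged sketch of the kind of survival analysis the abstract reports
# (Kaplan-Meier estimates, log-rank tests, multivariate regression) on a
# hypothetical patient table; column names are placeholder assumptions.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

df = pd.read_csv("thymoma_followup.csv")  # hypothetical file, one row per patient

# Log-rank test across WHO histologic subtypes (A, AB, B1, B2, B3, C)
result = multivariate_logrank_test(df["months"], df["who_subtype"], df["died"])
print(f"log-rank P = {result.p_value:.4f}")

# 5-year (60-month) survival per subtype from Kaplan-Meier estimates
for subtype, grp in df.groupby("who_subtype"):
    km = KaplanMeierFitter().fit(grp["months"], grp["died"], label=subtype)
    print(subtype, float(km.survival_function_at_times(60).iloc[0]))

# Multivariate model with stage, histology, MG, and therapy as covariates
# (a Cox proportional hazards fit is a common choice; the abstract does not
# specify the model actually used).
covariates = pd.get_dummies(df[["stage", "who_subtype", "mg", "therapy"]], drop_first=True)
cox = CoxPHFitter().fit(pd.concat([df[["months", "died"]], covariates], axis=1),
                        duration_col="months", event_col="died")
cox.print_summary()
```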
Video captioning is the task of automatically generating a textual description of the actions in a video. Although previous work (e.g., sequence-to-sequence models) has shown promising results in abstracting a coarse description of a short video, it remains very challenging to caption a video containing multiple fine-grained actions with a detailed description. This paper aims to address the challenge by proposing a novel hierarchical reinforcement learning framework for video captioning, where a high-level Manager module learns to design sub-goals and a low-level Worker module recognizes the primitive actions to fulfill the sub-goal. With this compositional framework to reinforce video captioning at different levels, our approach significantly outperforms all the baseline methods on a newly introduced large-scale dataset for fine-grained video captioning. Furthermore, our non-ensemble model has already achieved state-of-the-art results on the widely used MSR-VTT dataset. [Figure examples: a detailed multi-action caption, "A person sits on a bed and puts a laptop into a bag. The person stands up, puts the bag on one shoulder, and walks out of the room."; and three alternative captions for a second clip: "A woman offers her dog some food." / "A woman is eating and sharing food with her dog." / "A woman is sharing a snack with a dog."]
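The abstract describes the framework only at a high level: a Manager that designs sub-goals and a Worker that emits primitive actions (words) to fulfill each sub-goal. The sketch below is a hypothetical, stripped-down rendering of that Manager/Worker loop in PyTorch; the dimensions, greedy decoding, and simple segment-termination rule are assumptions for illustration, not the paper's actual architecture (which also involves attention and an internal critic, and is trained with reinforcement learning rather than the plain forward pass shown here).

```python
# Minimal, hypothetical Manager/Worker decoder: the Manager emits a latent sub-goal,
# the Worker emits words toward that goal and signals when the segment is done.
import torch
import torch.nn as nn

class HRLCaptioner(nn.Module):
    def __init__(self, feat_dim=512, hid=256, vocab=1000):
        super().__init__()
        self.hid = hid
        self.manager = nn.LSTMCell(feat_dim, hid)        # designs sub-goals
        self.worker = nn.LSTMCell(feat_dim + hid, hid)   # generates words toward the goal
        self.word_head = nn.Linear(hid, vocab)           # distribution over the vocabulary
        self.end_head = nn.Linear(hid, 1)                # "sub-goal fulfilled" signal

    def forward(self, video_feat, max_len=20):
        B = video_feat.size(0)
        hm, cm = torch.zeros(B, self.hid), torch.zeros(B, self.hid)
        hw, cw = torch.zeros(B, self.hid), torch.zeros(B, self.hid)
        goal, words = None, []
        for _ in range(max_len):
            if goal is None:                              # Manager sets a new sub-goal
                hm, cm = self.manager(video_feat, (hm, cm))
                goal = hm
            hw, cw = self.worker(torch.cat([video_feat, goal], dim=-1), (hw, cw))
            words.append(self.word_head(hw).argmax(dim=-1))   # greedy word choice
            if torch.sigmoid(self.end_head(hw)).mean() > 0.5:
                goal = None                               # segment done: request the next goal
        return torch.stack(words, dim=1)

# Usage on dummy pooled video features (batch of 2 clips, 512-d each)
caps = HRLCaptioner()(torch.randn(2, 512))
print(caps.shape)   # torch.Size([2, 20]) token ids
```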
Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still an under-explored problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images, which poses challenges to behavioral cloning algorithms. Furthermore, due to the limitations of automatic metrics in evaluating story quality, reinforcement learning methods with hand-crafted rewards also face difficulties in gaining an overall performance boost. Therefore, we propose an Adversarial REward Learning (AREL) framework to learn an implicit reward function from human demonstrations and then optimize policy search with the learned reward function. Though automatic evaluation indicates a slight performance boost over state-of-the-art (SOTA) methods in cloning expert behaviors, human evaluation shows that our approach achieves significant improvement in generating more human-like stories than SOTA systems.
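The AREL idea summarized above alternates between two updates: fitting a reward model so that human-written stories score higher than policy samples, and improving the story-generation policy against that learned reward. The following is a minimal sketch of one such alternating step, with toy placeholder models (TinyPolicy, TinyReward) and simplified losses standing in for AREL's actual objectives; only the overall structure is taken from the abstract.

```python
# Hedged sketch: one alternating step of (a) reward learning from human demonstrations
# and (b) policy-gradient optimization against the learned reward. All model internals
# here are toy placeholders.
import torch
import torch.nn as nn

class TinyPolicy(nn.Module):
    """Placeholder story generator: samples token ids and returns their log-probs."""
    def __init__(self, img_dim=128, vocab=500, length=12):
        super().__init__()
        self.proj = nn.Linear(img_dim, vocab)
        self.length = length

    def sample(self, images):
        logits = self.proj(images).unsqueeze(1).expand(-1, self.length, -1)
        dist = torch.distributions.Categorical(logits=logits)
        tokens = dist.sample()                      # (B, length) token ids
        return tokens, dist.log_prob(tokens)        # log-probs keep the policy's graph

class TinyReward(nn.Module):
    """Placeholder learned reward: scores how human-like an (images, story) pair is."""
    def __init__(self, img_dim=128, vocab=500, emb=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.score = nn.Linear(img_dim + emb, 1)

    def forward(self, images, story):
        s = self.emb(story).mean(dim=1)             # crude bag-of-words story encoding
        return self.score(torch.cat([images, s], dim=-1)).squeeze(-1)

def arel_step(policy, reward_model, images, human_story, pol_opt, rew_opt):
    # 1) Sample a story from the current policy, keeping per-token log-probs.
    sampled_story, log_probs = policy.sample(images)

    # 2) Reward-learning step: push human stories' scores up and samples' scores down
    #    (a simple adversarial/IRL-style surrogate for the paper's exact loss).
    rew_opt.zero_grad()
    rew_loss = -(reward_model(images, human_story).mean()
                 - reward_model(images, sampled_story).mean())
    rew_loss.backward()
    rew_opt.step()

    # 3) Policy step: REINFORCE with the learned reward as the return signal.
    pol_opt.zero_grad()
    with torch.no_grad():
        r = reward_model(images, sampled_story)     # one scalar reward per story
    policy_loss = -(r * log_probs.sum(dim=1)).mean()
    policy_loss.backward()
    pol_opt.step()
    return rew_loss.item(), policy_loss.item()

# One alternating step on dummy photo-stream features and tokenized "human" stories.
policy, reward_model = TinyPolicy(), TinyReward()
pol_opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
rew_opt = torch.optim.Adam(reward_model.parameters(), lr=1e-4)
images = torch.randn(4, 128)
human_story = torch.randint(0, 500, (4, 12))
print(arel_step(policy, reward_model, images, human_story, pol_opt, rew_opt))
```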
Existing question answering datasets focus on dealing with homogeneous information, based either only on text or only on KB/Table information. However, as human knowledge is distributed over heterogeneous forms, using homogeneous information alone might lead to severe coverage problems. To fill in the gap, we present HybridQA, a new large-scale question-answering dataset that requires reasoning over heterogeneous information. Each question is aligned with a Wikipedia table and multiple free-form corpora linked to the entities in the table. The questions are designed to aggregate both tabular information and text information, i.e., the lack of either form would render the question unanswerable. We test three different models: 1) a table-only model, 2) a text-only model, and 3) a hybrid model that combines heterogeneous information to find the answer. The experimental results show that the EM scores obtained by the two baselines are below 20%, while the hybrid model can achieve an EM over 40%. This gap suggests the necessity of aggregating heterogeneous information in HybridQA. However, the hybrid model's score is still far behind human performance. Hence, HybridQA can serve as a challenging benchmark for studying question answering with heterogeneous information.
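The abstract reports results as exact-match (EM) scores. The snippet below is a sketch of how EM is typically computed, using the common SQuAD-style answer normalization; the file layout and field names are illustrative assumptions, not HybridQA's official evaluation script.

```python
# Hedged sketch of the exact-match (EM) metric: normalize both prediction and gold
# answer (lowercase, strip punctuation/articles/extra whitespace), then compare.
import json
import re
import string

def normalize(ans: str) -> str:
    ans = ans.lower()
    ans = "".join(ch for ch in ans if ch not in string.punctuation)
    ans = re.sub(r"\b(a|an|the)\b", " ", ans)
    return " ".join(ans.split())

def exact_match(predictions: dict, gold: dict) -> float:
    hits = sum(normalize(predictions.get(qid, "")) == normalize(answer)
               for qid, answer in gold.items())
    return 100.0 * hits / len(gold)

if __name__ == "__main__":
    # predictions.json / gold.json: {question_id: answer_string} (assumed layout)
    with open("predictions.json") as f:
        preds = json.load(f)
    with open("gold.json") as f:
        gold = json.load(f)
    print(f"EM = {exact_match(preds, gold):.1f}")
```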