Background
We previously conducted a phase I trial for advanced colorectal cancer (CRC) using five HLA-A*2402-restricted peptides, three derived from oncoantigens and two from vascular endothelial growth factor (VEGF) receptors, and confirmed safety and immunological responses. To evaluate the clinical benefits of cancer vaccination, we conducted a phase II trial using the same peptides in combination with oxaliplatin-based chemotherapy as a first-line therapy.

Methods
The primary endpoint of the study was the response rate (RR). Progression-free survival (PFS), overall survival (OS), and immunological parameters were evaluated as secondary endpoints. The planned sample size was more than 40 patients for each of the HLA-A*2402-matched and -unmatched groups. All patients received a cocktail of the five peptides (3 mg each) mixed with 1.5 ml of incomplete Freund's adjuvant (IFA), administered subcutaneously weekly for the first 12 weeks and biweekly thereafter. Patients were classified into the two groups according to the presence or absence of the HLA-A*2402 genotype.

Results
Between February 2009 and November 2012, ninety-six chemotherapy-naïve CRC patients were enrolled with their HLA-A status masked. Ninety-three patients received mFOLFOX6 and three received XELOX. Bevacizumab was added in five patients. The RR was 62.0% in the HLA-A*2402-matched group and 60.9% in the unmatched group (p = 0.910). The median OS was 20.7 months in the HLA-A*2402-matched group and 24.0 months in the unmatched group (log-rank, p = 0.489). In the subgroup with a neutrophil-to-lymphocyte ratio (NLR) of < 3.0, patients in the HLA-matched group did not survive significantly longer than those in the unmatched group (log-rank, p = 0.289) but showed a delayed response.

Conclusions
Although no significant difference was observed for the planned statistical efficacy endpoints, a delayed response was observed in the subgroup with an NLR of < 3.0. Biomarkers such as the NLR might be useful for selecting patients likely to obtain a better treatment outcome from the vaccination.

Trial registration
UMIN000001791.
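As a pointer for readers unfamiliar with the statistics in this abstract: the OS comparisons between the HLA groups are two-sample log-rank tests. Below is a minimal sketch of such a test, assuming the lifelines Python library; the survival times and event indicators are hypothetical placeholders, not data from the trial.

    # Minimal sketch of a two-sample log-rank comparison like the OS
    # analysis above, using the lifelines library. All numbers here
    # are hypothetical placeholders, not trial data.
    import numpy as np
    from lifelines.statistics import logrank_test

    # Hypothetical overall survival in months; event = 1 means death
    # was observed, event = 0 means the observation was censored.
    os_matched = np.array([20.7, 18.2, 25.1, 9.4, 30.0])
    ev_matched = np.array([1, 1, 0, 1, 0])
    os_unmatched = np.array([24.0, 15.6, 28.3, 12.1, 33.2])
    ev_unmatched = np.array([1, 0, 1, 1, 0])

    result = logrank_test(os_matched, os_unmatched,
                          event_observed_A=ev_matched,
                          event_observed_B=ev_unmatched)
    print("log-rank p = %.3f" % result.p_value)

A non-significant p-value here, as in the abstract, indicates that the observed separation between the two survival curves is compatible with chance.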
Large-scale 3D shape retrieval has become an important research direction in content-based 3D shape retrieval. To promote this research area, we organized two Shape Retrieval Contest (SHREC) tracks on large-scale comprehensive and sketch-based 3D model retrieval in 2014. Both tracks were based on a unified large-scale benchmark that supports multimodal queries (3D models and sketches). This benchmark contains 13,680 sketches and 8,987 3D models, divided into 171 distinct classes. It was compiled to be a superset of existing benchmarks and presents a new challenge to retrieval methods, as it comprises generic models as well as domain-specific model types. Twelve and six distinct 3D shape retrieval methods competed with each other in these two contests, respectively. To measure and compare the performance of the participating and other promising Query-by-Model or Query-by-Sketch 3D shape retrieval methods, and to solicit state-of-the-art approaches, we perform a more comprehensive comparison of twenty-six retrieval methods (eighteen originally participating algorithms and eight additional state-of-the-art or new methods) by evaluating them on the common benchmark. The benchmark, results, and evaluation tools are publicly available on our websites.
Sketch-based 3D shape retrieval has become an important research topic in content-based 3D object retrieval. To foster this research area, we organized two Shape Retrieval Contest (SHREC) tracks on this topic in 2012 and 2013, based on a small-scale and a large-scale benchmark, respectively. Six and five (nine in total) distinct sketch-based 3D shape retrieval methods competed with each other in these two contests, respectively. To measure and compare the performance of the top participating and other existing promising sketch-based 3D shape retrieval methods, and to solicit state-of-the-art approaches, we perform a more comprehensive comparison of fifteen retrieval methods (four top participating algorithms and eleven additional state-of-the-art methods) by completing the evaluation of each method on both benchmarks. The benchmarks, results, and evaluation tools for the two tracks are publicly available on our websites [1,2].
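The SHREC evaluations summarized in the two abstracts above are conventionally reported with ranking-based measures such as Nearest Neighbor (NN), First Tier (FT), Second Tier (ST), E-measure, and Discounted Cumulative Gain, alongside precision-recall curves. Below is a minimal sketch of the first three, assuming a hypothetical ranked list of class labels; the tracks' official evaluation tools remain the authoritative reference.

    # Minimal sketch of the Nearest Neighbor (NN), First Tier (FT),
    # and Second Tier (ST) measures commonly reported in SHREC
    # evaluations. The labels and ranked list are hypothetical.
    def nn_ft_st(ranked_labels, query_label, class_size):
        # ranked_labels: class labels of the retrieved models, best
        # match first, with the query itself excluded from the list.
        # class_size: number of models in the query's class (query
        # included), so c relevant targets remain in the database.
        c = class_size - 1
        nn = 1.0 if ranked_labels[0] == query_label else 0.0
        ft = sum(l == query_label for l in ranked_labels[:c]) / c
        st = sum(l == query_label for l in ranked_labels[:2 * c]) / c
        return nn, ft, st

    # Query of class "chair" in a class of 4 models (3 relevant targets):
    ranked = ["chair", "table", "chair", "sofa", "chair", "table"]
    print(nn_ft_st(ranked, "chair", class_size=4))  # -> (1.0, 0.666..., 1.0)

Per-query scores like these are then averaged over all queries in the benchmark to produce the figures that the participating methods are compared on.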