Identifying biomarkers for tuberculosis (TB) is an ongoing challenge in developing immunological correlates of infection outcome and protection. Biomarker discovery is also necessary for aiding the design and testing of new treatments and vaccines. To effectively predict biomarkers of infection progression in any disease, including TB, large amounts of experimental data are required to reach statistical power and make accurate predictions. We took a two-pronged approach, combining experiments and computational modeling, to address this problem. We first collected 200 blood samples over a 2-year period from 28 non-human primates (NHPs) infected with a low dose of Mycobacterium tuberculosis. From each sample we identified T cells and the cytokines they produced (singly and in combination), together with each monkey's status and infection progression data. Machine learning techniques were used to interrogate the experimental NHP datasets but did not identify any potential TB biomarker. In parallel, we used these extensive novel NHP datasets to build and calibrate a multi-organ computational model that links events at the site of infection (e.g., the lung) at the scale of a single granuloma with blood-level readouts that can be tracked in monkeys and humans. We then generated a large in silico repository of granulomas coupled to lymph node and blood dynamics and developed an in silico tool that scales granuloma-level results to the full host scale to identify what best predicts Mycobacterium tuberculosis (Mtb) infection outcomes. Analysis of the in silico blood measures identifies Mtb-specific frequencies of effector T cell phenotypes at various time points post-infection as promising indicators of infection outcome. We emphasize that pairing wet-lab and computational approaches holds great promise for accelerating TB biomarker discovery.
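A minimal sketch of the kind of machine learning interrogation described above: a cross-validated classifier trained on per-sample frequencies of cytokine-producing T cells to predict infection outcome. The abstract does not specify the models used; the random forest, the feature layout, and the synthetic placeholder data below are assumptions for illustration only.

```python
# Sketch (not the authors' pipeline): cross-validated classification of
# infection outcome from per-sample T cell cytokine readouts.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder features: frequencies of Mtb-specific T cells producing single
# or multiple cytokines (e.g., IFN-g, TNF, IL-2) in each blood sample.
n_samples, n_features = 200, 12
X = rng.random((n_samples, n_features))
# Placeholder binary outcome label per sample (e.g., controlled vs. active disease).
y = rng.integers(0, 2, size=n_samples)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```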
Customer reviews submitted at Internet travel portals are an important yet underexplored new resource for obtaining feedback on customer experience for the hospitality industry. These data are often voluminous and unstructured, presenting analytical challenges for traditional tools that were designed for well-structured, quantitative data. We adapt methods from natural language processing and machine learning to illustrate how the hotel industry can leverage this new data source by performing automated evaluation of the quality of writing, sentiment estimation, and topic extraction. By analyzing 5,830 reviews from 57 hotels in Moscow, Russia, we find that (i) negative reviews tend to focus on a small number of topics, whereas positive reviews tend to touch on a greater number of topics; (ii) negative sentiment inherent in a review has a larger downward impact than corresponding positive sentiment; and (iii) negative reviews contain a larger variation in sentiment on average than positive reviews. These insights can be instrumental in helping hotels achieve their strategic, financial, and operational objectives.
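To make the topic-extraction step concrete, here is a hedged sketch using a standard topic model on a handful of invented hotel reviews. The abstract does not name the specific models; LDA stands in for the topic-extraction component, and the example texts are placeholders.

```python
# Illustrative topic extraction from review text with scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "The room was clean and the staff were friendly and helpful.",
    "Breakfast was cold and the front desk was rude.",
    "Great location near the metro, but the wifi kept dropping.",
    "Quiet rooms, comfortable beds, excellent value for the price.",
]

# Bag-of-words representation of the review corpus.
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)

# Fit a small topic model and print the top words per topic.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-5:][::-1]]
    print(f"Topic {k}: {', '.join(top)}")
```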
We study the behavior of the interbank market before, during, and after the 2008 financial crisis. Leveraging recent advances in network analysis, we study two network structures: a correlation network based on publicly traded bank returns, and a physical network based on interbank lending transactions. While the two networks behave similarly pre-crisis, during the crisis the correlation network shows an increase in interconnectedness while the physical network shows a marked decrease in interconnectedness. Moreover, these networks respond differently to monetary and macroeconomic shocks. Physical networks forecast liquidity problems, while correlation networks forecast financial crises.
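A hedged sketch of the correlation-network construction described above: pairwise correlations of bank stock returns are thresholded into a graph, and network density serves as one simple interconnectedness measure. The synthetic returns, the 0.5 cutoff, and the density metric are illustrative assumptions, not the paper's calibration.

```python
# Correlation network from bank returns, with density as a crude
# interconnectedness proxy.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)

n_banks, n_days = 10, 250
returns = rng.normal(0, 0.01, size=(n_banks, n_days))  # placeholder daily returns

corr = np.corrcoef(returns)   # bank-by-bank correlation matrix
threshold = 0.5               # illustrative cutoff for drawing an edge

G = nx.Graph()
G.add_nodes_from(range(n_banks))
for i in range(n_banks):
    for j in range(i + 1, n_banks):
        if abs(corr[i, j]) >= threshold:
            G.add_edge(i, j, weight=corr[i, j])

print(f"Correlation-network density: {nx.density(G):.3f}")
```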
Time series of graphs are increasingly prevalent in modern data and pose unique challenges to visual exploration and pattern extraction. This paper describes the development and application of matrix factorizations for exploration and time-varying community detection in time-evolving graph sequences. The matrix factorization model allows the user to home in on and display interesting, underlying structure and its evolution over time. The methods are scalable to weighted networks with a large number of time points or nodes and can accommodate sudden changes to graph topology. Our techniques are demonstrated with several dynamic graph series from both synthetic and real-world data, including citation and trade networks. These examples illustrate how users can steer the techniques and combine them with existing methods to discover and display meaningful patterns in sizable graphs over many time points.
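One generic way to factorize a graph sequence, sketched below: vectorize each snapshot's adjacency matrix, stack the snapshots into a time-by-edge matrix, and apply NMF so that one factor captures a temporal activity pattern and the other an edge (community-like) pattern. This is a stand-in for the general idea, not the specific factorization model developed in the paper.

```python
# Stack adjacency snapshots and factorize with NMF.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)

n_nodes, n_steps, rank = 20, 15, 3

# Placeholder sequence of weighted adjacency matrices.
snapshots = [rng.random((n_nodes, n_nodes)) for _ in range(n_steps)]
M = np.stack([A.ravel() for A in snapshots])   # shape: (n_steps, n_nodes**2)

# W gives each factor's activity over time; H encodes its edge pattern.
model = NMF(n_components=rank, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(M)
H = model.components_

print("Temporal loadings (time x factor):", W.shape)
print("Edge patterns (factor x edge):", H.shape)
```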
Problem description: Measuring quality in the service industry remains a challenge. Existing methodologies are often costly and unscalable. Furthermore, understanding how elements of service quality contribute to the performance of service providers continues to be a concern in the service industry. In this paper, we address these challenges in the restaurant sector, a vital component of the service industry. Academic/practical relevance: Our work provides a scalable methodology for measuring the quality of service providers using the vast amount of text in social media. The quality metrics proposed are associated with economic outcomes for restaurants and can help predict future restaurant performance. Methodology: We use text present in online reviews on Yelp.com to identify and extract service dimensions using nonnegative matrix factorization for a large set of restaurants located in a major city in the United States. We subsequently validate these service dimensions as proxies for service quality using external data sources and a series of laboratory experiments. Finally, we use econometrics to test the relationship between these dimensions and restaurant survival as additional validation. Results: We find that our proposed service quality dimensions are scalable, match industry standards, and are correctly identified by subjects in a controlled setting. Furthermore, we show that specific service dimensions are significantly correlated with the survival of merchants, even after controlling for competition and other factors. Managerial implications: This work has implications for the strategic use of text analytics in the context of service operations, where an increasingly large text corpus is available. We discuss the benefits of this work for service providers and platforms, such as Yelp and OpenTable.
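A minimal sketch of the NMF-based extraction of service dimensions from review text, following the methodology named in the abstract; the sample reviews, the TF-IDF preprocessing, and the number of dimensions are invented for illustration.

```python
# TF-IDF + nonnegative matrix factorization over restaurant review text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

reviews = [
    "Our server was attentive and the food arrived quickly.",
    "Long wait for a table and the waiter forgot our drinks.",
    "Delicious pasta, cozy atmosphere, reasonable prices.",
    "Overpriced entrees and a noisy dining room.",
]

# TF-IDF representation of the review corpus.
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(reviews)

# Each nonnegative component is a candidate service dimension.
nmf = NMF(n_components=2, random_state=0, max_iter=500).fit(X)
terms = vec.get_feature_names_out()
for k, comp in enumerate(nmf.components_):
    top = [terms[i] for i in comp.argsort()[-5:][::-1]]
    print(f"Dimension {k}: {', '.join(top)}")
```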