Purpose - The purpose of this paper is to propose a conceptual model for investigating the determinants of information retweeting in microblogging, based on the Heuristic-Systematic Model.
Design/methodology/approach - Microblogging data about emergency events are collected from Sina microblogging (http://weibo.com) and analyzed with text mining techniques. The proposed hypotheses are tested with logistic and multiple linear regressions.
Findings - The results show that source trustworthiness, source expertise, source attractiveness, and the number of multimedia items have significant effects on information retweeting. In addition, source expertise moderates the effects of user trustworthiness and content objectivity on information retweeting in microblogging.
Practical implications - This study provides an in-depth understanding of why information about emergency events diffuses so rapidly in microblogging. Based on these findings, emergency management organizations in China can use microblogging to spread useful information, and the findings also have practical implications for microblogging system designers.
Originality/value - The primary value of this paper lies in providing a better understanding of information retweeting in microblogging based on the Heuristic-Systematic Model. Organizations that would like to adopt microblogging platforms in emergency situations to improve their emergency response capability can benefit from the findings of this study.
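As a rough illustration of the kind of analysis the abstract describes (not the authors' actual code or data), the sketch below tests heuristic and systematic cues against a binary retweet outcome with logistic regression and against retweet counts with a linear model including an interaction term; the file name and column names are hypothetical.

# Illustrative sketch only: hypothesis tests of retweet determinants.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("weibo_emergency_posts.csv")  # hypothetical dataset

predictors = ["source_trustworthiness", "source_expertise",
              "source_attractiveness", "num_multimedia",
              "content_objectivity"]
X = sm.add_constant(df[predictors])

# Logistic regression: was the post retweeted (1) or not (0)?
logit_res = sm.Logit(df["is_retweeted"], X).fit()
print(logit_res.summary())

# Moderation check: interaction of source expertise with content objectivity
df["expertise_x_objectivity"] = df["source_expertise"] * df["content_objectivity"]
X_mod = sm.add_constant(df[predictors + ["expertise_x_objectivity"]])
ols_res = sm.OLS(df["retweet_count"], X_mod).fit()
print(ols_res.summary())

A significant coefficient on the interaction term would be consistent with the moderating role of source expertise reported in the findings.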
The Smith-Waterman (SW) algorithm, based on dynamic programming, is a well-known classical method for high-precision sequence matching and has become the gold standard for evaluating sequence alignment software. In this paper, we propose a fine-grained parallelized SW algorithm with affine gap penalties and implement a parallel computing structure on an FPGA platform to accelerate SW with backtracking. We analyze the dynamic parallel computing features of the anti-diagonal elements and the storage expansion problem caused by the backtracking stage, and propose a series of optimization strategies to eliminate data dependencies, reduce storage requirements, and overlap memory access latency. Our implementation supports multi-type, large-scale biological sequence alignment applications. We obtain a speedup between 3.6 and 25.2 over the typical SW algorithm running on a general-purpose computer configured with an Intel Core i5 3.2 GHz CPU. Moreover, our work is superior to other FPGA implementations in both array size and clock frequency, and the experimental results show that it achieves performance close to that of the latest GPU implementation while consuming only about 26% of the power of the GPU platform.
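For background (this is a plain software reference, not the paper's FPGA design), the sketch below implements the affine-gap SW recurrences in the Gotoh formulation and notes the anti-diagonal independence that hardware wavefront parallelization exploits; scoring parameters and test sequences are illustrative.

# Minimal affine-gap Smith-Waterman (Gotoh recurrences), score only.
def smith_waterman_affine(a, b, match=2, mismatch=-1, gap_open=3, gap_extend=1):
    n, m = len(a), len(b)
    NEG = float("-inf")
    H = [[0] * (m + 1) for _ in range(n + 1)]    # best local score ending at (i, j)
    E = [[NEG] * (m + 1) for _ in range(n + 1)]  # alignment ending in a gap along a
    F = [[NEG] * (m + 1) for _ in range(n + 1)]  # alignment ending in a gap along b
    best = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            E[i][j] = max(E[i][j - 1] - gap_extend, H[i][j - 1] - gap_open)
            F[i][j] = max(F[i - 1][j] - gap_extend, H[i - 1][j] - gap_open)
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0, H[i - 1][j - 1] + s, E[i][j], F[i][j])
            best = max(best, H[i][j])
    return best

# Each cell (i, j) depends only on (i-1, j), (i, j-1), and (i-1, j-1), so all
# cells on the same anti-diagonal (i + j = const) are independent and can be
# computed in parallel, which is the basis of FPGA processing-element arrays.
print(smith_waterman_affine("ACACACTA", "AGCACACA"))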
The factor-augmented vector autoregressive (FAVAR) model, first proposed by Bernanke, Boivin, and Eliasz (2005, QJE), is now widely used in macroeconomics and finance. In this model, observable and unobservable factors jointly follow a vector autoregressive process, which in turn drives the comovement of a large number of observable variables. We study the identification restrictions in the presence of observable factors. We propose a likelihood-based two-step method to estimate the FAVAR model that explicitly accounts for the factors being partially observed. We then provide an inferential theory for the estimated factors, factor loadings, and dynamic parameters in the VAR process. We show how and why the limiting distributions differ from existing results.
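For reference, the baseline FAVAR specification of Bernanke, Boivin, and Eliasz (2005) can be written as follows; the notation here is a conventional rendering chosen for illustration, not necessarily the paper's own.

\[
\begin{aligned}
\begin{pmatrix} F_t \\ Y_t \end{pmatrix} &= \Phi(L)\begin{pmatrix} F_{t-1} \\ Y_{t-1} \end{pmatrix} + v_t, \\
X_t &= \Lambda^f F_t + \Lambda^y Y_t + e_t,
\end{aligned}
\]

where $Y_t$ collects the observable factors, $F_t$ the latent factors, $X_t$ the large panel of observable variables, $\Lambda^f$ and $\Lambda^y$ the factor loadings, $\Phi(L)$ a lag polynomial, and $v_t$ and $e_t$ the VAR innovations and idiosyncratic errors. A two-step approach first recovers $F_t$ from $X_t$ while accounting for the observed $Y_t$, and then estimates the VAR in $(F_t', Y_t')'$.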