Purpose: Ulinastatin, a serine protease inhibitor, inhibits several pro-inflammatory proteases and decreases inflammatory cytokine levels and mortality in experimental sepsis. We studied the effect of ulinastatin on 28-day all-cause mortality in a double-blind trial in patients with severe sepsis in seven Indian hospitals.
Methods: Patients with sepsis were randomized within 48 h of the onset of one or more organ failures to receive intravenous ulinastatin (200,000 IU) or placebo every 12 h for 5 days.
Results: Of 122 randomized subjects, 114 completed the study (55 receiving ulinastatin, 59 receiving placebo). At baseline, the mean APACHE II score was 13.4 (SD 4.4), 48 patients (42%) were receiving mechanical ventilation, 58 (51%) were on vasopressors, and 35% had multiple organ failure. In the modified intention-to-treat analysis (patients receiving six or more doses of the study drug), 28-day all-cause mortality was 7.3% (4 deaths) with ulinastatin versus 20.3% (12 deaths) with placebo (p = 0.045). In multivariate analysis, treatment with ulinastatin (odds ratio 0.26, 95% CI 0.07–0.95; p = 0.042) was independently associated with lower 28-day all-cause mortality. However, the mortality difference did not reach statistical significance in the intention-to-treat analysis [10.2% (6/59 deaths) with ulinastatin versus 20.6% (13/63 deaths) with placebo; p = 0.11]. The ulinastatin group had a lower incidence of new-onset organ failure (10 vs. 26 patients, p = 0.003), more ventilator-free days (mean ± SD 19.4 ± 10.6 vs. 10.2 ± 12.5 days, p = 0.019), and a shorter hospital stay (11.8 ± 7.1 vs. 24.2 ± 7.2 days, p < 0.001).
Conclusions: In this pilot study, intravenous administration of ulinastatin reduced mortality in patients with severe sepsis in the modified intention-to-treat analysis, but not in the intention-to-treat analysis.
Electronic supplementary material: The online version of this article (doi:10.1007/s00134-014-3278-8) contains supplementary material, which is available to authorized users.
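The sketch below recomputes the mortality comparisons reported above from the underlying 2x2 tables, assuming an uncorrected chi-square test (the abstract does not state which test the investigators used):

# Hedged check of the reported mortality comparisons; the choice of an
# uncorrected chi-square test is an assumption, not taken from the paper.
from scipy.stats import chi2_contingency

mitt = [[4, 55 - 4], [12, 59 - 12]]   # modified ITT: [deaths, survivors] for ulinastatin, placebo
itt = [[6, 59 - 6], [13, 63 - 13]]    # ITT: [deaths, survivors] for ulinastatin, placebo

for name, table in (("mITT", mitt), ("ITT", itt)):
    chi2, p, _, _ = chi2_contingency(table, correction=False)
    (d_u, s_u), (d_p, s_p) = table
    print(f"{name}: {100 * d_u / (d_u + s_u):.1f}% vs {100 * d_p / (d_p + s_p):.1f}%, p = {p:.3f}")
# Prints 7.3% vs 20.3%, p = 0.045 (mITT) and 10.2% vs 20.6%, p = 0.111 (ITT),
# consistent with the figures quoted above.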
For frequency estimation in noisy speech or music signals, time-domain methods based on signal processing techniques such as autocorrelation or the average magnitude difference function often do not perform well. As deep neural networks (DNNs) have become feasible, some researchers have attempted, with some success, to improve on signal-processing-based methods by learning on autocorrelation, Fourier transform, or constant-Q filter bank representations. In our approach, blocks of signal samples are input directly to a neural network to perform end-to-end learning. The emergence of subharmonic structure in the posterior vector of the output layer, along with analysis of the filter-like structures emerging in the DNN, shows strong correlations with some signal-processing-based approaches. These DNNs appear to learn a nonlinearly spaced frequency representation in the first layer, followed by comb-like filters. We find that learning representations from raw time-domain signals can achieve performance on par with current state-of-the-art algorithms for frequency estimation in noisy and polyphonic settings. The emergence of subharmonic structure in the posterior vector suggests that existing post-processing techniques such as harmonic product spectra and salience mapping may further improve performance.
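To make the end-to-end setup concrete, here is a minimal sketch assuming a simple feedforward network over raw waveform blocks; the frame length, layer sizes, and number of pitch bins are illustrative choices, not the configuration used in the paper:

import torch
import torch.nn as nn

FRAME_LEN = 1024      # raw time-domain samples per input block (assumed)
N_PITCH_BINS = 360    # quantized frequency classes in the posterior (assumed)

class RawPitchDNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # First layer operates directly on the raw waveform; the abstract
            # reports that such weights resemble a nonlinearly spaced
            # frequency representation.
            nn.Linear(FRAME_LEN, 1024), nn.ReLU(),
            # Deeper layers, where comb-like filter structures are reported.
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, N_PITCH_BINS),
        )

    def forward(self, frames):                      # frames: (batch, FRAME_LEN)
        return self.net(frames).softmax(dim=-1)     # posterior over pitch bins

# Usage: random frames stand in for blocks of audio samples.
posterior = RawPitchDNN()(torch.randn(8, FRAME_LEN))
print(posterior.shape)  # torch.Size([8, 360])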
We propose spoken sentence embeddings which capture both acoustic and linguistic content. While existing work operates at the character, phoneme, or word level, our method learns long-term dependencies by modeling speech at the sentence level. Formulated as an audio-linguistic multitask learning problem, our encoder-decoder model simultaneously reconstructs acoustic and natural language features from audio. Our results show that spoken sentence embeddings outperform phoneme- and word-level baselines on speech recognition and emotion recognition tasks. Ablation studies show that our embeddings can better model high-level acoustic concepts while retaining linguistic content. Overall, our work illustrates the viability of generic, multi-modal sentence embeddings for spoken language understanding.
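A minimal sketch of such an encoder-decoder multitask setup, assuming a GRU encoder over filterbank-style acoustic frames with one acoustic and one word-level linguistic head; all sizes and the specific heads are assumptions for illustration, not the paper's architecture:

import torch
import torch.nn as nn

N_ACOUSTIC = 40    # acoustic features per frame, e.g. log-mel filterbanks (assumed)
EMBED_DIM = 256    # sentence embedding size (assumed)
VOCAB = 10000      # vocabulary for the linguistic head (assumed)

class SpokenSentenceModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: summarizes a variable-length utterance into one vector.
        self.encoder = nn.GRU(N_ACOUSTIC, EMBED_DIM, batch_first=True)
        # Acoustic head: predicts a sentence-level summary of acoustic features.
        self.acoustic_head = nn.Linear(EMBED_DIM, N_ACOUSTIC)
        # Linguistic head: predicts a distribution over words in the sentence.
        self.linguistic_head = nn.Linear(EMBED_DIM, VOCAB)

    def forward(self, frames):                 # frames: (batch, time, N_ACOUSTIC)
        _, h = self.encoder(frames)
        embedding = h.squeeze(0)               # (batch, EMBED_DIM) sentence embedding
        return embedding, self.acoustic_head(embedding), self.linguistic_head(embedding)

# Usage: four spoken sentences of 200 frames each.
emb, acoustic_pred, word_logits = SpokenSentenceModel()(torch.randn(4, 200, N_ACOUSTIC))
print(emb.shape, acoustic_pred.shape, word_logits.shape)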