Liquid hot water (LHW) and alkaline pretreatments of giant reed biomass were compared in terms of digestibility, methane production, and cost-benefit efficiency for electricity generation via anaerobic digestion with a combined heat and power system. Compared with LHW pretreatment, alkaline pretreatment retained more of the dry matter in the giant reed solids because of its less severe conditions. Under their optimal conditions, LHW pretreatment (190 °C, 15 min) and alkaline pretreatment (20 g/L NaOH, 24 h) each improved the glucose yield from giant reed by more than 2-fold, but only alkaline pretreatment significantly (p < 0.05) increased the cumulative methane yield, by 63% over that of untreated biomass (217 L/kg VS). LHW pretreatment yielded negative net electrical energy production because of its high energy input. Alkaline pretreatment achieved 27% higher net electrical energy production than no pretreatment (3859 kJ/kg initial total solids), although reuse of the alkaline liquor is needed to improve the net benefit.
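The headline figures above can be cross-checked with simple arithmetic; the derived values below follow from the stated percentages and baselines and are not quoted directly in the abstract.

```python
# Back-of-envelope check of the reported figures.
# Baselines are taken from the abstract; derived values are estimates.

untreated_methane = 217            # L CH4 / kg VS, untreated biomass
alkaline_methane = untreated_methane * 1.63   # 63% increase -> ~354 L/kg VS

untreated_net_energy = 3859        # kJ / kg initial total solids, no pretreatment
alkaline_net_energy = untreated_net_energy * 1.27  # 27% higher -> ~4901 kJ/kg TS

print(round(alkaline_methane))     # ~354
print(round(alkaline_net_energy))  # ~4901
```

These derived values (roughly 354 L CH4/kg VS and 4901 kJ/kg TS) are implied by the abstract's relative improvements rather than reported explicitly.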
Building an effective speech recognition system typically requires large amounts of transcribed data, which are expensive to collect. To overcome this problem, many unsupervised pretraining methods have been proposed. Among them, Masked Predictive Coding (MPC) achieved significant improvements on various speech recognition datasets with a BERT-like masked-reconstruction loss and a Transformer backbone. However, many aspects of MPC have yet to be fully investigated. In this paper, we conduct a further study of MPC, focusing on three important aspects: the effect of the speaking style of the pretraining data, the extension of MPC to streaming models, and strategies for better transferring learned knowledge from the pretraining stage to downstream tasks. The experimental results demonstrate that pretraining data whose speaking style matches the downstream task is more useful for recognition. A unified training objective combining APC and MPC provided an 8.46% relative error reduction for a streaming model trained on HKUST. In addition, combining target-data adaptation with layerwise discriminative training facilitated knowledge transfer in MPC, realizing a 3.99% relative error reduction on AISHELL over a strong baseline.
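The BERT-like masked-reconstruction objective behind MPC can be sketched as follows. This is a minimal illustration, not the paper's implementation: the mask ratio, the zero-fill corruption, the L1 loss, and the toy linear encoder standing in for the Transformer backbone are all assumptions made for brevity.

```python
import torch
import torch.nn as nn

def mpc_loss(features: torch.Tensor, encoder: nn.Module,
             mask_ratio: float = 0.15) -> torch.Tensor:
    """MPC-style loss sketch. features: (batch, time, dim) acoustic frames."""
    # Randomly select frames to mask (hypothetical 15% ratio).
    mask = torch.rand(features.shape[:2]) < mask_ratio      # (batch, time)
    corrupted = features.clone()
    corrupted[mask] = 0.0                                   # corrupt masked frames
    predicted = encoder(corrupted)                          # reconstruct all frames
    # Reconstruction loss is computed only on the masked positions.
    return nn.functional.l1_loss(predicted[mask], features[mask])

# Toy usage: a single linear layer stands in for the Transformer encoder.
torch.manual_seed(0)
enc = nn.Linear(80, 80)
x = torch.randn(2, 100, 80)   # batch of 2 utterances, 100 frames, 80-dim features
loss = mpc_loss(x, enc)
```

Because the loss is defined only on masked frames, the encoder is pushed to infer missing acoustics from surrounding context, which is what makes the learned representations transferable to downstream recognition.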