The betaproteobacterial degradation specialist Aromatoleum aromaticum EbN1T utilizes several plant-derived 3-phenylpropanoids coupled to denitrification. In vivo responsiveness of A. aromaticum EbN1T was studied by exposing non-adapted cells to distinct pulses (spanning 100 μM to 0.1 nM) of 3-phenylpropanoate, cinnamate, 3-(4-hydroxyphenyl)propanoate, or p-coumarate. Time-resolved, targeted transcript analyses via qRT-PCR of four selected 3-phenylpropanoid genes revealed a response threshold of 30–50 nM for p-coumarate and 1–10 nM for the other three tested 3-phenylpropanoids. At these concentrations, transmembrane effector equilibration is attained by passive diffusion rather than by active uptake via the ABC transporter that presumably serves the studied 3-phenylpropanoids as well as benzoate. Highly substrate-specific enzyme formation (EbA5316–21) for the shared peripheral degradation pathway putatively involves the predicted TetR-type transcriptional repressor PprR. Accordingly, relative transcript abundances of ebA5316–21 are lower in succinate- and benzoate-grown wildtype cells than in an unmarked in-frame ΔpprR mutant. In trans complementation of pprR in the ΔpprR background restored wildtype-like transcript levels. When adapted to p-coumarate, the three genotypes had similar relative transcript abundances of ebA5316–21, despite a significantly longer lag phase of the pprR-complemented mutant (∼100-fold higher pprR transcript level than in the wildtype). Notably, transcript levels of ebA5316–21 were ∼10–100-fold higher in p-coumarate- versus succinate- or benzoate-adapted cells across all three genotypes, possibly indicating the additional involvement of an as yet unknown transcriptional regulator. Furthermore, physiological, transcriptional, and (aromatic) acyl-CoA ester intermediate analyses of the wildtype and the ΔpprR mutant grown with binary substrate mixtures suggest a mode of catabolite repression superordinate to PprR.
IMPORTANCE Lignin is a ubiquitous hetero-biopolymer built from a suite of 3-phenylpropanoid subunits. It not only accounts for more than 30% of the global plant dry material, but lignin-related compounds are also increasingly released into the environment from anthropogenic sources, e.g., by wastewater effluents from the paper and pulp industry. Hence, following biological or industrial decomplexation of lignin, vast amounts of structurally diverse 3-phenylpropanoids enter terrestrial and aquatic habitats, where they serve as substrates for microbial degradation. This raises the question of which signaling systems environmental bacteria employ to detect these nutritionally attractive compounds and to adjust their catabolism accordingly. Moreover, determining in vivo response thresholds of an anaerobic degradation specialist such as A. aromaticum EbN1T for these aromatic compounds provides insights into the environmental fate of the latter, i.e., when they could escape biodegradation because ambient concentrations are too low.
The evaluation of question answering models compares ground-truth annotations with model predictions. However, as of today, this comparison is mostly based on lexical overlap and therefore misses answers that have no lexical overlap but are still semantically similar, thus treating correct answers as false. This underestimation of the true performance of models hinders user acceptance in applications and complicates a fair comparison of different models. Therefore, there is a need for an evaluation metric that is based on semantics instead of pure string similarity. In this short paper, we present SAS, a cross-encoder-based metric for the estimation of semantic answer similarity, and compare it to seven existing metrics. To this end, we create an English and a German three-way annotated evaluation dataset containing pairs of answers along with human judgment of their semantic similarity, which we release along with an implementation of the SAS metric and the experiments. We find that semantic similarity metrics based on recent transformer models correlate much better with human judgment than traditional lexical similarity metrics on our two newly created datasets and one dataset from related work.
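The lexical metrics that the abstract contrasts with SAS can be illustrated with token-level F1, the standard SQuAD-style QA measure. The sketch below is illustrative only, not the paper's implementation: it shows how a semantically correct answer with no word overlap scores zero lexically, which is exactly the failure mode a cross-encoder-based semantic metric such as SAS is designed to avoid.

```python
from collections import Counter

def lexical_f1(prediction: str, ground_truth: str) -> float:
    """Token-overlap F1 between a predicted and a gold answer (SQuAD-style)."""
    pred_tokens = prediction.lower().split()
    gt_tokens = ground_truth.lower().split()
    # Multiset intersection counts how many tokens the two answers share.
    overlap = sum((Counter(pred_tokens) & Counter(gt_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)

# Semantically equivalent answers with zero lexical overlap score 0.0,
# i.e., they would be counted as wrong by a purely lexical metric:
print(lexical_f1("JFK", "John Fitzgerald Kennedy"))       # 0.0
# Partial overlap yields a partial score:
print(lexical_f1("the 35th president", "35th president"))  # 0.8
```

A semantic metric instead feeds both answer strings jointly through a trained cross-encoder model and uses its output as the similarity score, so surface-form differences like "JFK" vs. "John Fitzgerald Kennedy" no longer force a zero.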