Protein derived from purple wheat bran was hydrolyzed using alcalase protease to produce antioxidant peptides. Purple wheat bran protein (PWBP) hydrolysates were fractionated by size-exclusion (G-25) and ion-exchange chromatography to identify the structures of the antioxidant peptides. The free radical scavenging activity of the purified peptides was evaluated using superoxide anion radical-scavenging and Trolox equivalent antioxidant capacity (TEAC) assays. The results demonstrated that purple wheat bran peptide fraction F4-4 exhibited the highest antioxidant activity among the hydrolysates. The peptides in F4-4 were identified as Cys-Gly-Phe-Pro-Gly-His-Cys, Gln-Ala-Cys, Arg-Asn-Phe, Ser-Ser-Cys, and Trp-Phe by high-performance liquid chromatography coupled with an Orbitrap Elite™ mass spectrometer (LC–MS/MS). Antioxidant peptides 2 and 4 remained stable at temperatures below 80 °C. These peptides also demonstrated good digestive stability in an in vitro simulated gastrointestinal digestion system.
Generative large language models (LLMs), e.g., ChatGPT, have demonstrated remarkable proficiency across several NLP tasks such as machine translation, question answering, text summarization, and natural language understanding. Recent research has shown that using ChatGPT to assess the quality of machine translation (MT) achieves state-of-the-art performance at the system level but performs poorly at the segment level. To further improve the performance of LLMs on MT quality assessment, we investigated several prompting methods. Our results indicate that by combining Chain-of-Thought and Error Analysis into a new prompting method called Error Analysis Prompting, LLMs like ChatGPT can generate human-like MT evaluations at both the system and segment level. Additionally, we discovered some limitations of ChatGPT as an MT evaluator, such as unstable scoring and biases when provided with multiple translations in a single query. Our findings aim to offer preliminary guidance for appropriately evaluating translation quality with ChatGPT, along with practical tips for designing prompts for in-context learning. We anticipate that this report will shed new light on advancing the field of translation evaluation with LLMs by enhancing both the accuracy and reliability of metrics. The project can be found at https://github.com/Coldmist-Lu/ErrorAnalysis_Prompt.
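The combination of step-by-step reasoning and explicit error counting can be sketched as a prompt template. The wording, error categories, and scoring weights below are illustrative assumptions, not the authors' exact prompts (those are in the linked repository):

```python
# Hypothetical sketch of an Error Analysis Prompting style template:
# ask the model to first enumerate translation errors (Chain-of-Thought),
# then derive a score from the error counts. All wording here is assumed.

def build_ea_prompt(source: str, reference: str, hypothesis: str) -> str:
    """Build a two-step evaluation prompt for a single MT segment."""
    return (
        "You are evaluating a machine translation.\n"
        f"Source: {source}\n"
        f"Reference: {reference}\n"
        f"Translation: {hypothesis}\n"
        "Step 1: List all major and minor errors in the translation.\n"
        "Step 2: Compute a score out of 100, deducting 5 points per "
        "major error and 1 point per minor error.\n"
    )

prompt = build_ea_prompt(
    source="Der Hund schläft.",
    reference="The dog is sleeping.",
    hypothesis="The cat is sleeping.",
)
print(prompt)
```

Evaluating one translation per query, as in this sketch, also sidesteps the multi-translation bias noted above.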
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.