The evolutionarily divergent class of kinetoplastid organisms has a set of unconventional kinetochore proteins that drive chromosome segregation, but it is unclear which components interact with spindle microtubules. Llauró et al. now identify KKT4 as the first microtubule-binding kinetochore protein in Trypanosoma brucei, a major human pathogenic parasite.
This work proposes a natural language stegosystem for Twitter, modifying tweets as they are written to hide 4 bits of payload per tweet, a greater payload than previous systems have achieved. The system, CoverTweet, includes novel components alongside others already developed in the literature. We believe that the task of transforming covers during embedding is equivalent to monolingual machine translation (paraphrasing), and we use this equivalence to define a distortion measure based on statistical machine translation methods. The system uses this distortion measure to rank possible tweet paraphrases with a hierarchical language model, and uses human interaction as a second distortion measure to pick the best paraphrase. The hierarchical language model is designed to model the specific language of the covers, which in this setting is the language of the Twitter user who is embedding; this is a change from previous work, where general-purpose language models have been used. We evaluate our system by testing the output against human judges, and show that humans are unable to distinguish stego tweets from cover tweets any better than random guessing.
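As an illustration of how a fixed per-tweet payload can be carried purely by paraphrase choice, the sketch below assigns each candidate paraphrase to one of 16 bins via a hash and selects a candidate whose bin matches the 4 payload bits. This is a hypothetical toy scheme for intuition only, not CoverTweet's actual embedding algorithm; all function names here are invented.

```python
import hashlib

def embed_bits(paraphrases, payload_bits):
    """Toy embedding: pick a paraphrase whose 4-bit hash bin matches
    the payload bits (e.g. "1010"). Returns None if no candidate
    falls into the required bin."""
    target = int(payload_bits, 2)
    for p in paraphrases:
        # Low 4 bits of the first hash byte define the bin (0-15).
        if hashlib.sha256(p.encode()).digest()[0] & 0x0F == target:
            return p
    return None

def extract_bits(stego_tweet):
    """Receiver recomputes the same hash bin to recover the 4 bits."""
    return format(hashlib.sha256(stego_tweet.encode()).digest()[0] & 0x0F, "04b")
```

With enough paraphrase candidates per cover, every 4-bit bin is hit with high probability, which is why a richer paraphrase generator directly raises achievable payload.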
The quality of recorded music is often highly disputed. To gain insight into the dimensions of quality perception, subjective and objective evaluation of musical program material, extracted from commercial CDs, was undertaken. It was observed that perception of audio quality and liking of the music can be affected by separate factors. Familiarity with stimuli affected like ratings, while quality ratings were most associated with signal features related to perceived loudness and dynamic range compression. The effect of listener expertise was small. Additionally, the sonic attributes describing quality ratings were gathered and indicate a diverse lexicon relating to timbre, space, defects, and other concepts. The results also suggest that while the perceived quality of popular music may have decreased over recent years, like ratings were unaffected.

INTRODUCTION
In the context of recorded sound there is great debate over which parameters influence the perception of quality or how quality should be defined. In the context of product development, sound quality has been defined as the "result of an assessment of the perceived auditory nature of a sound with respect to its desired nature" [1]. In order to assess the audio quality of a recording, the requirements for quality must be identified as well as the inherent characteristics of the audio signal. These characteristics must then be measured and used to estimate quality, which is then optimized subject to various constraints, e.g., the available budget, human resources, and projected time-to-market. This paper details the findings of a study into the perception of quality in commercial music productions, attempting to ascertain which objective and subjective parameters are involved as well as the relative importance of these parameters.

ASSESSMENT OF QUALITY
A variety of theories and methodologies exist for the assessment of quality in many different fields. A number of these can be applied to reproduced sound.
In this context, quality judgments can be considered to be based on technical properties of the signal, such as bandwidth or distortion, or based on hedonic preference, which might be influenced by personal aspects such as familiarity. International standards exist regarding the measurement of audio quality based on determining the level of degradation from a reference [2]. These procedures are formulated under the assumption that a reference item exists, which can be used as an example of greatest quality, and test items are then compared against this reference. This usually applies to systems where the reference is formed from the original version of the program material and the test samples under evaluation are copies that have undergone some form of processing. The evaluation of systems such as audio codecs [3] is a good example of this type of approach. In these circumstances, it is not strictly the inherent quality of the program material that is being measured but rather the perceived degradation in quality of the signal, after being subject to destructive pr...
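One simple, widely used objective proxy for dynamic range compression, of the kind the quality ratings above were associated with, is the crest factor (peak-to-RMS ratio): heavily limited masters exhibit low crest factors. The sketch below is illustrative only and is not necessarily a feature the study itself computed.

```python
import math

def crest_factor_db(samples):
    """Crest factor in dB: ratio of peak amplitude to RMS level.
    A square wave gives 0 dB; a pure sine gives about 3.01 dB;
    uncompressed music is typically far higher."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)
```

For example, a square-wave-like signal `[1, -1, 1, -1]` yields 0 dB, while the sparser signal `[1, 0, -1, 0]` yields about 3.01 dB, reflecting its larger peak-to-average spread.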
Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that cover a variety of research fields such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article/s. The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency–Inverse Document Frequency and PubMed Related Articles) had similar overall performances. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new powerful techniques for title and title/abstract-based search engines for relevant articles in biomedical research.
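Of the three baselines above, Okapi BM25 can be sketched compactly. The following is a minimal, illustrative implementation over pre-tokenized documents using the standard BM25 formula with common defaults (k1 = 1.2, b = 0.75); it is not the consortium's evaluation code.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.2, b=0.75):
    """Score each tokenized document in `docs` against `query_terms`
    with Okapi BM25. Returns one score per document."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N  # average document length
    df = Counter()                          # document frequency per term
    for d in docs:
        for t in set(d):
            df[t] += 1
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            # Term-frequency saturation (k1) and length normalization (b).
            score += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(score)
    return scores
```

In a title/abstract setting, the seed article's terms play the role of the query and candidate PubMed articles are the documents; TF-IDF differs mainly in lacking BM25's saturation and length-normalization terms.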
Any serious steganography system should make use of coding. Here, we investigate the performance of our prior linguistic steganographic method for tweets, combined with perfect coding. We propose distortion measures for linguistic steganography, the first of their kind, and investigate the best embedding strategy for the steganographer. These distortion measures are tested with fully automatically generated stego objects, as well as stego tweets filtered by a human operator. We also observe a square-root law of capacity in this linguistic stegosystem.