The Bombyx mori macula-like virus (BmMLV) is a member of the genus Maculavirus, family Tymoviridae, and contains a positive-sense single-stranded RNA genome. Previously, we reported that almost all B. mori-derived cell lines are contaminated with BmMLV via an unknown infection route. Because B. mori-derived cell lines are used in the baculovirus expression vector system, contamination with BmMLV poses a serious safety risk for the production of recombinant proteins. In this study, to determine how effectively BmMLV can be inactivated, viruses were treated at various temperatures and with gamma and ultraviolet (UV) irradiation. After these treatments, the virus solutions were inoculated into BmMLV-free BmVF cells. At 7 days postinoculation, the amount of virus in the cells was evaluated by real-time reverse transcription PCR. For heat treatment, the virus tolerated 56°C for 3 h, whereas infectivity disappeared after treatment at 75°C for 1 h. For gamma irradiation, the virus was relatively stable at 1 kGy, but its infectivity was entirely eliminated at a dose of 10 kGy. For 254 nm UV-C treatment, the virus remained active at doses below 120 mJ/cm², but its infectivity was completely lost above 140 mJ/cm². These results provide quantitative evidence of the potential for BmMLV inactivation under a variety of physical conditions.
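For context on the UV-C doses above, the delivered dose is the product of lamp irradiance at the sample surface and exposure time (dose in mJ/cm² = irradiance in mW/cm² × time in s). The minimal sketch below illustrates how long an exposure would be needed to reach the 120 and 140 mJ/cm² doses reported here; the irradiance value and the helper function are assumptions for illustration only, not parameters from the study.

import math

def exposure_time_s(target_dose_mj_per_cm2: float, irradiance_mw_per_cm2: float) -> float:
    """Return the exposure time (seconds) needed to deliver a target UV-C dose,
    given a constant irradiance at the sample surface."""
    return target_dose_mj_per_cm2 / irradiance_mw_per_cm2

if __name__ == "__main__":
    irradiance = 1.5  # mW/cm^2 at the sample surface (assumed example value)
    for dose in (120.0, 140.0):  # doses bracketing the inactivation threshold reported above
        t = exposure_time_s(dose, irradiance)
        print(f"{dose:.0f} mJ/cm^2 at {irradiance} mW/cm^2 -> {t:.0f} s (~{t / 60:.1f} min)")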
We present an audio-visual model for generating food texture sounds from silent eating videos. We designed a deep network-based model that takes the visual features of detected faces as input and outputs a magnitude spectrogram aligned with the visual stream. Because generating raw waveform samples directly from a given input visual stream is challenging, in this study we used the Griffin-Lim algorithm for phase recovery from the predicted magnitude spectrogram and generated raw waveform samples with the inverse short-time Fourier transform. Additionally, we produced waveforms from these magnitude spectrograms using an example-based synthesis procedure. To train the model, we created a dataset containing several food autonomous sensory meridian response (ASMR) videos. We evaluated our model on this dataset and found that the predicted sound features exhibit appropriate temporal synchronization with the visual inputs. Our subjective evaluation experiments demonstrated that the predicted sounds are sufficiently realistic to fool participants in a "real" or "fake" psychophysical experiment.

Index Terms: Multi-modal deep neural network, autonomous sensory meridian response, eating sound generation
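As a rough illustration of the magnitude-to-waveform step described above, the sketch below reconstructs a waveform from a magnitude spectrogram with the Griffin-Lim algorithm via librosa. The spectrogram here is computed from a placeholder test tone rather than predicted by the model, and the sample rate and STFT parameters are assumed values, not those used in the paper.

import numpy as np
import librosa

# Placeholder "predicted" magnitude spectrogram: in the described model this
# would come from the visual-to-spectrogram network; here it is derived from
# a 1-second 440 Hz test tone so the example is self-contained.
sr = 16000                     # sample rate (assumed)
n_fft, hop = 512, 128          # STFT parameters (assumed)
y = 0.1 * np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
magnitude = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop))

# Griffin-Lim iteratively estimates a phase consistent with the given
# magnitude, then applies the inverse short-time Fourier transform to
# recover raw waveform samples.
waveform = librosa.griffinlim(magnitude, n_iter=60, n_fft=n_fft, hop_length=hop)
print(waveform.shape)          # reconstructed raw waveform samples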