News podcasts are a popular medium for staying informed and diving deep into news topics. Today, most podcasts are handcrafted by professionals. In this work, we advance the state of the art in automatically generated podcasts, making use of recent advances in natural language processing and text-to-speech technology. We present NewsPod, an automatically generated, interactive news podcast. The podcast is divided into segments, each centered on a news event and structured as a Question and Answer conversation whose goal is to engage the listener. A key aspect of the design is the use of distinct voices for each role (questioner, responder) to better simulate a conversation. Another novel aspect of NewsPod allows listeners to interact with the podcast by asking their own questions and receiving automatically generated answers. We validate the soundness of this system design through two usability studies, focused on evaluating the narrative style and the interactions with the podcast, respectively. We find that participants prefer NewsPod over a baseline, with 80% stating they would use the system in the future.
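To make the segment design described in the abstract concrete, below is a minimal sketch of how a NewsPod-style segment could be represented: each segment covers one news event and alternates questioner/responder turns, with a distinct synthetic voice per role. The dataclass names, the voice identifiers, and the synthesize() stub are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch (assumed structure): one segment per news event, alternating
# questioner/responder turns, each role rendered with its own TTS voice.
from dataclasses import dataclass
from typing import List

# Hypothetical mapping of conversational roles to TTS voice identifiers.
ROLE_VOICES = {"questioner": "voice-a", "responder": "voice-b"}

@dataclass
class Turn:
    role: str   # "questioner" or "responder"
    text: str   # the sentence(s) spoken in this turn

@dataclass
class Segment:
    event_title: str
    turns: List[Turn]

def render_segment(segment: Segment, tts) -> List[bytes]:
    """Render each turn with the voice assigned to its role.

    `tts` stands in for any text-to-speech backend exposing a
    synthesize(text, voice) -> audio-bytes call; it is a placeholder,
    not a specific library API.
    """
    audio_clips = []
    for turn in segment.turns:
        voice = ROLE_VOICES[turn.role]
        audio_clips.append(tts.synthesize(turn.text, voice=voice))
    return audio_clips
```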
Agile software teams are expected to follow a number of specific Team Practices (TPs) during each iteration, such as estimating the effort ("points") required to complete user stories and coordinating the management of the codebase with the delivery of features. For software engineering instructors trying to teach such TPs to student teams, manually auditing whether teams are following the TPs and improving over time is tedious, time-consuming, and error-prone. It is even more difficult when those TPs involve two or more tools. For example, starting work on a feature in a project-management tool such as Pivotal Tracker should usually be followed relatively quickly by the creation of a feature branch on GitHub. Merging a feature branch on GitHub should usually be followed relatively quickly by deploying the new feature to a staging server for customer feedback. Few systems are designed specifically to audit such TPs, and existing ones, as far as we know, are limited to a single specific tool. We present Bluejay, an open-source, extensible platform that uses the APIs of multiple tools to collect raw data, synthesize it into TP measurements, and present dashboards for auditing the TPs. A key insight in Bluejay's design is that TPs can be expressed in terminology similar to that used for modeling and auditing Service Level Agreement (SLA) compliance. Bluejay therefore builds on mature tools from that ecosystem and adapts them for describing, auditing, and reporting on TPs. Bluejay currently consumes data from five widely used development tools and can be customized by connecting it to any service with a REST API. A video showcase is available at governify.io/showcase/bluejay.
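As an illustration of the cross-tool TP audits the abstract describes, here is a minimal sketch of the "story started in the project-management tool should be followed relatively quickly by a feature branch on GitHub" check, phrased SLA-style as a threshold over event timestamps. The event dictionaries, the 24-hour window, and the function name are assumptions for illustration, not Bluejay's actual rule format or API.

```python
# Sketch (assumed rule format): audit one TP by comparing timestamps
# collected from two different tool APIs against an agreed threshold.
from datetime import datetime, timedelta

MAX_DELAY = timedelta(hours=24)  # assumed threshold for "relatively quickly"

def branch_created_in_time(story_started_at: datetime,
                           branch_events: list[dict]) -> bool:
    """Return True if any feature-branch creation event falls within
    MAX_DELAY of the story being started in the project-management tool."""
    for event in branch_events:
        delay = event["created_at"] - story_started_at
        if timedelta(0) <= delay <= MAX_DELAY:
            return True
    return False

# Example: raw data as it might be synthesized from the two tools' APIs.
story_started = datetime(2021, 3, 1, 9, 0)
branches = [{"name": "feature/login",
             "created_at": datetime(2021, 3, 1, 15, 30)}]
print(branch_created_in_time(story_started, branches))  # True: 6.5 h < 24 h
```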
After significant earthquakes, individuals and media agencies post images on social media platforms, owing to the widespread use of smartphones. These images can provide information about shaking damage in the earthquake region to both the public and the research community, and can potentially guide rescue work. This paper presents an automated way to extract images of damaged buildings after earthquakes from social media platforms such as Twitter, and thus to identify the particular user posts containing such images. Using transfer learning and ~6500 manually labelled images, we trained a deep learning model to recognize images with damaged buildings in the scene. The trained model achieved good performance when tested on newly acquired images of earthquakes at different locations and when run in near real-time on the Twitter feed after the 2020 M7.0 earthquake in Turkey. Furthermore, to better understand how the model makes decisions, we also implemented the Grad-CAM method to visualize the important regions of the images that contribute to the decision.
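For readers unfamiliar with the transfer-learning setup the abstract refers to, below is a minimal sketch of fine-tuning a pretrained image classifier for the binary damaged-building task. The choice of ResNet-50, the frozen backbone, and the hyperparameters are assumptions; the paper's exact architecture and training configuration are not given in the abstract.

```python
# Sketch (assumed setup): ImageNet-pretrained backbone, frozen weights,
# and a new 2-class head (damaged building present / absent).
import torch
import torch.nn as nn
from torchvision import models

# Load a pretrained backbone and keep its convolutional weights fixed.
model = models.resnet50(weights="IMAGENET1K_V1")
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer with a 2-class head trained
# on the ~6500 manually labelled images.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# A standard training loop over a DataLoader of (image, label) batches
# would go here; at inference time, images pulled from the Twitter feed
# are passed through model(batch) and the argmax over the two logits
# flags posts that show damaged buildings.
```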