Since the birth of computers and networks, and fuelled by pervasive computing and ubiquitous connectivity, the amount of data stored and transmitted has grown exponentially over the years. This demand calls for new storage solutions, and one promising medium is DNA. DNA-based storage offers numerous advantages, including very high information density and long-term stability. However, the question of how data can be retrieved from a DNA-based archive still remains. In this paper, we address this question by proposing a new storage solution that relies on molecular communication, and in particular on bacterial nanonetworks. Our solution allows digitally encoded information to be stored in non-motile bacteria, which form an archival architecture of clusters, and to be later retrieved by engineered motile bacteria whenever read operations are needed. We conducted extensive simulations to determine the reliability of data retrieval from non-motile storage clusters placed at different locations. To assess the feasibility of our solution, we also conducted wet lab experiments showing that bacterial nanonetworks can effectively retrieve a simple message, such as "Hello World", by conjugation with non-motile bacteria, and finally mobilize towards a final destination.
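The abstract does not specify how the digital message is encoded before being placed in the archival bacteria. As a purely illustrative sketch, the following assumes a simple two-bits-per-nucleotide mapping (00→A, 01→C, 10→G, 11→T), a common convention in DNA data storage, to show how a short message such as "Hello World" could be converted to a base sequence and back; the function names and mapping are assumptions, not the authors' scheme.

```python
# Illustrative only: the paper does not specify its encoding; this assumes a
# fixed 2-bits-per-base mapping (00->A, 01->C, 10->G, 11->T).
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {v: k for k, v in BASE_FOR_BITS.items()}

def encode_message(text: str) -> str:
    """Map a UTF-8 string to a nucleotide sequence, 4 bases per byte."""
    bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode_message(sequence: str) -> str:
    """Invert encode_message: every 4 bases become one byte."""
    bits = "".join(BITS_FOR_BASE[base] for base in sequence)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

if __name__ == "__main__":
    seq = encode_message("Hello World")
    print(seq)  # starts with CAGACGCCCGTACGTACGTT...
    assert decode_message(seq) == "Hello World"
```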
Signed Language Processing (SLP) concerns the automated processing of signed languages, the main means of communication of Deaf and hearing-impaired individuals. SLP features many different tasks, ranging from sign recognition to translation and production of signed speech, but it has been overlooked by the NLP community thus far. In this paper, we bring attention to the task of modelling the phonology of sign languages. We leverage existing resources to construct a large-scale dataset of American Sign Language signs annotated with six different phonological properties. We then conduct an extensive empirical study to investigate whether data-driven end-to-end and feature-based approaches can be optimised to automatically recognise these properties. We find that, despite the inherent challenges of the task, graph-based neural networks that operate over skeleton features extracted from raw videos succeed at the task to varying degrees. Most importantly, we show that this performance holds even for signs unobserved during training.
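To make the described setup concrete, here is a minimal sketch of a graph-convolutional classifier over pose-skeleton sequences with one output head per phonological property. This is not the authors' architecture: the joint count, feature sizes, property names, class counts, and the plain dense graph convolution below are placeholder assumptions.

```python
# Toy skeleton-graph classifier with six multi-class property heads.
import torch
import torch.nn as nn

NUM_JOINTS = 27          # assumed number of skeleton keypoints per frame
IN_FEATURES = 3          # assumed (x, y, confidence) per keypoint
PROPERTY_CLASSES = {     # six phonological properties; class counts are made up
    "handshape": 50, "location": 30, "movement": 20,
    "orientation": 8, "selected_fingers": 8, "flexion": 8,
}

class SkeletonGCN(nn.Module):
    def __init__(self, adjacency: torch.Tensor, hidden: int = 64):
        super().__init__()
        # Row-normalised adjacency with self-loops for the skeleton graph.
        a = adjacency + torch.eye(NUM_JOINTS)
        self.register_buffer("a_hat", a / a.sum(dim=1, keepdim=True))
        self.gcn1 = nn.Linear(IN_FEATURES, hidden)
        self.gcn2 = nn.Linear(hidden, hidden)
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden, n) for name, n in PROPERTY_CLASSES.items()}
        )

    def forward(self, x):                             # x: (batch, frames, joints, feats)
        x = torch.relu(self.gcn1(self.a_hat @ x))     # spatial graph convolution
        x = torch.relu(self.gcn2(self.a_hat @ x))
        x = x.mean(dim=(1, 2))                        # pool over frames and joints
        return {name: head(x) for name, head in self.heads.items()}

# Usage with a random 32-frame clip; real edges would follow the body skeleton.
adjacency = torch.zeros(NUM_JOINTS, NUM_JOINTS)
model = SkeletonGCN(adjacency)
logits = model(torch.randn(1, 32, NUM_JOINTS, IN_FEATURES))
print({name: out.shape for name, out in logits.items()})
```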
Artificial agents, particularly humanoid robots, interact with their environment, objects, and people using cameras, actuators, and physical presence. Their communication methods are often pre-programmed, limiting their actions and interactions. Our research explores the acquisition of non-verbal communication skills through learning from demonstrations, with potential applications in sign language comprehension and expression. In particular, we focus on imitation learning for artificial agents, exemplified by teaching a simulated humanoid American Sign Language. We use computer vision and deep learning to extract information from videos, and reinforcement learning to enable the agent to replicate the observed actions. Unlike other methods, our approach eliminates the need for additional hardware to acquire information. We demonstrate how the combination of these techniques offers a viable way to learn sign language. Our methodology successfully teaches five different signs involving the upper body (i.e., arms and hands). This research paves the way for advanced communication skills in artificial agents.
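The abstract does not give the reward used to drive the reinforcement learner, so the following is only a generic pose-imitation reward of the kind commonly used in this setting: the agent is rewarded for matching, at each timestep, the upper-body keypoints extracted from the demonstration video. The joint set, scale factor, and the placeholder environment in the usage comment are assumptions.

```python
# Generic pose-imitation reward (not the authors' exact formulation).
import numpy as np

def imitation_reward(agent_joints: np.ndarray,
                     demo_joints: np.ndarray,
                     scale: float = 5.0) -> float:
    """Reward in (0, 1]: exp(-scale * mean joint-position error) at one timestep.

    agent_joints, demo_joints: (num_upper_body_joints, 3) arrays of 3D positions,
    e.g. shoulders, elbows, wrists, and finger keypoints extracted from video.
    """
    error = np.linalg.norm(agent_joints - demo_joints, axis=-1).mean()
    return float(np.exp(-scale * error))

# Sketch of use inside a training loop (env, policy, demo_trajectory are placeholders):
# for t in range(episode_length):
#     action = policy(observation)
#     observation, agent_joints = env.step(action)
#     reward = imitation_reward(agent_joints, demo_trajectory[t])
```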