Handwriting learning delays should be addressed early to prevent them from worsening and having long-lasting consequences on children’s lives. Ideally, proper training should begin even before children learn to write. This work presents a novel method to reveal potential handwriting problems at the pre-literacy stage, based on the analysis of drawings rather than word production. Two hundred forty-one kindergartners drew on a tablet, and from their symbol drawings we computed features known to be distinctive of poor handwriting. We verified that abnormal feature patterns reflected abnormal drawings, and found that they corresponded to experts’ evaluations of the risk of developing a learning delay in the graphical sphere. A machine learning model discriminated children at risk with 0.75 sensitivity and 0.76 specificity. Finally, we explain why the algorithm considered each child at risk, informing teachers of the specific weaknesses that need training. Thanks to this system, early intervention targeting specific learning delays will finally be possible.
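For readers unfamiliar with the reported metrics, the following minimal sketch (not the authors' code; the labels and predictions are invented for illustration) shows how sensitivity and specificity are computed from binary at-risk predictions:

```python
# Illustrative only: sensitivity = TP / (TP + FN), specificity = TN / (TN + FP),
# with label 1 meaning "at risk of a handwriting learning delay".
def sensitivity_specificity(y_true, y_pred):
    """Return (sensitivity, specificity) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical toy example: 4 at-risk and 4 not-at-risk children.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # 0.75 0.75
```

Sensitivity measures how many truly at-risk children the screening catches, which is typically the critical quantity for early intervention.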
Neural Architecture Search (NAS) is the process of automating architecture engineering, i.e., searching for the best deep learning configuration. One of the main NAS approaches proposed in the literature, Progressive Neural Architecture Search (PNAS), searches for architectures with a sequential model-based optimization strategy: it defines a common recursive structure to generate the networks, whose number of building blocks grows through the iterations. However, NAS algorithms are generally designed for an idealized setting, without considering the needs and technical constraints of practical applications. In this paper, we propose a new architecture search method, Pareto-Optimal Progressive Neural Architecture Search (POPNAS), that combines the benefits of PNAS with a time-accuracy Pareto optimization. POPNAS adds a time predictor to the existing approach to carry out a joint prediction of training time and accuracy for each candidate neural network, and searches along the Pareto front. This allows us to reach a trade-off between accuracy and training time, identifying neural network architectures with competitive accuracy at a drastically reduced training time.
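The core selection step can be illustrated with a small sketch (this is not the POPNAS implementation; the candidate names and predicted values are invented): given each candidate's predicted accuracy (higher is better) and predicted training time (lower is better), keep only the non-dominated candidates, i.e., the Pareto front.

```python
# Illustrative Pareto-front filter over (predicted accuracy, predicted time).
# A candidate is dominated if another is at least as good on both objectives
# and strictly better on at least one.
def pareto_front(candidates):
    """candidates: list of (name, predicted_accuracy, predicted_time)."""
    front = []
    for name, acc, time in candidates:
        dominated = any(
            (a >= acc and t <= time) and (a > acc or t < time)
            for _, a, t in candidates
        )
        if not dominated:
            front.append(name)
    return front

# Hypothetical candidates from one expansion step.
candidates = [
    ("net_a", 0.92, 120.0),  # most accurate, slowest
    ("net_b", 0.90, 60.0),   # good trade-off
    ("net_c", 0.88, 80.0),   # dominated by net_b (less accurate, slower)
    ("net_d", 0.85, 30.0),   # fastest
]
print(pareto_front(candidates))  # ['net_a', 'net_b', 'net_d']
```

Restricting the search to the front is what lets the procedure trade a small loss in predicted accuracy for a large reduction in training time.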
In machine learning, differential privacy and federated learning are gaining more and more importance in an increasingly interconnected world. The former provides strict formal guarantees that protect individual privacy when private data are shared; the latter refers to distributed learning techniques in which a central server exchanges information with multiple clients for machine learning purposes. In recent years, many studies have shown that the privacy shields of these systems can be bypassed and the vulnerabilities of machine learning models exploited, causing them to leak the information on which they were trained. In this work, we present the 3DGL framework, an alternative to current federated learning paradigms. Its goal is to share generative models with high levels of ε-differential privacy. In addition, we propose DDP-βVAE, a deep generative model capable of generating synthetic data with high levels of utility and safety for the individual. We evaluate the 3DGL framework based on DDP-βVAE, showing that the overall system is resilient to the principal attacks on federated learning and improves the performance of distributed learning algorithms.
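As background on ε-differential privacy (this sketch shows the standard Laplace mechanism, a textbook building block, not the DDP-βVAE model from the abstract): noise drawn from a Laplace distribution with scale sensitivity/ε is added to a query result, so smaller ε means stronger privacy and larger noise.

```python
import math
import random

# Standard Laplace mechanism for epsilon-differential privacy (illustration).
# Noise scale = sensitivity / epsilon; sampled via the Laplace inverse CDF.
def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Return true_value perturbed with Laplace(0, sensitivity/epsilon) noise."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5  # uniform in (-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Hypothetical usage: privatize a counting query (sensitivity 1).
random.seed(0)
noisy_count = laplace_mechanism(true_value=42.0, sensitivity=1.0, epsilon=0.5)
print(noisy_count)
```

A counting query changes by at most 1 when one individual's record is added or removed, hence sensitivity 1; with ε = 0.5 the noise has scale 2, masking any single individual's contribution.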