Soluble N-ethylmaleimide-sensitive factor attachment protein receptors (SNAREs) are membrane-associated trafficking proteins that confer identity to lipid membranes and facilitate membrane fusion. These functions are achieved through the assembly of Q-SNAREs with a specific cognate R-SNARE into a complex, which drives the fusion of their associated membranes. These SNARE complexes then dissociate so that the Q-SNAREs and R-SNAREs can repeat this cycle. Whilst the basic function of SNAREs has long been appreciated, it is becoming increasingly clear that the cell can control the localisation and function of SNARE proteins through posttranslational modifications (PTMs), such as phosphorylation and ubiquitylation. Although numerous proteomic studies have shown that SNARE proteins are subject to these modifications, little is known about how the modifications regulate SNARE function. It is clear, however, that these PTMs afford cells considerable functional plasticity: SNARE PTMs enable cells to respond to an ever-changing extracellular environment through the rerouting of membrane traffic. In this Review, we summarise key findings regarding SNARE regulation by PTMs and discuss how these modifications reprogramme membrane trafficking pathways.
Learned joint representations of images and text form the backbone of several important cross-domain tasks such as image captioning. Prior work mostly maps both domains into a common latent representation in a purely supervised fashion. This is rather restrictive, however, as the two domains follow distinct generative processes. Therefore, we propose a novel semi-supervised framework, which models shared information between domains and domain-specific information separately. The information shared between the domains is aligned with an invertible neural network. Our model integrates normalizing flow-based priors for the domain-specific information, which allows us to learn diverse many-to-many mappings between the two domains. We demonstrate the effectiveness of our model on diverse tasks, including image captioning and text-to-image synthesis.
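To make the alignment step concrete, below is a minimal PyTorch sketch of how the shared information between the two domains could be mapped with an invertible neural network, as the abstract describes. This is an illustrative sketch, not the paper's implementation: the coupling-layer design and all names and dimensions (`AffineCoupling`, `InvertibleAligner`, `latent_dim=64`, etc.) are assumptions introduced here.

```python
# Hypothetical sketch of invertible alignment between shared image/text codes.
# RealNVP-style affine couplings are one common way to build such a network;
# the abstract does not specify the architecture.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Affine coupling layer: invertible in closed form by construction."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=1)
        y2 = x2 * torch.exp(torch.tanh(s)) + t  # tanh keeps scales bounded
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(y1).chunk(2, dim=1)
        x2 = (y2 - t) * torch.exp(-torch.tanh(s))
        return torch.cat([y1, x2], dim=1)

class InvertibleAligner(nn.Module):
    """Stack of couplings mapping shared image codes to shared text codes."""
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(AffineCoupling(dim) for _ in range(n_layers))

    def forward(self, z_img):
        for layer in self.layers:
            z_img = layer(z_img).flip(dims=[1])  # flip so both halves update
        return z_img

    def inverse(self, z_txt):
        for layer in reversed(self.layers):
            z_txt = layer.inverse(z_txt.flip(dims=[1]))
        return z_txt

# Supervised alignment on paired data: push f(z_img) toward z_txt. The same
# network, run through inverse(), maps text codes back to image codes.
f = InvertibleAligner(dim=64)
z_img, z_txt = torch.randn(8, 64), torch.randn(8, 64)
loss = ((f(z_img) - z_txt) ** 2).mean()
```

Because each coupling layer inverts in closed form, a single network serves both directions of the mapping; diversity in the outputs would then come from sampling the flow-based priors over the domain-specific codes, which this sketch omits.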
One of the key challenges in learning joint embeddings of multiple modalities, e.g. of images and text, is to ensure coherent cross-modal semantics that generalize across datasets. We propose to address this through joint Gaussian regularization of the latent representations. Building on Wasserstein autoencoders (WAEs) to encode the input in each domain, we enforce the latent embeddings to be similar to a Gaussian prior that is shared across the two domains, ensuring compatible continuity of the encoded semantic representations of images and texts. Semantic alignment is achieved through supervision from matching image-text pairs. To show the benefits of our semi-supervised representation, we apply it to cross-modal retrieval and phrase localization. We not only achieve state-of-the-art accuracy, but also significantly better generalization across datasets, owing to the semantic continuity of the latent space.
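As an illustration of the joint Gaussian regularization, here is a hedged sketch of a WAE-style penalty that pulls the latent codes of both domain encoders toward one shared N(0, I) prior. The encoder architectures, the feature dimensions (2048 for image features, 300 for text features), and the `mmd_imq` helper are placeholders, not the paper's code; the inverse multiquadratic kernel is a common choice in WAEs but an assumption here.

```python
# Hypothetical sketch: two encoders regularized toward one shared Gaussian prior.
import torch
import torch.nn as nn

def mmd_imq(z, z_prior, scale=1.0):
    """Maximum mean discrepancy with an inverse multiquadratic kernel."""
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2
        return scale / (scale + d2)
    n = z.size(0)
    k_zz = (kernel(z, z).sum() - n) / (n * (n - 1))              # drop diagonal
    k_pp = (kernel(z_prior, z_prior).sum() - n) / (n * (n - 1))  # drop diagonal
    k_zp = kernel(z, z_prior).mean()
    return k_zz + k_pp - 2 * k_zp

latent_dim = 64  # illustrative
img_encoder = nn.Sequential(nn.Linear(2048, 256), nn.ReLU(), nn.Linear(256, latent_dim))
txt_encoder = nn.Sequential(nn.Linear(300, 256), nn.ReLU(), nn.Linear(256, latent_dim))

img_feats, txt_feats = torch.randn(32, 2048), torch.randn(32, 300)
z_img, z_txt = img_encoder(img_feats), txt_encoder(txt_feats)

# Both embeddings are pulled toward samples from the same N(0, I) prior,
# giving a continuous latent space that is compatible across domains...
prior = torch.randn(32, latent_dim)
reg = mmd_imq(z_img, prior) + mmd_imq(z_txt, prior)
# ...while matching image-text pairs supply the semantic alignment term.
align = ((z_img - z_txt) ** 2).mean()
loss = reg + align
```

The design choice worth noting is that the prior is shared rather than per-domain: both encoders are regularized toward the same Gaussian, which is what makes nearby points in the latent space semantically comparable across images and text.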