N-Methyl-D-aspartate (NMDA)-type glutamate receptors play important roles at developing synapses and in activity-dependent synaptic plasticity. Recent studies in Aplysia suggest that NMDA-like receptors may contribute to some forms of plasticity of sensorimotor synapses accompanying associative learning. At various times after plating neurons in culture, we examined the contribution of NMDA- and alpha-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid (AMPA)-like glutamate receptors to responses evoked in motor cell L7 either by action potentials in sensory neurons (SNs) or by focal applications of glutamate. We found that (D,L)-2-amino-5-phosphonopentanoic acid-sensitive receptors contributed significantly to postsynaptic responses in 1-day cultures but contributed little in the same cultures on day 4. By contrast, postsynaptic responses on day 4 were significantly increased in amplitude by the addition of functional 6-cyano-7-nitroquinoxaline-2,3-dione- or 1-(4-aminophenyl)-4-methyl-7,8-methylenedioxy-5H-2,3-benzodiazepine hydrochloride-sensitive receptors. Receptors with NMDA-like properties are detected on day 1 only at sites on L7 apposed to SN varicosities, and are not detected on L7 cultured alone. The results indicate that changes in the expression and distribution of functional receptors on L7 accompany the formation and maturation of SN synapses. Signals from the SN appear to trigger the expression and clustering of functional NMDA-like receptors at sites contacted by presynaptic structures capable of transmitter release. With time, functional AMPA-like receptors are added to these sites, enhancing synaptic efficacy. The results are consistent with the idea that the expression and sequential clustering of NMDA- and AMPA-type receptors may be essential for the formation and maturation of central synapses.
Recent biological findings show that the static 'lock-and-key' theory is no longer adequate, and that the flexibility of both the receptor and the ligand plays a significant role in understanding the principles of binding-affinity prediction. Based on this mechanism, molecular dynamics (MD) simulations have become a useful tool for investigating the dynamical properties of such molecular systems. However, their computational expense limits the number of reported protein trajectories. To address this insufficiency, we present a novel spatial-temporal pre-training protocol, PretrainMD, which endows the protein encoder with the capacity to capture time-dependent geometric mobility along MD trajectories. Specifically, we introduce two types of self-supervised learning tasks: an atom-level denoising generative task and a protein-level snapshot ordering task. We validate the effectiveness of PretrainMD on the PDBbind dataset under both linear probing and fine-tuning. Extensive experiments show that PretrainMD surpasses most state-of-the-art methods and performs comparably with the rest. More importantly, visualization reveals that the representations learned by pre-training on MD trajectories, without any labels from the downstream task, follow patterns similar to the magnitudes of the binding affinities. This aligns strongly with the fact that the motion of protein-ligand interactions carries key information about their binding. Our work offers a promising perspective on self-supervised pre-training for protein representations at very fine temporal resolutions, and we hope it sheds light on the further use of MD simulations in the biomedical deep learning community.
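The two pretext tasks described above can be sketched with toy data. The function names, numpy implementation, and toy trajectory below are illustrative assumptions, not the paper's actual code: atom-level denoising perturbs snapshot coordinates with Gaussian noise and uses the noise as the regression target, while snapshot ordering uses the permutation that restores temporal order as the classification target.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoising_target(coords, sigma=0.1, rng=rng):
    """Atom-level denoising pretext task: perturb atom coordinates with
    Gaussian noise; the encoder is trained to recover the added noise
    (equivalently, the clean coordinates)."""
    noise = rng.normal(0.0, sigma, size=coords.shape)
    return coords + noise, noise

def ordering_labels(snapshot_times):
    """Protein-level snapshot ordering pretext task: given shuffled MD
    snapshots, the self-supervised label is the permutation that
    restores their temporal order."""
    return np.argsort(snapshot_times)

# Toy trajectory: 4 MD snapshots of a 5-atom system in 3D.
traj = rng.normal(size=(4, 5, 3))

noisy, target = denoising_target(traj[0])       # inputs/targets for task 1
perm = ordering_labels(np.array([0.3, 0.0, 0.2, 0.1]))  # labels for task 2
```

Both tasks produce labels for free from the trajectory itself, which is what makes them usable for pre-training without downstream binding-affinity annotations.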
Geometric deep learning has recently achieved great success in non-Euclidean domains, and learning on 3D structures of large biomolecules is emerging as a distinct research area. However, its efficacy is largely constrained by the limited quantity of structural data. Meanwhile, protein language models trained on large corpora of 1D sequences have shown capabilities that grow with scale across a broad range of applications. Nevertheless, no preceding study has considered combining these different protein modalities to promote the representation power of geometric neural networks. To address this gap, we take the first step toward integrating the knowledge learned by well-trained protein language models into several state-of-the-art geometric networks. We evaluate on a variety of protein representation learning benchmarks, including protein-protein interface prediction, model quality assessment, protein-protein rigid-body docking, and binding affinity prediction, obtaining an overall improvement of 20% over baselines and new state-of-the-art performance. Strong evidence indicates that incorporating protein language models' knowledge enhances geometric networks' capacity by a significant margin and generalizes to complex tasks.
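One simple way to combine the two modalities, sketched below with toy numpy data, is to concatenate per-residue protein language-model embeddings with per-residue geometric node features before a message-passing update. The function names, dimensions, and the single linear-plus-ReLU update are illustrative assumptions, not the specific fusion used in any particular geometric network.

```python
import numpy as np

rng = np.random.default_rng(1)

def fuse_features(geom_feats, plm_embeds):
    """Concatenate per-residue geometric features with protein
    language-model embeddings (a simple late-fusion strategy)."""
    return np.concatenate([geom_feats, plm_embeds], axis=-1)

def message_passing_step(node_feats, adjacency, weight):
    """One toy message-passing update on the fused features:
    sum neighbor features, then apply a linear map and ReLU."""
    agg = adjacency @ node_feats           # aggregate over neighbors
    return np.maximum(agg @ weight, 0.0)   # ReLU non-linearity

# Toy protein: 6 residues, 8-dim geometric features, 16-dim PLM embeddings.
n_res, d_geom, d_plm, d_out = 6, 8, 16, 4
geom = rng.normal(size=(n_res, d_geom))
plm = rng.normal(size=(n_res, d_plm))
adj = (rng.random((n_res, n_res)) < 0.5).astype(float)  # contact graph
W = rng.normal(size=(d_geom + d_plm, d_out))

fused = fuse_features(geom, plm)
out = message_passing_step(fused, adj, W)
```

The key design point is that the sequence-derived embeddings enter the network as additional node features, so the geometric message passing itself is unchanged and any graph- or equivariant-network backbone can consume them.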