Pre-trained models for Natural Language (NL) like BERT and GPT have recently been shown to transfer well to Programming Languages (PL) and largely benefit a broad set of code-related tasks. Despite their success, most current methods either rely on an encoder-only (or decoder-only) pre-training that is suboptimal for generation (resp. understanding) tasks, or process the code snippet in the same way as NL, neglecting the special characteristics of PL such as token types. We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed by developer-assigned identifiers. Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning. In addition, we propose a novel identifier-aware pre-training task that enables the model to distinguish which code tokens are identifiers and to recover them when they are masked. Furthermore, we propose to exploit the user-written code comments with a bimodal dual generation task for better NL-PL alignment. Comprehensive experiments show that CodeT5 significantly outperforms prior methods on understanding tasks such as code defect detection and clone detection, and on generation tasks across various directions including PL-NL, NL-PL, and PL-PL. Further analysis reveals that our model can better capture semantic information from code. Our code and pre-trained models are released at https://github.com/salesforce/CodeT5.
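The identifier-aware pre-training task described above can be illustrated with a simplified sketch: identifiers in a code snippet are replaced by sentinel tokens in the encoder input, and the decoder target pairs each sentinel with the identifier it hides. This is only an illustrative approximation, assuming a regex-based notion of "identifier"; the actual CodeT5 pre-training uses proper token-type information from the code tokenizer, not the toy heuristic below.

```python
import re

# Keywords to exclude from masking (toy subset; CodeT5 uses real
# token-type annotations rather than a keyword list).
PY_KEYWORDS = {"def", "return", "if", "else", "for", "while", "in"}

def mask_identifiers(code):
    """Replace each unique identifier with a T5-style sentinel token.

    Returns the masked input and the target string that pairs each
    sentinel with the identifier the model must recover.
    """
    ident_pattern = re.compile(r"\b[A-Za-z_][A-Za-z0-9_]*\b")
    mapping = {}

    def repl(match):
        name = match.group(0)
        if name in PY_KEYWORDS:      # keep language keywords intact
            return name
        if name not in mapping:      # same identifier -> same sentinel
            mapping[name] = f"<extra_id_{len(mapping)}>"
        return mapping[name]

    masked = ident_pattern.sub(repl, code)
    target = " ".join(f"{tok} {name}" for name, tok in mapping.items())
    return masked, target

snippet = "def add(a, b): return a + b"
masked, target = mask_identifiers(snippet)
# masked: "def <extra_id_0>(<extra_id_1>, <extra_id_2>): return <extra_id_1> + <extra_id_2>"
# target: "<extra_id_0> add <extra_id_1> a <extra_id_2> b"
```

Note that repeated occurrences of the same identifier share one sentinel, which is what lets the model learn that the masked tokens refer to the same program entity.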
The C2C12 cell line is frequently used as a model of skeletal muscle differentiation. In our serum-free defined culture system, differentiation of C2C12 cells into myotubes required surface-bound signals such as substrate-adsorbed vitronectin or laminin. Based on this substrate requirement of myotube formation, we developed a photolithography-based method to pattern C2C12 myotubes, in which myotubes formed exclusively on vitronectin surface patterns. We determined that the optimal line width to form single myotubes is approximately 30 μm. To illustrate a possible application of this method, we patterned myotubes on top of commercial substrate-embedded microelectrodes. In contrast to previous experiments in which cell patterning was achieved by selective attachment of the cells to patterned surfaces in a medium that contained all the factors necessary for differentiation, this study illustrates that surface patterning of a signaling molecule, which is essential for skeletal muscle differentiation in a defined system, can result in the formation of aligned myotubes on the patterns. This technique is being developed for applications in cell biology, tissue engineering, and robotics.
Monodisperse nanospheres are formed by coordination polymerization of tetrakis(4-pyridyl)porphyrin–metal complexes with chloroplatinic acid in aqueous solution. The porphyrin nanospheres and their platinized nanocomposites have potential applications in catalysis and solar energy conversion systems.