Polymer conjugation increases an enzyme's circulation time and stability for use as a therapeutic agent, but this attachment inevitably alters the enzyme's properties. Covalent attachment of multiple poly(ethylene glycol) (PEG) chains of 2, 5, 10, or 20 kDa increases the molecular weight and hydrodynamic radius of the model enzyme trypsin, bringing the polymer–enzyme conjugates within the size limits recommended for polymer-directed enzyme prodrug therapy (PDEPT) applications. The denaturation temperature (Td) increases from 49 to 60 °C, expanding the enzyme's workable range of conditions. Functionalization with PEG chains of varying lengths maintains trypsin's enzymatic activity: conjugate activities are 79–120% of native trypsin at room temperature and 221–432% of native trypsin at 37 °C.
In this paper, rather than focusing on genes as an organising concept around which historical considerations of theory and practice in genetics are elucidated, we place genetic markers at the heart of our analysis. This reflects their central role in the subject of our account: livestock genetics concerning the domesticated pig, Sus scrofa. We define a genetic marker as a (usually material) element existing in different forms in the genome that can be identified and mapped using a variety (and often a combination) of quantitative, classical and molecular genetic techniques. The convergence of pig genome researchers around the common object of the marker from the early 1990s allowed the distinctive theories and approaches of quantitative and molecular genetics concerning the size and distribution of gene effects to align (but never fully integrate) in projects to populate genome maps. Critical to this was the nature of markers as ontologically inert, internally heterogeneous and relational. Although genes remained important as an organising and categorising principle, the particular concatenation of limitations, opportunities and intended research goals of the pig genetics community meant that a progressively stronger focus on the identification and mapping of markers, rather than genes per se, became a hallmark of the community. We therefore detail a way of doing genetics different from that presented in more gene-centred accounts, and in doing so reveal the presence of practices, concepts and communities that would otherwise be hidden.
DNA sequencing has been characterised by scholars and life scientists as an example of 'big', 'fast' and 'automated' science in biology. This paper argues, however, that these characterisations are a product of a particular interpretation of what sequencing is, which I call 'thin sequencing'. The 'thin sequencing' perspective focuses on the determination of the order of bases in a particular stretch of DNA. Based upon my research on the pig genome mapping and sequencing projects, I provide an alternative 'thick sequencing' perspective, which also includes a number of practices that enable the sequence to travel across, and be used in, wider communities. Taking sequencing in the thin sense, as an event demarcated by the determination of sequences in automated sequencing machines and computers, has consequences for the historical analysis of sequencing projects: it focuses attention on those parts of the work of sequencing that are more centralised, fast (and accelerating) and automated. I argue instead that sequencing can be interpreted as a more open-ended process including activities such as the generation of a minimum tile path or annotation, and I detail the historiographical and philosophical consequences of this move.
Highlights:
- DNA sequencing is primarily understood through a 'thin sequencing' perspective.
- I propose a 'thick sequencing' perspective.
- Thick sequencing includes different stages of assembly, evaluation and annotation.
- An alternative picture of the nature and organisation of sequencing is presented.
From the 1980s onwards, the Roslin Institute and its predecessor organizations faced budget cuts, organizational upheaval and considerable insecurity. Over the next few decades, it was transformed by the introduction of molecular biology and transgenic research, but remained a hub of animal geneticists conducting research aimed at the livestock-breeding industry. This paper explores how these animal geneticists embraced genomics in response to the many-faceted precarity that the Roslin Institute faced, establishing it as a global centre for pig genomics research through forging and leading the Pig Gene Mapping Project (PiGMaP); developing and hosting resources, such as a database for genetic linkage data; and producing associated statistical and software tools to analyse the data. The Roslin Institute leveraged these resources to play a key role in further international collaborations as a hedge against precarity. This adoption of genomics was strategically useful, as it took advantage of policy shifts at the national and European levels towards funding research with biotechnological potential. As genomics constitutes a set of infrastructures and resources with manifold uses, the development of capabilities in this domain also helped Roslin to diversify as a response to precarity.
The history of genomic research on the pig (Sus scrofa)—as uncovered through archival research, oral histories, and the analysis of a quantitative dataset and co-authorship network—demonstrates the importance of two distinct genealogies. These consist of research programs focused on agriculturally oriented genetics, on the one hand, and systematics research concerned with evolution and diversity, on the other. The relative weight of these two modes of research shifted following the production of a reference genome for the species from 2006 to 2011. Before this inflection point, the research captured in our networks mainly involved intensive sequencing that concentrated primarily on increasing the resolution of genomic data both in particular regions and more widely across the genome. Sequencing practices later became more extensive, with greater focus on the generation and comparison of sequence data across and between populations. We explain these shifts in research modes as a function of the availability, circulation, distribution, and exchange of genomic tools and resources—including data and materials—concerning the pig in general, and increasingly for particular populations. Consequently, we describe the history of pig genomics as constituting a kind of bricolage, in which geneticists cobbled together resources to which they had access—often ones produced by them for other purposes—in pursuit of their research aims. The concept of bricolage adds to the thicker vision of genomics that we have shown throughout the special issue and further highlights the singularity of the dominant, thin narrative focused on the production of the human reference sequence at large-scale genome centers. This essay is part of a special issue entitled The Sequences and the Sequencers: A New Approach to Investigating the Emergence of Yeast, Human, and Pig Genomics, edited by Michael García-Sancho and James Lowe.