The chemical complexity of petroleum products has, until recently, precluded calculational techniques based on a definitive analysis of the molecular species involved. Instead, readily observed or easily (quickly and inexpensively) measured macroscopic properties (gravity, boiling range, etc.) have been used to correlate information empirically. In some instances, the predictions have been “tweaked” on a local, data‐centric basis or extended by quasi‐theoretical analysis. Recent computer‐aided advancements [134] in multivariate electromagnetic emission, NMR (nuclear magnetic resonance), and/or MS (mass spectrometry) analyses allow for the reliable and repeatable use of much more detailed experimental data. The ability to speciate cuts by individual chemical components will revolutionize both refining and the regulation of refined products as such quality data become available.

So why would one bother with older analysis and prediction methods? It is likely, given the operational sophistication and investment costs needed to deliver this highly speciated information properly, that the old ways will die, but slowly. Most refineries and petrochemical plants try to get by with the minimum number of expensive on‐line analyzers, and multivariate data generation is not yet at a stage where it can easily be used for real‐time process control, or even product quality control. This barrier will exist for some time to come.

The historical development of crude oil processing technology has generated a great volume of data and “general” correlations, some of them contradictory. Most of the data upon which these methods are based have been around for 60 or more years. Many have withstood the “test of time”, having been examined and verified by numerous researchers, both public and private. The precision of some of the older data may not match what is generated today, but new entries in various refinery product databanks seem to confirm the validity of the best of the older procedures and data. The classical methods allow one to impute several otherwise unavailable characterizing pieces of data. It is essential that those engaged in the design and analysis of refinery units (which have multiple feed and product streams) become familiar with classical terminology and techniques, and eventually acquire an intuitive sense of what constitutes the “best answer” when confronted with several conflicting alternatives. The subject of this article is the proper methodology for imputing this knowledge from whatever data are at hand.
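As a concrete illustration of the kind of classical correlation discussed above (not a method prescribed by the article itself), the sketch below estimates the Watson (UOP) characterization factor from two easily measured macroscopic properties, API gravity and mean average boiling point. The input values and function names are hypothetical; they only suggest how such imputation might be organized in practice.

```python
# Illustrative sketch of a classical characterization correlation.
# Watson (UOP) characterization factor: K_W = Tb**(1/3) / SG,
# where Tb is the mean average boiling point in degrees Rankine and
# SG is the specific gravity at 60 degF. All input values are hypothetical.

def api_to_specific_gravity(api_gravity: float) -> float:
    """Standard conversion from API gravity to specific gravity at 60 degF."""
    return 141.5 / (api_gravity + 131.5)


def watson_k(mean_avg_boiling_point_R: float, specific_gravity: float) -> float:
    """Watson characterization factor from boiling point (degR) and specific gravity."""
    return mean_avg_boiling_point_R ** (1.0 / 3.0) / specific_gravity


if __name__ == "__main__":
    api = 35.0        # hypothetical API gravity of a distillate cut
    tb_degF = 450.0   # hypothetical mean average boiling point, degF
    sg = api_to_specific_gravity(api)
    kw = watson_k(tb_degF + 459.67, sg)  # convert degF to degR
    print(f"Specific gravity: {sg:.4f}, Watson K: {kw:.2f}")
```

A parameter such as K_W, imputed from gravity and boiling range alone, is the sort of characterizing quantity the classical methods supply when a full molecular speciation is unavailable.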