Many chemical manufacturing and separation processes, such as solvent extraction, comprise hierarchically complex configurations of functional process units. With increasing complexity, strategies that rely on heuristics become less reliable for design optimization. In this study, we explore deep reinforcement learning for mapping the space of feasible designs to find an optimization strategy that can match or exceed the performance of conventional optimization. To this end, we implement a highly configurable learning environment for the solvent extraction process to which we can couple state-of-the-art deep reinforcement learning agents. We evaluate the trained agents against heuristic optimization on a solvent process design task aimed at optimizing recovery efficiency and product purity. Results demonstrate that the agents successfully learned a strategy for predicting comparably optimal solvent extraction process designs across varying feed compositions.
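To illustrate how coupling a configurable process environment to an off-the-shelf deep reinforcement learning agent might look, here is a minimal sketch using the Gymnasium API. The environment name, the action encoding (a solvent-to-feed ratio per stage), and the surrogate recovery/purity reward are all illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class SolventExtractionEnv(gym.Env):
    """Toy environment: the agent sequentially configures extraction
    stages; the terminal reward trades off recovery efficiency against
    product purity. All dynamics below are invented placeholders."""

    def __init__(self, n_stages=5, n_feeds=3):
        self.n_stages = n_stages
        self.n_feeds = n_feeds
        # Observation: feed composition plus the ratios chosen so far.
        self.observation_space = spaces.Box(0.0, 2.0,
                                            shape=(n_feeds + n_stages,))
        # Action: solvent-to-feed ratio for the next stage.
        self.action_space = spaces.Box(0.1, 2.0, shape=(1,))

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        # Random feed composition models "varying feed compositions".
        self.feed = self.np_random.dirichlet(
            np.ones(self.n_feeds)).astype(np.float32)
        self.ratios = np.zeros(self.n_stages, dtype=np.float32)
        self.stage = 0
        return np.concatenate([self.feed, self.ratios]), {}

    def step(self, action):
        self.ratios[self.stage] = action[0]
        self.stage += 1
        done = self.stage == self.n_stages
        reward = self._recovery_and_purity() if done else 0.0
        obs = np.concatenate([self.feed, self.ratios])
        return obs, reward, done, False, {}

    def _recovery_and_purity(self):
        # Placeholder surrogate: diminishing returns in total solvent use,
        # purity eroded by excess solvent. Not a real mass balance.
        recovery = 1.0 - np.exp(-self.ratios.sum())
        purity = self.feed[0] / (self.feed[0] + 0.1 * self.ratios.mean())
        return float(0.5 * recovery + 0.5 * purity)
```

Any standard agent, for example PPO from Stable-Baselines3, could then be trained against this interface, which mirrors the agent-environment coupling the abstract describes.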
Neural Architecture Search (NAS) is a method for autonomously designing deep learning models that achieve top performance on tasks such as data classification and data retrieval by using defined search spaces and search strategies. NAS has demonstrated improvements over ad-hoc deep neural architectures across a variety of tasks, but it presents unique challenges related to bias in search spaces, the intensive training requirements of various search strategies, and inefficient model performance evaluation. Until recently, these challenges were the primary focus of NAS research, which concentrated on diversifying search spaces, improving search strategies, and evaluating candidate models faster. However, artificial intelligence (AI) on the edge has emerged as a significant area of research, and producing models that achieve top performance on small devices with limited resources has become a priority. Here NAS faces a limitation: it has historically produced ever-larger deep neural networks that are increasingly difficult to port to embedded devices because of memory limitations, computational bottlenecks, latency requirements, and power restrictions. In recent years, researchers have begun to address these constraints and to develop methods for porting deep neural networks to embedded devices, but few methods incorporate the target device itself into the training process efficiently. In this paper, we compile a list of methods actively being explored and discuss their limitations. We also present evidence supporting genetic algorithms as a method for hardware-aware NAS that efficiently accounts for hardware, power, and latency requirements during training.
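As a concrete, hedged illustration of the genetic-algorithm approach argued for above, the sketch below evolves (depth, width) architecture genes with a fitness function that rewards an accuracy proxy and penalizes estimated latency and power. The proxies, thresholds, and weights are invented placeholders; a real hardware-aware search would replace them with measurements taken on the target device:

```python
import random

def fitness(genome):
    """Score a candidate: accuracy proxy minus a hardware penalty.
    All three models below are toy stand-ins for on-device metrics."""
    depth, width = genome
    accuracy = 1.0 - 1.0 / (1.0 + 0.3 * depth + 0.05 * width)  # proxy
    latency_ms = 0.8 * depth * width / 16.0                    # proxy
    power_mw = 5.0 * depth + 0.4 * width                       # proxy
    penalty = (max(0.0, latency_ms - 20.0)
               + max(0.0, (power_mw - 120.0) / 100.0))
    return accuracy - 0.05 * penalty

def evolve(pop_size=20, generations=30):
    # Genome: (depth, width) of a hypothetical network backbone.
    pop = [(random.randint(2, 12), random.choice([8, 16, 32, 64]))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a[0], b[1])                  # one-point crossover
            if random.random() < 0.2:             # small mutation
                child = (max(2, child[0] + random.choice([-1, 1])),
                         child[1])
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best (depth, width):", best,
          "fitness:", round(fitness(best), 3))
```

Truncation selection and one-point crossover are deliberately simple here; the point is that the hardware penalty sits directly inside the fitness function, so latency and power constraints shape the search itself rather than being checked only after training.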