Fault detection and identification are new and challenging tasks for electrical generation plants that are based on solid oxide fuel cells. The use of a quantitative model of the plant together with a support vector machine to design and operate a supervised classification system is proposed. This type of system, which uses a few easy-to-measure features selected through the maximisation of a classification error bound, proved to be effective in revealing a faulty condition and identifying it among the four considered fault classes.

Introduction: Solid oxide fuel cells (SOFCs) are electrochemical reactors that have been extensively studied for their application in power generation plants, for both stationary and transportation use [1]. Although SOFCs exhibit high energy conversion efficiency and environmental compatibility, systems based on these fuel cells still suffer from low reliability (of both the fuel cells and the entire system) and a limited lifetime. Thus, automatic fault detection and identification (FDI) are of paramount importance for the development of SOFC power generation plants and their commercial diffusion.

Several FDI methods have been proposed and applied to a variety of systems in the chemical and mechanical engineering fields [2, 3]. Since the physical processes governing the functioning of an SOFC-based power plant are known, a quantitative model of the system under analysis can be developed. This type of model allows for reliable and physically motivated fault diagnosis and represents an ideal basis for developing FDI techniques. Although the model can be used offline to generate reliable data for the power plant under both healthy and faulty conditions, the FDI task requires a technique that, using such data, reveals and classifies the faults while considering online data.

Pattern recognition is a discipline that has long enabled the development of FDI techniques [2]. However, to the best of the authors' knowledge, none of these techniques have been proposed in the context of SOFC systems. A recent report addressing fault diagnosis in SOFC systems [4] only discusses the fault tree and fault signature matrix (i.e. deductive top-down tools, typically used in safety and reliability engineering, based on binary information) as FDI approaches. Although FDI has received more attention in proton exchange membrane fuel cells [5], pattern recognition techniques have rarely been used for fault classification.

In this Letter, we propose a supervised classification technique that is based on a support vector machine (SVM) combined with a quantitative model of the SOFC system and performs FDI during the operation of a power generation plant. SVMs are well known in pattern recognition because of their generalisation properties and accurate results in multiple applications [6, 7]. In the proposed FDI technique, SVMs are used to classify faults, and a feature selection method based on the maximisation of an analytical error bound [8] is developed to determine the most relevant physical variables to be measured...
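As a minimal, purely illustrative sketch of the classification step only (not the Letter's model-generated data, selected features, or tuning), the following Python example trains a multi-class SVM on synthetic placeholder features for a healthy class plus four hypothetical fault classes; the feature names, data distribution, and hyperparameters are assumptions.

import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Simulated easy-to-measure features (placeholders, e.g. stack voltage,
# air outlet temperature, fuel utilisation) for 5 classes:
# class 0 = healthy operation, classes 1-4 = fault types.
n_per_class, n_features, n_classes = 200, 3, 5
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# RBF-kernel SVM; scikit-learn handles the multi-class case internally.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X, y)

# Online FDI step: classify a newly measured feature vector.
new_sample = rng.normal(loc=2, scale=0.5, size=(1, n_features))
print("predicted class:", clf.predict(new_sample))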
Downlink beamforming is a key technology for cellular networks. However, computing the transmit beamformer that maximizes the weighted sum rate subject to a power constraint is an NP-hard problem. As a result, iterative algorithms that converge to a local optimum are used in practice. Among them, the weighted minimum mean square error (WMMSE) algorithm has gained popularity, but its computational complexity and consequent latency have motivated the need for lower-complexity approximations at the expense of performance.

Motivated by the recent success of deep unfolding in trading off complexity and performance, we propose the novel application of deep unfolding to the WMMSE algorithm for a MISO downlink channel. The main idea consists of mapping a fixed number of iterations of the WMMSE algorithm into trainable neural network layers, whose architecture reflects the structure of the original algorithm. With respect to traditional end-to-end learning, deep unfolding naturally incorporates expert knowledge, with the benefits of immediate and well-grounded architecture selection, fewer trainable parameters, and better explainability. However, the formulation of the WMMSE algorithm, as described in Shi et al., is not amenable to unfolding because of a matrix inversion, an eigendecomposition, and a bisection search performed at each iteration. Therefore, we present an alternative formulation that circumvents these operations by resorting to projected gradient descent. By means of simulations, we show that, in most settings, the unfolded WMMSE outperforms or matches the WMMSE for a fixed number of iterations, with the advantage of a lower computational load.
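For context, the weighted sum rate maximization problem referred to above is commonly written as follows; the notation (user weights \alpha_k, channels \mathbf{h}_k, beamformers \mathbf{v}_k, noise powers \sigma_k^2, power budget P_{\max}) is a standard choice and may differ from the paper's.

\[
\max_{\{\mathbf{v}_k\}} \;\; \sum_{k=1}^{K} \alpha_k \log_2\!\left(1 + \frac{|\mathbf{h}_k^{H}\mathbf{v}_k|^{2}}{\sum_{j \neq k} |\mathbf{h}_k^{H}\mathbf{v}_j|^{2} + \sigma_k^{2}}\right)
\quad \text{s.t.} \quad \sum_{k=1}^{K} \|\mathbf{v}_k\|^{2} \le P_{\max}.
\]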
The weighted minimum mean square error (WMMSE) algorithm was proposed to provide a locally optimal solution to the otherwise NP-hard weighted sum rate maximization beamforming problem, but it can still be prohibitively complex for real-time implementation. Given the success of deep unfolding in trading off complexity and performance, we propose to apply deep unfolding to the WMMSE algorithm. With respect to traditional end-to-end learning, deep unfolding incorporates expert knowledge, with the benefits of immediate and well-grounded architecture selection, fewer trainable parameters, and better explainability. However, the classical formulation of the WMMSE algorithm given by Shi et al. is not amenable to deep unfolding because of matrix inversions, eigendecompositions, and bisection searches. Therefore, we present an alternative formulation that circumvents these operations. By means of simulations, we show that the deep unfolded WMMSE algorithm performs on par with the original WMMSE algorithm, at a lower computational load.
Downlink beamforming is a key technology for cellular networks. However, computing beamformers that maximize the weighted sum rate (WSR) subject to a power constraint is an NP-hard problem. The popular weighted minimum mean square error (WMMSE) algorithm converges to a local optimum but still exhibits considerable complexity.

To address this trade-off between complexity and performance, we propose to apply deep unfolding to the WMMSE algorithm for a MU-MISO downlink channel. The main idea consists of mapping a fixed number of iterations of the WMMSE algorithm into trainable neural network layers. However, the formulation of the WMMSE algorithm, as provided in Shi et al., involves matrix inversions, eigendecompositions, and bisection searches, operations that are hard to implement as standard network layers. Therefore, we present a variant of the WMMSE algorithm that i) circumvents these operations by applying projected gradient descent and ii) as a result, involves only operations that can be computed efficiently in parallel on hardware platforms designed for deep learning. We demonstrate that our variant of the WMMSE algorithm converges to a stationary point of the WSR maximization problem, and we accelerate its convergence by incorporating Nesterov acceleration, and a generalization thereof, as learnable structures. By means of simulations, we show that the proposed network architecture i) performs on par with the WMMSE algorithm truncated to the same number of iterations, yet at a lower complexity, and ii) generalizes well to changes in the channel distribution.
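As a rough, self-contained sketch of the idea rather than the authors' exact architecture, the following PyTorch module unfolds a fixed number of WMMSE iterations for a single-cell MU-MISO channel: it keeps the closed-form receiver and weight updates and replaces the beamformer update with projected gradient descent whose per-layer step size is trainable. User weights are taken as one, the Nesterov-type acceleration is omitted, and the network sizes, initialization, and training loop are assumptions made only for illustration.

import torch
import torch.nn as nn

class UnfoldedWMMSE(nn.Module):
    def __init__(self, n_layers=4, pgd_steps=2):
        super().__init__()
        self.n_layers = n_layers
        self.pgd_steps = pgd_steps
        # One trainable PGD step size per unfolded layer.
        self.step = nn.Parameter(0.1 * torch.ones(n_layers))

    @staticmethod
    def project(V, p_max):
        # Projection onto the total power constraint ||V||_F^2 <= p_max.
        norm = torch.linalg.norm(V)
        scale = torch.clamp(torch.sqrt(torch.tensor(p_max)) / norm, max=1.0)
        return V * scale

    def forward(self, H, p_max=1.0, sigma2=1.0):
        # H: (K, M) complex matrix with rows h_k^H; V: (M, K) beamformers.
        V = self.project(H.conj().T, p_max)            # matched-filter init
        for l in range(self.n_layers):
            # Closed-form receiver and MSE-weight updates (as in WMMSE).
            HV = H @ V                                  # entry [k, j] = h_k^H v_j
            total = (HV.abs() ** 2).sum(dim=1) + sigma2
            u = HV.diagonal() / total                   # MMSE receivers
            w = 1.0 / (1.0 - (u.conj() * HV.diagonal()).real)
            # Beamformer update: PGD replaces inversion + bisection.
            A = (H.conj().T * (w * u.abs() ** 2)) @ H   # (M, M)
            B = H.conj().T * (w * u)                    # (M, K)
            for _ in range(self.pgd_steps):
                V = self.project(V - self.step[l] * (A @ V - B), p_max)
        return V

def sum_rate(H, V, sigma2=1.0):
    HV = (H @ V).abs() ** 2
    sig = HV.diagonal()
    interf = HV.sum(dim=1) - sig + sigma2
    return torch.log2(1.0 + sig / interf).sum()

# Unsupervised training: maximize the sum rate over random Rayleigh channels.
torch.manual_seed(0)
K, M = 4, 8
model = UnfoldedWMMSE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    H = (torch.randn(K, M) + 1j * torch.randn(K, M)) / (2 ** 0.5)
    loss = -sum_rate(H, model(H))
    opt.zero_grad()
    loss.backward()
    opt.step()

Because every layer uses only matrix products, elementwise operations, and a norm-based projection, the whole forward pass maps onto operations that parallelize well on deep learning hardware, which is the point of avoiding the inversion, eigendecomposition, and bisection steps.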