Very recently, Warma [42] has shown that for nonlocal PDEs associated with the fractional Laplacian, the classical notion of controllability from the boundary does not make sense and must therefore be replaced by a control localized outside the open set where the PDE is solved. Building on that result, in this paper we introduce a new class of source identification and optimal control problems in which the source/control is placed outside the observation domain where the PDE is satisfied. Classical diffusion models lack this flexibility, as they assume the source/control is located either inside the domain or on its boundary; this is essentially due to the locality of the underlying operators. We exploit the nonlocality of the fractional operator to create a framework that allows placing a source/control outside the observation domain. We consider the Dirichlet, Robin, and Neumann source identification and optimal control problems. These problems require dealing with the nonlocal normal derivative (which we call the interaction operator). We develop a functional-analytic framework, show well-posedness, and derive the first-order optimality conditions for these problems. We introduce a new approach to approximate, with a convergence rate, the Dirichlet problem with nonzero exterior condition. Numerical examples confirm our theoretical findings and illustrate the practicality of our approach.
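To fix ideas, here is a minimal sketch of the exterior Dirichlet setting described above; the precise function spaces, cost functional, and constant C_{n,s} are as in the paper, and the form below is the one standard in the fractional-Laplacian literature rather than a quotation of the paper's equations:

\[
(-\Delta)^s u = f \ \text{ in } \Omega, \qquad u = z \ \text{ in } \mathbb{R}^n \setminus \Omega,
\]

where the source/control z is supported in an open set \(\omega \subset \mathbb{R}^n \setminus \overline{\Omega}\), strictly outside the observation domain \(\Omega\). The Robin and Neumann problems replace the exterior condition by one involving the nonlocal normal derivative (the interaction operator)

\[
\mathcal{N}_s u(x) = C_{n,s} \int_{\Omega} \frac{u(x) - u(y)}{|x - y|^{n + 2s}} \, dy, \qquad x \in \mathbb{R}^n \setminus \overline{\Omega},
\]

which plays the role that the classical normal derivative \(\partial u / \partial \nu\) plays for the Laplacian.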
In this work we consider a generalized bilevel optimization framework for solving inverse problems. We introduce the fractional Laplacian as a regularizer to improve reconstruction quality and compare it with total variation regularization. We emphasize that the key advantage of using the fractional Laplacian as a regularizer is that it leads to a linear operator, as opposed to total variation regularization, which results in a nonlinear degenerate operator. Inspired by residual neural networks, we develop a dedicated bilevel optimization neural network with variable depth for a general regularized inverse problem, which learns the optimal strength of regularization and the exponent of the fractional Laplacian. We also draw some parallels between an activation function in a neural network and regularization. We illustrate how to incorporate various regularizer choices into our proposed network. As an example, we consider tomographic reconstruction as a model problem and show an improvement in reconstruction quality, especially for limited data, via fractional Laplacian regularization. We successfully learn the regularization strength and the fractional exponent via our proposed bilevel optimization neural network. We observe that fractional Laplacian regularization outperforms total variation regularization. This is especially encouraging, and important, in the case of limited and noisy data.
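The linearity advantage noted above can be made concrete in the simplest case where the forward operator is the identity (denoising): the lower-level problem min_u (1/2)||u - d||^2 + (alpha/2)||(-Delta)^{s/2} u||^2 has the closed-form solution (I + alpha (-Delta)^s)^{-1} d, a diagonal solve in Fourier space. Below is a minimal NumPy sketch under periodic-boundary assumptions; it is illustrative only and not the paper's tomographic setup, where the forward operator is the Radon transform:

    import numpy as np

    def frac_lap_denoise(d, alpha, s):
        # Solve (I + alpha * (-Delta)^s) u = d on a periodic grid.
        # The spectral fractional Laplacian is diagonal in Fourier
        # space, acting as multiplication by |xi|^(2s), so the
        # regularized problem reduces to one FFT, a pointwise
        # division, and one inverse FFT.
        n1, n2 = d.shape
        xi1 = 2.0 * np.pi * np.fft.fftfreq(n1)
        xi2 = 2.0 * np.pi * np.fft.fftfreq(n2)
        XI1, XI2 = np.meshgrid(xi1, xi2, indexing="ij")
        symbol = (XI1**2 + XI2**2) ** s  # |xi|^(2s)
        u_hat = np.fft.fft2(d) / (1.0 + alpha * symbol)
        return np.real(np.fft.ifft2(u_hat))

    # Usage (hypothetical values): d = noisy 64x64 image,
    # alpha = regularization strength, s in (0, 1] = fractional
    # exponent; both are the quantities learned in the bilevel setup.
    # u = frac_lap_denoise(d, alpha=0.1, s=0.8)

By contrast, a total variation regularizer makes the lower-level optimality condition nonlinear and degenerate, which is precisely the comparison drawn above.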
This paper introduces a novel algorithmic framework for a deep neural network (DNN) which, in a mathematically rigorous manner, allows us to incorporate history (or memory) into the network: it ensures all layers are connected to one another. This DNN, called Fractional-DNN, can be viewed as a time-discretization of a fractional-in-time nonlinear ordinary differential equation (ODE). The learning problem is then a minimization problem subject to that fractional ODE as a constraint. We emphasize that an analogy between the existing DNN and ODEs with a standard time derivative is by now well known. The focus of our work is the Fractional-DNN. Using the Lagrangian approach, we provide a derivation of the backward propagation and the design equations. We test our network on several datasets for classification problems. Fractional-DNN offers various advantages over the existing DNN. The key benefits are a significant improvement to the vanishing gradient issue, due to the memory effect, and better handling of nonsmooth data, due to the network's ability to approximate nonsmooth functions.
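For illustration, here is a minimal sketch of the forward propagation of such a network, assuming a Caputo fractional time derivative of order gam in (0,1) discretized with the standard L1 scheme; the function names and the choice of tanh activation are placeholders, not the paper's exact design:

    import numpy as np
    from scipy.special import gamma as Gamma

    def fractional_forward(u0, weights, biases, gam=0.9, h=1.0):
        # Forward pass of a fractional-ODE network. The L1 scheme for
        # the Caputo derivative makes the update at layer k a weighted
        # sum over *all* earlier layers: this is the memory effect that
        # connects every layer to every previous one.
        act = np.tanh
        c = Gamma(2.0 - gam) * h**gam
        a = lambda j: (j + 1) ** (1.0 - gam) - j ** (1.0 - gam)  # L1 weights
        u = [u0]
        for k in range(len(weights)):
            # History term: telescoping sum over all stored states.
            history = sum(a(j) * (u[k + 1 - j] - u[k - j])
                          for j in range(1, k + 1))
            f_k = act(weights[k] @ u[k] + biases[k])  # layer k nonlinearity
            u.append(u[k] - history + c * f_k)
        return u[-1]

As gam -> 1, the L1 weights a(j) vanish for j >= 1 and c -> h, so the update collapses to u_{k+1} = u_k + h f(u_k), the forward-Euler step underlying a standard residual network; gam < 1 is what couples each layer to the full history.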
We consider a continuous version of the Hegselmann-Krause model of opinion dynamics. Interaction between agents either leads to a state of consensus, in which agents converge to a single opinion as time evolves, or to a fragmented state with multiple opinions. In this work, we linearize the system about a uniform density solution and predict consensus or fragmentation based on properties of the resulting dispersion relation. The prediction differs depending on whether the initial agent distribution is uniform or nearly uniform. In the uniform case, we observe traveling fronts in the agent-based model and make predictions for the speed and the pattern selected by the front.
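As a sketch of the linearization step, assume the unnormalized nonlocal-transport form of the continuum model (the paper's influence kernel and normalization may differ), with uniform state \(\bar\rho\) and interaction radius R:

\[
\rho_t + \partial_x\bigl(\rho\, V[\rho]\bigr) = 0, \qquad
V[\rho](x,t) = \int_{-R}^{R} s\, \rho(x+s,t)\, ds.
\]

Since \(V[\bar\rho] = 0\), substituting \(\rho = \bar\rho + \varepsilon\, e^{ikx + \lambda(k)t}\) and keeping first-order terms gives the dispersion relation

\[
\lambda(k) = -ik\,\bar\rho \int_{-R}^{R} s\, e^{iks}\, ds
           = 2\bar\rho \left( \frac{\sin(kR)}{k} - R\cos(kR) \right),
\]

so wavenumbers with \(\lambda(k) > 0\) grow and the uniform state fragments into clusters, while \(\lambda(k) \le 0\) for all k is consistent with consensus.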