The recent mass publication of court decisions in China, this article argues, is part of a larger trend toward an increasingly centralized Chinese judiciary. The transparency reform enables the Supreme People's Court (SPC) to directly control the information reporting process within the judicial hierarchy and to rein in local courts through public scrutiny, thereby functioning as a solution to the agency problem between the central and local governments. Interestingly, evidence shows that local courts responded strategically, leaving disclosure of decisions far below the level that the central government requires. In the meantime, the central government has dispatched increasing numbers of judicial cadres to local courts, and these provincial judicial cadres are associated with a more than 10% higher disclosure rate of judicial decisions, suggesting that centralization of personnel is used as a tool to effectively implement the SPC's centralized policy. The transparency reform, coinciding with reforms in many other domains, embodies an important shift toward a more centralized judicial sector in China.
Solving partial differential equations (PDEs) is a central task in scientific computing. Recently, neural network approximation of PDEs has received increasing attention due to its flexible meshless discretization and its potential for high-dimensional problems. One fundamental numerical difficulty is that random samples in the training set introduce statistical errors into the discretization of the loss functional, which may become the dominant error in the final approximation and therefore overshadow the modeling capability of the neural network. In this work, we propose a new minmax formulation to simultaneously optimize the approximate solution, given by a neural network model, and the random samples in the training set, provided by a deep generative model. The key idea is to use a deep generative model to adjust the random samples in the training set such that the residual induced by the approximate PDE solution maintains a smooth profile while it is being minimized. This idea is achieved by implicitly embedding the Wasserstein distance between the residual-induced distribution and the uniform distribution into the loss, which is then minimized together with the residual. A nearly uniform residual profile means that its variance is small for any normalized weight function, so the Monte Carlo approximation error of the loss functional is reduced significantly for a given sample size. The adversarial adaptive sampling (AAS) approach proposed in this work is the first attempt to combine two essential components, minimizing the residual and seeking the optimal training set, into one minmax objective functional for the neural network approximation of PDEs.
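The variance-reduction argument above can be illustrated in isolation from the neural network machinery. The sketch below is a hypothetical numpy example (not the authors' AAS algorithm): it takes a sharply peaked function standing in for a squared residual on [0, 1], and compares plain Monte Carlo integration with uniform samples against sampling from a density proportional to the residual profile, which corresponds to the ideal training set the generative model is meant to approach. The function `r2` and the grid-based inverse-CDF sampler are illustrative constructions, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def r2(x):
    # Hypothetical peaked squared-residual profile on [0, 1].
    return np.exp(-200.0 * (x - 0.5) ** 2)

# Dense grid: reference integral and a simple inverse-CDF sampler.
grid = np.linspace(0.0, 1.0, 10001)
vals = r2(grid)
dx = np.diff(grid)
Z = np.sum((vals[1:] + vals[:-1]) / 2 * dx)   # true value of the loss functional
pdf = vals / Z                                 # residual-induced density
cdf = np.concatenate(([0.0], np.cumsum((pdf[1:] + pdf[:-1]) / 2 * dx)))
cdf /= cdf[-1]

def estimate(n, adapted):
    if adapted:
        # Sample from the residual-induced density via inverse-CDF lookup,
        # then importance-weight so the estimator stays unbiased.
        x = np.interp(rng.random(n), cdf, grid)
        w = np.interp(x, grid, pdf)
        return np.mean(r2(x) / w)
    # Plain Monte Carlo with uniform samples on [0, 1].
    return np.mean(r2(rng.random(n)))

trials, n = 200, 1000
uni = np.array([estimate(n, adapted=False) for _ in range(trials)])
ada = np.array([estimate(n, adapted=True) for _ in range(trials)])
print(f"reference integral : {Z:.6f}")
print(f"uniform   est. std : {uni.std():.2e}")
print(f"adapted   est. std : {ada.std():.2e}")
```

Both estimators target the same integral, but the adapted sampler sees a flat integrand `r2(x) / pdf(x)`, so its variance collapses; this is the effect the Wasserstein term in the AAS loss is designed to induce on the residual profile during training.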