We present a new family of neural networks based on the Schrödinger equation (SE-NET). In this analogy, the trainable weights of the neural network correspond to the physical quantities of the Schrödinger equation. These physical quantities can be trained using the complex-valued adjoint method. Since forward propagation through the SE-NET corresponds to the evolution of a physical system, its outputs can be computed with a physical solver, and the trained network is transferable to actual optical systems. As a demonstration, we implemented the SE-NET with the Crank-Nicolson finite difference method on PyTorch. Numerical simulations show that the performance of the SE-NET improves as the network becomes wider and deeper. However, training becomes unstable due to gradient explosion as the network grows deeper. We therefore introduce phase-only training, which updates only the phase of the potential field (refractive index) in the Schrödinger equation. Because this preserves the unitarity of the system during training, it enables stable training even for deep SE-NET models. In addition, the SE-NET enables joint optimization of physical structures and digital neural networks. As an example, we numerically demonstrate end-to-end machine learning (ML) with an optical frontend for a compact spectrometer. Our results extend the application field of ML to hybrid physical-digital optimization.
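
To make the forward model concrete, the following is a minimal sketch, not the authors' implementation, of Crank-Nicolson propagation of a 1D Schrödinger-type equation in PyTorch, with the potential field treated as a trainable parameter so that gradients with respect to it are obtained by automatic differentiation. The grid sizes, units, boundary conditions, and loss function are illustrative assumptions.

```python
# Minimal sketch: Crank-Nicolson propagation of a 1D Schrödinger-type equation
# in PyTorch, with the potential V as a trainable parameter (illustrative setup).
import torch

N, steps = 64, 8        # spatial points and propagation steps (assumed values)
dx, dz = 0.1, 0.05      # grid spacings (assumed, arbitrary units)

# Trainable potential field (the "weights" of the SE-NET in this analogy)
V = torch.zeros(N, dtype=torch.float64, requires_grad=True)

# Discrete Laplacian with Dirichlet boundaries, dense for simplicity
lap = (torch.diag(torch.full((N,), -2.0, dtype=torch.float64))
       + torch.diag(torch.ones(N - 1, dtype=torch.float64), 1)
       + torch.diag(torch.ones(N - 1, dtype=torch.float64), -1)) / dx**2

def propagate(psi, V):
    """One Crank-Nicolson sweep: (I + i dz H/2) psi_next = (I - i dz H/2) psi."""
    H = (-0.5 * lap + torch.diag(V)).to(torch.complex128)
    I = torch.eye(N, dtype=torch.complex128)
    A = I + 0.5j * dz * H
    B = I - 0.5j * dz * H
    for _ in range(steps):
        psi = torch.linalg.solve(A, B @ psi)
    return psi

# Input field: a Gaussian wave packet (illustrative)
x = torch.arange(N, dtype=torch.float64) * dx
psi0 = torch.exp(-((x - x.mean()) ** 2)).to(torch.complex128)

out = propagate(psi0, V)
loss = (out.abs() ** 2).mean()   # placeholder loss on the output intensity
loss.backward()                  # gradient w.r.t. the potential via autograd
print(V.grad.shape)
```

In this sketch the Crank-Nicolson update is unitary when the potential is real, which is the property the abstract's phase-only training exploits to keep deep models stable; restricting updates to the phase of the potential would correspond to constraining the parameterization accordingly.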