“…Another potential trend is building DNNs on biophysical (Tareen and Kinney, 2019) or physicochemical features (Yang et al., 2017; Liu et al., 2020), as deep models trained on these features might uncover novel patterns in the data, improve our understanding of the physicochemical principles of protein-nucleic acid regulatory interactions, and aid model interpretability. Other novel approaches include: 1) modifying DNN properties to improve the recovery of biologically meaningful motif representations (Koo and Ploenzke, 2021); 2) transformer networks (Devlin et al., 2018) and attention mechanisms (Vaswani et al., 2017), already widely used in protein sequence modeling (Jurtz et al., 2017; Rao et al., 2019; Vig et al., 2020; Repecka et al., 2021); 3) graph convolutional neural networks, a class of DNNs that operate directly on graphs and exploit their structural information, which could yield valuable insights if genomics problems can be reframed as graphs (Cranmer et al., 2020; Strokach et al., 2020); and 4) generative modeling (Foster, 2019), which may help exploit current knowledge in designing synthetic sequences with desired properties (Killoran et al., 2017; Wang Y. et al., 2020). For the latter, unsupervised training is used with approaches including: 1) autoencoders, which learn efficient representations of the training data, typically for dimensionality reduction (Way and Greene, 2018) or feature selection (Xie et al., 2017); 2) generative adversarial networks, which learn to generate new data with the same statistics as the training set (Wang Y. et al., 2020; Repecka et al., 2021); and 3) deep belief networks, which learn to probabilistically reconstruct their inputs, acting as feature detectors, and can be further trained with supervision to build efficient classifiers (Bu et al., 2017).…”
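To make the first of these unsupervised approaches concrete, the sketch below trains a minimal linear autoencoder that compresses toy one-hot-encoded DNA sequences into a low-dimensional representation, the basic idea behind using autoencoders for dimensionality reduction. The sequences, bottleneck size, and learning rate here are illustrative assumptions for demonstration only, not details taken from any of the cited studies.

```python
import numpy as np

# Illustrative sketch: a linear autoencoder compressing toy one-hot DNA
# sequences (assumed data, not from the cited works) to a 3-dim code.
rng = np.random.default_rng(0)

def one_hot(seq):
    """Flatten a DNA string into a one-hot vector (4 channels per base)."""
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    out = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        out[i, idx[base]] = 1.0
    return out.ravel()

seqs = ["ACGTACGT", "ACGTTGCA", "TTGCAACG", "ACGTACGA"]
X = np.stack([one_hot(s) for s in seqs])        # shape (4, 32)

n, d = X.shape
k = 3                                           # bottleneck dimension
W_enc = rng.normal(0.0, 0.1, (d, k))            # encoder weights
W_dec = rng.normal(0.0, 0.1, (k, d))            # decoder weights

def mse(W_enc, W_dec):
    """Mean squared reconstruction error over all entries of X."""
    return float(np.mean((X @ W_enc @ W_dec - X) ** 2))

mse_init = mse(W_enc, W_dec)
lr = 0.05
for _ in range(5000):
    Z = X @ W_enc                               # encode: (4, 3)
    err = Z @ W_dec - X                         # reconstruction error
    # gradients of the mean (over samples) squared reconstruction loss
    grad_dec = 2.0 * Z.T @ err / n
    grad_enc = 2.0 * X.T @ (err @ W_dec.T) / n
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse_final = mse(W_enc, W_dec)
```

In practice the encoder and decoder would be deeper networks with nonlinear activations, but even this linear version captures the core mechanism: the bottleneck forces the model to learn a compact representation that preserves most of the variation in the input sequences.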