In this paper, we revisit the parameter learning problem, namely the estimation of model parameters for Dynamic Bayesian Networks (DBNs). DBNs are directed graphical models of stochastic processes that encompass and generalize Hidden Markov Models (HMMs) and Linear Dynamical Systems (LDSs). Whenever we apply these models to economics and finance, we are forced to make modeling assumptions about the state dynamics and the graph topology (the DBN structure). These assumptions may be misspecified and may add noise compared to reality. A best-fit approach through maximum likelihood estimation misses this point and fits these models to the data at any price. We present here a new methodology that takes a radical point of view and instead focuses on the final efficiency of our model. Parameters are hence estimated in terms of their efficiency rather than their distributional fit to the data. The resulting optimization problem, namely finding the optimal parameters, is a hard one. We rely on the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) method to tackle this issue. We apply this method to the seminal problem of trend detection in financial markets. Numerical results show that the resulting parameters are less prone to overfitting than traditional moving-average crossover trend detection and perform better. Although developed here for algorithmic trading, the method is general: it can be applied to other real-world applications whenever there is no physical law underlying our DBNs.

In a Bayesian network over a directed acyclic graph $G = (V, E)$, the joint distribution factorizes as
\[
P(x_1, \dots, x_n) = \prod_{i=1}^{n} P(x_i \mid \pi_{x_i}),
\]
where the parents of a node $x$ are denoted by $\pi_x$. A dynamic Bayesian network (DBN) is defined as a pair $(B_0, B_{2d})$, where $B_0$ is a traditional Bayesian network representing the initial (or a priori) distribution of the random variables, which can be related to time $0$, and where $B_{2d}$ is a two-step dynamic Bayesian network describing the transition from time $t-1$ to time $t$ with the probability $P(x_t \mid x_{t-1})$ for any node $x$ belonging to $V$. The joint probability for two sets of nodes $V_t$ and $V_{t-1}$ is given by
\[
P(V_t \mid V_{t-1}) = \prod_{x \in V_t} P(x \mid \pi_x).
\]
The factorized joint probability law is computed by unrolling this two-step network over the time sequence. If we denote by $T$ the total length of the path and by $P(V_0)$ the joint probability of the initial network $B_0$, the probability of the trajectory from $V_0$ to $V_T$ is given by
\[
P(V_0, \dots, V_T) = P(V_0) \prod_{t=1}^{T} P(V_t \mid V_{t-1}).
\]
A dynamic Bayesian network thus respects the Markov property, which expresses that conditional distributions at time $t$ depend only on the state at time $t-1$. Dynamic Bayesian networks generalize probabilistic models such as the Hidden Markov Model (HMM) and the Kalman filter (KF). Beyond the mainstream Kalman filter and HMM models, whose DAG is given in figure 1, more complex DBNs can include multi-input networks with connections between observable and previous latent variables, as shown in figure 2. Another example is the combination of a Kalman filtering (KF) model and an echo state network (ESN), as provided by figure ...
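To make the factorized law above concrete, here is a minimal sketch, in Python, of the Markov decomposition $P(V_0) \prod_{t=1}^{T} P(V_t \mid V_{t-1})$ for a toy linear-Gaussian DBN with one latent chain and one observation chain (the Kalman-filter DAG of figure 1). All parameter names and default values (`mu0`, `s0`, `a`, `q`, `h`, `r`) are illustrative assumptions, not quantities from the paper.

```python
import numpy as np
from scipy.stats import norm

# Toy linear-Gaussian DBN (hypothetical parameters):
#   x_0 ~ N(mu0, s0^2)                   -- prior network B_0
#   x_t | x_{t-1} ~ N(a * x_{t-1}, q^2)  -- transition network B_2d
#   y_t | x_t     ~ N(h * x_t,     r^2)  -- observation given latent state

def joint_log_prob(x, y, mu0=0.0, s0=1.0, a=0.9, q=0.5, h=1.0, r=0.3):
    """Factorized joint log-density log P(V_0) + sum_t log P(V_t | V_{t-1}):
    by the Markov property, each slice conditions only on the previous one."""
    lp = norm.logpdf(x[0], mu0, s0) + norm.logpdf(y[0], h * x[0], r)  # P(V_0)
    for t in range(1, len(x)):
        lp += norm.logpdf(x[t], a * x[t - 1], q)   # latent transition
        lp += norm.logpdf(y[t], h * x[t], r)       # emission
    return lp

# Usage: score a short simulated trajectory.
rng = np.random.default_rng(0)
x = np.cumsum(0.5 * rng.normal(size=10))
y = x + 0.3 * rng.normal(size=10)
print(joint_log_prob(x, y))
```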
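In the same spirit, the efficiency-based estimation advocated in this paper can be sketched with a derivative-free search. The example below is only an illustration under assumed choices: it uses the `cma` package (Hansen's implementation of CMA-ES) and, as a stand-in efficiency criterion, the Sharpe ratio of a moving-average crossover signal on synthetic prices; the paper's actual objective scores the DBN-generated trend signal, and every name and constant here (`neg_sharpe`, the window encoding, the initial point) is hypothetical.

```python
import numpy as np
import cma  # pip install cma -- Hansen's reference CMA-ES implementation

rng = np.random.default_rng(1)
prices = 100.0 + np.cumsum(rng.normal(0.02, 1.0, size=1000))  # synthetic prices

def neg_sharpe(params):
    """Efficiency criterion (negated, since CMA-ES minimizes): annualized
    Sharpe ratio of a moving-average crossover signal with windows from params."""
    fast, slow = sorted(int(abs(p)) + 2 for p in params)
    ma_fast = np.convolve(prices, np.ones(fast) / fast, mode="valid")
    ma_slow = np.convolve(prices, np.ones(slow) / slow, mode="valid")
    n = min(len(ma_fast), len(ma_slow))
    signal = np.sign(ma_fast[-n:] - ma_slow[-n:])      # +1 long, -1 short
    pnl = signal[:-1] * np.diff(prices[-n:])           # next-period P&L
    if pnl.std() == 0.0:
        return 0.0
    return -np.sqrt(252.0) * pnl.mean() / pnl.std()

# Ask/tell loop: sample candidate parameters, score their efficiency,
# and let CMA-ES adapt its search distribution.
es = cma.CMAEvolutionStrategy([10.0, 50.0], 5.0, {"maxiter": 50, "verbose": -9})
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [neg_sharpe(c) for c in candidates])
print("best (fast, slow):", sorted(int(abs(p)) + 2 for p in es.result.xbest))
```

The point of the sketch is that the criterion requires neither gradients nor a distributional fit to the data, which is why a derivative-free optimizer such as CMA-ES is a natural tool for this kind of efficiency-based estimation.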