Recent studies have proposed employing biologically plausible recurrent neural networks (RNNs) to explore flexible decision-making processes in the brain. However, the mechanisms underlying the integration of bottom-up factors (such as incoming sensory signals) and top-down factors (such as task instructions and selective attention) remain poorly understood, both in these models and in the brain. To address this gap, we trained biologically inspired RNNs on complex cognitive tasks that require adaptive integration of these factors. Through extensive dynamical systems analyses, we show that our RNN model seamlessly integrates top-down signals with sensory signals to perform these tasks. Furthermore, through comprehensive local connectivity analyses, we identified inhibitory feedback signals that efficiently modulate bottom-up sensory coding in a task-driven manner. Finally, we introduced an anatomical constraint in which a specific subgroup of neurons receives the sensory input signal, effectively creating a designated sensory area within the RNN. Under this constraint, we show that these "sensory" neurons multiplex and dynamically combine bottom-up and top-down information. These findings are consistent with recent experimental results highlighting such integration as a key factor in flexible decision making. Overall, our work provides a framework for generating testable hypotheses about the hierarchical encoding of task-relevant information.
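To make the anatomical constraint concrete, the sketch below shows one way such a designated sensory area could be implemented: a fixed binary mask on the input weight matrix of a rate-based RNN, so that only a subgroup of units receives the external sensory signal. This is a minimal illustration, not the authors' actual implementation; the class name, the sensory fraction, the integration constant, and all other parameters are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

class SensoryAreaRNN(nn.Module):
    """Rate RNN in which only a designated subgroup of units
    (the "sensory area") receives the external input signal.
    Hypothetical sketch; all sizes and constants are assumed."""

    def __init__(self, n_units=200, n_inputs=2, n_outputs=1,
                 sensory_frac=0.2, alpha=0.1):
        super().__init__()
        self.alpha = alpha  # dt/tau: Euler integration step (assumed)
        self.w_in = nn.Parameter(0.1 * torch.randn(n_units, n_inputs))
        self.w_rec = nn.Parameter(torch.randn(n_units, n_units) / n_units**0.5)
        self.w_out = nn.Parameter(0.1 * torch.randn(n_outputs, n_units))
        # Anatomical constraint: zero the input weights of all but the
        # first sensory_frac of units, creating a designated sensory area.
        mask = torch.zeros(n_units, 1)
        mask[: int(sensory_frac * n_units)] = 1.0
        self.register_buffer("in_mask", mask)

    def forward(self, u):
        # u: (time, batch, n_inputs); Euler integration of rate dynamics
        x = torch.zeros(u.shape[1], self.w_rec.shape[0])
        outputs = []
        for u_t in u:
            r = torch.sigmoid(x)  # firing rates
            x = (1 - self.alpha) * x + self.alpha * (
                r @ self.w_rec.T + u_t @ (self.in_mask * self.w_in).T
            )
            outputs.append(torch.sigmoid(x) @ self.w_out.T)
        return torch.stack(outputs)

# Usage: 100 time steps, batch of 8 trials, 2 sensory input channels
model = SensoryAreaRNN()
out = model(torch.randn(100, 8, 2))  # -> (100, 8, 1) readout over time
```

Applying the mask multiplicatively at every forward pass, rather than only at initialization, keeps the constraint exact throughout training, since gradients to the masked input weights are zeroed as well.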