Multi-agent simulation has attracted much attention in the field of computer animation in recent decades for its ability to model interactions between autonomous micro-level entities. It makes wide use of deep reinforcement learning (DRL), which allows environments and their agents to be modeled at a complexity approaching that of the real world and of humans, with applications in robotics and computer animation, among others. However, DRL-based multi-agent simulation faces additional challenges: agents must generalize over high-dimensional observations and relate them to a high-dimensional action space while maximizing long-term cumulative reward. As a consequence, DRL systems with numerous interacting agents seldom consider skeleton-level action spaces. To this end, we present skeleton-level control for multi-agent simulation with DRL. Our method procedurally generates real-time, collision-free simulations directly on individual agents with a high-dimensional skeleton-level action space. The state in our DRL system includes the velocity of the agent, its destination, and the status of its joints, as well as vision-based information about the environment and other agents. Our reward function encourages motion toward the target destination and penalizes collisions. We provide extensive experiments showing that agents reach their goals through skeleton motion while successfully avoiding collisions with one another.
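
As a rough illustration of the reward structure described above, the following minimal sketch combines a progress-toward-goal term with a collision penalty; the weights, penalty value, and function signature are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def compute_reward(prev_pos, curr_pos, goal_pos, collided,
                   w_progress=1.0, collision_penalty=5.0):
    """Illustrative reward: progress toward the goal minus a collision penalty.

    prev_pos, curr_pos, goal_pos: agent/goal positions as numpy arrays.
    collided: True if the agent intersected another agent or an obstacle.
    The weights and penalty are hypothetical placeholders.
    """
    # Reward the reduction in distance to the goal since the previous step.
    progress = (np.linalg.norm(goal_pos - prev_pos)
                - np.linalg.norm(goal_pos - curr_pos))
    reward = w_progress * progress
    # Penalize collisions with other agents or the environment.
    if collided:
        reward -= collision_penalty
    return reward
```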