Despite breakthroughs in machine learning and computing infrastructure that have brought significant performance improvements to cognitive robotics, generating continuous task trajectories remains a challenge. Constraints on the physical capabilities of robots, changes in the environment, and long-horizon sequential dependencies within and between joints make the problem exceptionally hard. Many robots operate in structured, static work cells, completing extended series of subtasks. Conventional robot trajectory descriptors rely on symbolic rules encoded with human expertise; this requires skilled individuals and has significant limitations, as hand-crafted task descriptions are time-consuming to produce and, being static, adapt poorly to change. Reinforcement learning, by contrast, is an empirical approach that learns through iterative interaction with the environment, but the computational resources and infrastructure needed to reach convergence can be substantial, especially in complex environments with large action spaces. In this work, prior knowledge is injected through a dataset to reduce the search space of the symbolic trajectory learner. The proposed technique employs a probabilistic, data-efficient model, the generative adversarial network, which learns the underlying constraints, probability distributions and arbitrations, and generates a trajectory instance at each sampling step. This research also proposes a method for calculating robot path accuracy for extrinsic generative models. The model was evaluated using a custom-built dataset and the Robot Operating System, yielding encouraging results in robot path accuracy and the quality of generated samples.
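To make the adversarial trajectory-generation idea concrete, the following is a minimal sketch of a GAN that learns to produce fixed-length joint-space trajectories from a dataset of demonstrations. The architecture, dimensions, and names (latent_dim, n_joints, traj_len, Generator, Discriminator) are illustrative assumptions, not the paper's actual model.

```python
# Minimal GAN sketch for joint-space trajectory generation (PyTorch).
# All dimensions and layer sizes below are assumed for illustration only.
import torch
import torch.nn as nn

latent_dim, n_joints, traj_len = 32, 6, 50  # assumed dimensions

class Generator(nn.Module):
    """Maps a latent vector to a (traj_len x n_joints) joint trajectory."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, traj_len * n_joints), nn.Tanh(),  # joint values scaled to [-1, 1]
        )
    def forward(self, z):
        return self.net(z).view(-1, traj_len, n_joints)

class Discriminator(nn.Module):
    """Scores a trajectory as real (from the dataset) or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(traj_len * n_joints, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )
    def forward(self, traj):
        return self.net(traj.view(traj.size(0), -1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_traj):
    """One adversarial update on a batch of real demonstration trajectories."""
    batch = real_traj.size(0)
    z = torch.randn(batch, latent_dim)
    fake_traj = G(z)

    # Discriminator: push real trajectories toward 1, generated ones toward 0.
    d_loss = bce(D(real_traj), torch.ones(batch, 1)) + \
             bce(D(fake_traj.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to fool the discriminator into scoring fakes as real.
    g_loss = bce(D(fake_traj), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

After training, sampling a new trajectory amounts to drawing a latent vector and passing it through the generator, e.g. `G(torch.randn(1, latent_dim))`.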
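The abstract also mentions a method for computing robot path accuracy for extrinsic generative models. The exact metric is not specified here; the snippet below is only an assumed stand-in that compares a generated Cartesian path against a reference path by per-waypoint deviation, with a hypothetical tolerance parameter `tol`.

```python
# Illustrative path-accuracy measure: mean/max Euclidean deviation between a
# generated Cartesian path and a reference path, plus the fraction of waypoints
# within a tolerance. This is an assumption, not necessarily the paper's metric.
import numpy as np

def path_accuracy(generated, reference, tol=0.01):
    """generated, reference: (N, 3) arrays of Cartesian waypoints in metres."""
    deviations = np.linalg.norm(generated - reference, axis=1)
    return {
        "mean_deviation_m": float(deviations.mean()),
        "max_deviation_m": float(deviations.max()),
        "within_tolerance": float((deviations <= tol).mean()),
    }

# Example: a generated path that drifts slightly from a straight-line reference.
ref = np.linspace([0.0, 0.0, 0.0], [0.5, 0.0, 0.3], 50)
gen = ref + np.random.normal(scale=0.005, size=ref.shape)
print(path_accuracy(gen, ref))
```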