The parallel machine scheduling problem (PMSP) concerns the optimal assignment of a set of jobs to a collection of parallel machines, a formulation well suited to modern manufacturing environments. Deep reinforcement learning (DRL) has been widely employed to solve the PMSP. However, most existing DRL-based frameworks still suffer from limited generalizability and scalability; in particular, their state and action designs rely heavily on manual feature engineering. To bridge these gaps, we propose a practical reinforcement learning-based framework for a PMSP with new job arrivals and family setup constraints. We design a variable-length state matrix containing complete job and machine information, which enables the DRL agent to extract features autonomously from raw data and make decisions from a global perspective. To process this novel state matrix efficiently, we modify a Transformer model to represent the DRL agent, allowing the new state representation to be leveraged effectively. The resulting framework delivers high-quality, robust solutions while substantially reducing the manual effort traditionally required in scheduling tasks. In the numerical experiments, we first demonstrate the stability of the proposed agent during training, and then compare the trained agent on 192 instances against several existing approaches: a DRL-based approach, a metaheuristic algorithm, and a dispatching rule. The extensive experimental results demonstrate the scalability of our approach and its effectiveness across a variety of scheduling scenarios. In conclusion, our approach solves scheduling problems with high efficiency and flexibility, paving the way for the application of DRL to complex and dynamic scheduling problems.
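The abstract does not fix implementation details, but the core mechanism it describes, a variable-length job-by-feature state matrix encoded by a Transformer that scores candidate jobs as actions, can be sketched. The following is a minimal PyTorch sketch under assumed dimensions and feature layout; the class name, feature count, and masking scheme are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a Transformer-based scheduling agent of the kind the
# abstract describes: each row of the state matrix encodes one job (with
# machine/setup features appended), the number of rows varies per instance,
# and the agent emits one action logit per job. All dimensions and names
# here are assumptions for illustration.
import torch
import torch.nn as nn


class TransformerSchedulingAgent(nn.Module):
    def __init__(self, feat_dim: int = 8, d_model: int = 64,
                 nhead: int = 4, num_layers: int = 2):
        super().__init__()
        self.embed = nn.Linear(feat_dim, d_model)  # project raw features
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.score = nn.Linear(d_model, 1)  # per-job action logit

    def forward(self, state: torch.Tensor, pad_mask: torch.Tensor):
        # state: (batch, n_jobs, feat_dim); n_jobs varies across instances,
        # so shorter instances are zero-padded and masked out below.
        h = self.encoder(self.embed(state), src_key_padding_mask=pad_mask)
        logits = self.score(h).squeeze(-1)          # (batch, n_jobs)
        return logits.masked_fill(pad_mask, -1e9)   # never select padding


# Usage: two instances padded to 5 jobs, each job described by 8 features.
agent = TransformerSchedulingAgent()
state = torch.randn(2, 5, 8)
pad_mask = torch.tensor([[False] * 5,
                         [False, False, False, True, True]])  # True = pad
action = agent(state, pad_mask).argmax(-1)  # index of next job per instance
```

Because the Transformer's attention operates over a set of job rows rather than a fixed-size vector, the same parameters apply to any number of jobs, which is one plausible reading of the scalability and generalizability claims above.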