Network slicing (NS) technology promises to deliver a variety of latency-sensitive services over shared Mobile Edge-Cloud (MEC) infrastructure by creating a customized slice for each application. However, to handle users' dynamic slice requests, the infrastructure provider (InP) must perform online slice acceptance checks and scale slices when required. Under the underlying business model, network revenue depends on which slices are accepted and on the infrastructure provisioned for them; if an InP does not provide additional resources to an active slice, the network incurs a penalty and its revenue degrades. Reinforcement learning is a natural fit for this problem, but most reinforcement learning methods struggle in large or continuous state-action spaces. This work presents a reinforcement learning-based method called approximate Q-learning (AQL) for intelligent slice acceptance control (SAC) that maximizes utility in MEC for latency-sensitive services. The core idea of AQL is based on Q-learning, and we extend several of its components to cope with large state and action spaces. We evaluate the performance of AQL in terms of convergence, cumulative reward, resource utilization, and revenue. The results show that the proposed approach achieves acceptable performance.
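To make the Q-learning-with-approximation idea concrete, the following is a minimal sketch, not the paper's actual AQL algorithm: semi-gradient Q-learning with a linear function approximator applied to a toy binary slice-admission decision. The feature layout, reward shape, and environment dynamics are illustrative assumptions; the point is that a parameterized Q-function generalizes across a continuous resource state instead of enumerating a table of states.

```python
import numpy as np

# Hypothetical sketch: Q-learning with linear function approximation
# for a binary slice-admission decision (accept/reject). Features,
# reward, and environment are assumptions for illustration only.

rng = np.random.default_rng(0)

N_FEATURES = 4          # e.g. CPU load, bandwidth load, request size, latency class
ACTIONS = (0, 1)        # 0 = reject slice request, 1 = accept
ALPHA, GAMMA, EPS = 0.05, 0.9, 0.1

# One weight vector per action: Q(s, a) = w[a] . phi(s)
w = np.zeros((len(ACTIONS), N_FEATURES))

def q_value(phi, a):
    return w[a] @ phi

def choose_action(phi):
    # epsilon-greedy exploration over the two admission actions
    if rng.random() < EPS:
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax([q_value(phi, a) for a in ACTIONS]))

def toy_step(phi, a):
    """Illustrative environment: accepting earns revenue but incurs an
    SLA penalty when utilization (feature 0) is already high."""
    if a == 1:
        reward = 1.0 - 2.0 * (phi[0] > 0.8)   # penalized if overloaded
    else:
        reward = 0.0                           # rejection: no revenue, no penalty
    next_phi = rng.random(N_FEATURES)          # next slice request arrives
    return reward, next_phi

phi = rng.random(N_FEATURES)
for _ in range(10_000):
    a = choose_action(phi)
    r, next_phi = toy_step(phi, a)
    # Semi-gradient Q-learning update on the linear approximator
    td_target = r + GAMMA * max(q_value(next_phi, b) for b in ACTIONS)
    td_error = td_target - q_value(phi, a)
    w[a] += ALPHA * td_error * phi             # gradient of w[a].phi w.r.t. w[a] is phi
    phi = next_phi

print("learned weights:", w)
```

With this toy reward, the learned weights drive the policy toward rejecting requests when the utilization feature is high, which is the qualitative behavior an admission controller needs; the paper's AQL presumably uses a richer state, action, and reward design than this sketch.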