Task offloading is a key concept in edge computing and the Internet of Things (IoT), where computation-intensive tasks are offloaded to more resource-rich remote devices. Task offloading offers several advantages, including longer battery life, lower latency, and better application performance. A task offloading method determines whether parts of an application should run locally or be offloaded for remote execution. The offloading decision is influenced by several factors, including application properties, network conditions, hardware features, and mobility, which together shape the offloading system's operational environment. This study provides a thorough examination of current task offloading and resource allocation techniques in edge computing, covering offloading strategies, algorithms, and the factors that influence offloading. Offloading strategies fall into two types: full offloading and partial offloading. The algorithms for task offloading and resource allocation are then categorized into two groups: machine learning algorithms and non-machine learning algorithms. Under machine learning, we examine and elaborate on supervised learning, unsupervised learning, and reinforcement learning (RL). Under non-machine learning algorithms, we elaborate on (non-)convex optimization, Lyapunov optimization, game theory, heuristic algorithms, dynamic voltage scaling, Gibbs sampling, and Generalized Benders Decomposition (GBD). Finally, we highlight and discuss open research challenges and issues in edge computing.
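To make the offloading decision concrete, the minimal sketch below compares an estimated local execution latency against the transmission-plus-remote-execution latency for a binary offload/no-offload choice, which is one common way such decisions are modeled. All parameter names and numeric values (task_cycles, data_bits, local_cpu_hz, edge_cpu_hz, uplink_bps) are illustrative assumptions, not a definitive model drawn from the surveyed works.

```python
# Minimal sketch of a binary task-offloading decision based on latency.
# All names and numbers below are illustrative assumptions, not a
# definitive model from the survey.

def local_latency(task_cycles: float, local_cpu_hz: float) -> float:
    """Estimated time to execute the task on the local device (seconds)."""
    return task_cycles / local_cpu_hz

def offload_latency(task_cycles: float, data_bits: float,
                    uplink_bps: float, edge_cpu_hz: float) -> float:
    """Estimated time to upload the task's input data and run it at the edge."""
    return data_bits / uplink_bps + task_cycles / edge_cpu_hz

def should_offload(task_cycles: float, data_bits: float,
                   local_cpu_hz: float, edge_cpu_hz: float,
                   uplink_bps: float) -> bool:
    """Offload when the estimated remote path beats local execution."""
    remote = offload_latency(task_cycles, data_bits, uplink_bps, edge_cpu_hz)
    local = local_latency(task_cycles, local_cpu_hz)
    return remote < local

if __name__ == "__main__":
    # Hypothetical task: 2e9 CPU cycles, 1 MB (8e6 bits) of input data.
    print(should_offload(task_cycles=2e9, data_bits=8e6,
                         local_cpu_hz=1e9, edge_cpu_hz=10e9,
                         uplink_bps=50e6))  # True: ~0.36 s remote vs 2.0 s local
```

In practice the cost function would also weigh energy consumption, network variability, and mobility, which is precisely why the factors listed above make the offloading decision problem nontrivial.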