Clustering problems often arise in fields such as data mining, machine learning, and computational biology, where the task is to group a collection of objects into similar groups with respect to a similarity measure. For example, clustering can be used to group genes with related expression patterns. Covering problems are another important class of problems, where the task is to select a subset of objects from a larger set such that the objects in the subset "cover" (or contain) a given set of elements. Covering problems have found applications in various fields, including wireless and sensor networks, VLSI, and image processing. For example, covering can be used to find locations for the minimum number of mobile towers needed to serve all the customers of a region. In this dissertation, we consider an interesting collection of geometric clustering and covering problems, which are modeled as optimization problems. These problems are known to be NP-hard, i.e., no efficient algorithms that return optimal solutions are expected to exist for them. Thus, we focus our efforts on designing efficient approximation algorithms for these problems that yield near-optimal solutions. In this work, we study three clustering problems, k-means, k-clustering, and Non-Uniform k-center, and one covering problem, Metric Capacitated Covering.

k-means is one of the most studied clustering problems and probably the most frequently used in practical applications. In this problem, we are given a set of points in a Euclidean space, and we want to choose k center points from the same Euclidean space. Each input point is assigned to its nearest chosen center, and the points assigned to a center form a cluster. The cost per input point is the square of its distance from its nearest center. The total cost is the sum of the costs of the points. The goal is to choose k center points so that the total cost is minimized. We give a local search based algorithm for this problem that always returns a solution of cost within a (1 + ε)-factor of the optimal cost for any ε > 0. However, our algorithm uses (1 + ε)k center points. The best known approximation factor before our work was about 9, achieved using exactly k centers. The result appears in Chapter 2.

k-clustering is another popular clustering problem, studied mainly by the theory community. In this problem, each cluster is represented by a ball in the input metric space. We would like to choose k balls whose union contains all the input points. The cost of each ball is its radius raised to the power α, for some given parameter α ≥ 1. The total cost is the sum of the costs of the chosen k balls. The goal is to find k balls such that the total cost is minimized. We give a probabilistic metric partitioning based algorithm for this problem that always returns a solution of cost within a (1 + ε)-factor of the optimal cost for any ε > 0. However, our algorithm uses (1 + ε)k balls, and the running time is quasi-polynomial. The best known approximation in polynomial time is c^α using exactly k balls, where c is a const...