We consider the problem of evaluating an aggregation query, that is, a sum-of-sums or sum-of-products query, subject to additive inequalities. Such aggregation queries, with a small number of additive inequalities, arise naturally in many applications, particularly in machine learning. We give a relatively complete characterization of the computational complexity of such problems. We first show that the problem is NP-hard even in the case of a single additive inequality; thus we turn to approximation. Our main result is an efficient algorithm for approximating, with arbitrarily small relative error, many natural aggregation queries with one additive inequality, and we give examples of natural queries that can be efficiently solved using this algorithm. In contrast, we show that the situation with two additive inequalities is quite different: it is NP-hard to evaluate even simple aggregation queries with two additive inequalities to within any bounded relative error. We end by considering the problem of computing the gradient of the objective function in the Support Vector Machine (SVM) problem, a canonical machine learning problem. While computing the gradient for SVM can be reduced to evaluating an aggregation query with one additive inequality, our algorithm is not directly applicable due to what we call the "subtraction problem". However, we show how to circumvent the subtraction problem within the context of SVM, obtaining a gradient-descent algorithm that computes an approximately correct solution under an alternative notion of approximate correctness, which may be of independent interest.
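To make the query class concrete, the following is a minimal brute-force sketch (not the paper's algorithm, and exponential in the number of relations) of a sum-of-products aggregation query restricted by a single additive inequality; all function and variable names here are illustrative assumptions, not notation from the paper.

```python
from itertools import product

def sum_of_products(tables, weight_fns, ineq_fns, threshold):
    """Brute-force evaluation of a sum-of-products aggregation query
    subject to one additive inequality:

        sum over joined tuples (t_1, ..., t_k) of  prod_i weight_fns[i](t_i),
        restricted to tuples with  sum_i ineq_fns[i](t_i) <= threshold.

    This enumerates the full cross product, so it only serves to define
    the semantics of the query, not to evaluate it efficiently.
    """
    total = 0.0
    for combo in product(*tables):
        # The additive inequality: a sum of per-tuple terms under a threshold.
        if sum(g(t) for g, t in zip(ineq_fns, combo)) <= threshold:
            prod_term = 1.0
            for f, t in zip(weight_fns, combo):
                prod_term *= f(t)
            total += prod_term
    return total

# Example: two unary tables; sum x * y over pairs with x + y <= 5.
A = [1, 2, 3]
B = [2, 4]
result = sum_of_products(
    [A, B],
    weight_fns=[lambda x: x, lambda y: y],  # product of the two values
    ineq_fns=[lambda x: x, lambda y: y],    # additive inequality x + y <= 5
    threshold=5,
)
# result is 16.0: pairs (1,2), (1,4), (2,2), (3,2) contribute 2 + 4 + 4 + 6.
```

A sum-of-sums query is obtained by replacing the inner product with a sum; the hardness discussed above means that no efficient exact algorithm is expected even for one inequality, which is why the paper targets approximation with small relative error.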