Given a set of points in the Euclidean space R^n with n > 1, the pairwise distances between the points are determined by their spatial location and the metric d that we endow R^n with. Hence, the distance d(x, y) = δ between two points is fixed by the choice of x, y, and d. We study the related problem of fixing the value δ and the points x, y, and asking whether there is a topological metric d that realizes the desired distance δ. We show this problem to be solvable by constructing a metric that simultaneously gives desired pairwise distances between up to O(√n) many points in R^n. In particular, these distances can be chosen independently of any "natural" distance between the given points, such as the Euclidean one. Towards dropping the limit on how many points (at fixed locations) we can put at desired distances from one another, we then introduce the notion of an ε-semimetric d. This function has all the properties of a metric, but allows violations of the triangle inequality up to an additive error < ε. With this (mild) generalization of a topological metric, we formulate our main result: for all ε > 0, for all m ≥ 1, for any choice of m points y1, . . . , ym ∈ R^n, and for any chosen set of values {δij ≥ 0 : 1 ≤ i < j ≤ m}, there exists an ε-semimetric d : R^n × R^n → R such that d(yi, yj) = δij, i.e., the desired distances are realized, irrespective of the topology that the Euclidean or any other norm would induce. The order of quantifiers is important here: we first choose the accuracy ε by which our semimetric may violate the triangle inequality (while the other metric axioms continue to hold for d as usual), then fix the spatial locations of the points, and only after that step choose the distances that we wish to hold between them. We showcase our results by using them to "attack" unsupervised learning algorithms, specifically the k-Means and density-based (DBSCAN) clustering algorithms. These have manifold applications in artificial intelligence, and running them with externally provided distance measures constructed as shown here can make them produce results that are pre-determined and hence malleable. This demonstrates that the output of such clustering algorithms is only as trustworthy as the distance measure they are supplied with.
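The following is a minimal sketch of the kind of attack the abstract alludes to, under the assumption that the clustering algorithm accepts externally supplied pairwise distances (here via scikit-learn's DBSCAN with metric='precomputed'); the concrete values of m, eps, min_samples, and the prescribed δij are illustrative choices, not taken from the paper.

```python
# Sketch: pre-determining a DBSCAN clustering by supplying pairwise
# distances delta_ij chosen independently of the points' Euclidean layout.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
m = 6
points = rng.normal(size=(m, 2))     # fixed spatial locations y_1, ..., y_m (unused by DBSCAN below)

# Prescribed pairwise distances delta_ij: small within the two groups we want
# to force, large across them, regardless of where the points actually lie.
D = np.full((m, m), 10.0)
np.fill_diagonal(D, 0.0)
for group in ([0, 1, 2], [3, 4, 5]):
    for i in group:
        for j in group:
            if i != j:
                D[i, j] = 0.1        # "distance" within a chosen cluster

# DBSCAN only sees the supplied distance matrix, so the result follows the
# prescribed delta_ij rather than the Euclidean geometry of `points`.
labels = DBSCAN(eps=1.0, min_samples=2, metric="precomputed").fit_predict(D)
print(labels)                        # e.g. [0 0 0 1 1 1]
```

In this toy matrix the prescribed values even satisfy the triangle inequality exactly; the paper's ε-semimetric construction is what allows such prescriptions for arbitrary δij and arbitrarily many fixed points, at the cost of an additive slack of at most ε.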