“…Let $\Theta(x, y) = 0$ for all $x, y \in C$, $S_n = I$ for all $n \in \mathbb{N}$, $\gamma \equiv 1$, $B \equiv I$ and $f := x$ in Theorem 3.1; then $\mathrm{MEP}(\Theta, \varphi) = \operatorname{Argmin}(\varphi)$. Theorem 3.1 then applies to the iterative sequence $\{x_n\}$ defined by $x_1 = x \in C$ chosen arbitrarily and

$$
\begin{cases}
u_n = \operatorname{argmin}_{y \in C} \left\{ \varphi(y) + \dfrac{1}{2r_n}\|y - x_n\|^2 \right\},\\[4pt]
y_n = P_C(u_n - \lambda_n A u_n),\\[2pt]
x_{n+1} = \alpha_n x + \beta_n x_n + (1 - \beta_n - \alpha_n)\, P_C(u_n - \lambda_n A y_n), \quad \forall n \ge 1,
\end{cases}
\tag{3.37}
$$

or, in its simpler form,

$$
x_{n+1} = \alpha_n x + \beta_n x_n + (1 - \beta_n - \alpha_n)\, u_n, \quad \forall n \ge 1.
\tag{3.38}
$$

We remark that the algorithms (3.37) and (3.38) are variants of the proximal method for optimization problems introduced and studied by Martinet [44], Rockafellar [45], Ferris [46] and many others.…”
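To make the proximal structure of scheme (3.38) concrete, the following is a minimal sketch in Python. It assumes an illustrative objective $\varphi(y) = (y-2)^2$ on $C = \mathbb{R}$ (not from the source), for which the proximal step $u_n = \operatorname{argmin}_y \{\varphi(y) + \frac{1}{2r_n}\|y - x_n\|^2\}$ has a closed form; the parameter choices $\alpha_n = 1/(n+1)$, $\beta_n = 0.5$, $r_n = 1$ are likewise hypothetical stand-ins for conditions the theorem would impose.

```python
def prox_phi(x, r):
    # Proximal map of the illustrative phi(y) = (y - 2)^2:
    # argmin_y { (y - 2)^2 + (1 / (2r)) * (y - x)^2 },
    # solved in closed form by setting the derivative to zero.
    return (4.0 * r + x) / (2.0 * r + 1.0)

def scheme_3_38(x1, n_iters=20000):
    # Iteration (3.38): x_{n+1} = a_n*x + b_n*x_n + (1 - b_n - a_n)*u_n,
    # where u_n is the proximal step and x = x1 is the fixed anchor point.
    x = x1
    for n in range(1, n_iters + 1):
        alpha = 1.0 / (n + 1)   # alpha_n -> 0 with divergent sum (assumed)
        beta = 0.5              # beta_n held constant in (0, 1) (assumed)
        r = 1.0                 # r_n bounded away from zero (assumed)
        u = prox_phi(x, r)
        x = alpha * x1 + beta * x + (1.0 - beta - alpha) * u
    return x

# The iterates approach the minimizer of phi, here Argmin(phi) = {2}.
print(scheme_3_38(0.0))
```

With these choices the anchor term $\alpha_n x$ vanishes as $n \to \infty$ and the proximal steps drive the iterates toward $\operatorname{Argmin}(\varphi)$, which is the behavior the convergence result for (3.38) formalizes.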