We extend the Faulty RAM model of Finocchi and Italiano (2008) by adding a safe memory of arbitrary size S, and we then derive tradeoffs between the performance of resilient algorithmic techniques and the size of the safe memory. Let δ and α denote, respectively, the maximum number of faults that can occur during the execution of an algorithm and the actual number of faults that do occur, with α ≤ δ. We propose a resilient algorithm for sorting n entries that requires O(n log n + α(δ/S + log S)) time and uses Θ(S) safe memory words. Our algorithm outperforms previous resilient sorting algorithms, which do not exploit the available safe memory and require O(n log n + αδ) time. Finally, we exploit our sorting algorithm to derive a resilient priority queue. Our implementation uses Θ(S) safe memory words and Θ(n) faulty memory words to store n keys, and requires O(log n + δ/S) amortized time for each insert and deletemin operation. Our resilient priority queue thus improves on the O(log n + δ) amortized time required by the state of the art.
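As a rough illustration of the tradeoff (the instantiation S = δ below is only an assumed parameter choice used for this sketch, not a result stated above), substituting a safe memory of size S = δ into the sorting bound gives
\[
O\bigl(n \log n + \alpha(\delta/S + \log S)\bigr)\Big|_{S=\delta}
= O\bigl(n \log n + \alpha(1 + \log \delta)\bigr)
= O(n \log n + \alpha \log \delta),
\]
so the overhead per occurred fault drops from Θ(δ) in the previous O(n log n + αδ) bound to O(log δ).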