A plethora of sensors embedded in wearable, mobile, and infrastructure devices allows us to seamlessly capture large parts of our daily activities and experiences. It is not hard to imagine that such data could be used to support human memory in the form of automatically generated memory cues, e.g., images, that help us remember past events. Such a vision of pervasive "memory-augmentation systems", however, comes with significant privacy and security implications, chief among them the threat of memory manipulation: without strong guarantees about the provenance of captured data, attackers could manipulate our memories by deliberately injecting, removing, or modifying captured data. In this work, we introduce the novel threat of human memory manipulation in memory-augmentation systems. We then present a practical approach that addresses key memory-manipulation threats by securing the captured memory streams. Finally, we report evaluation results from a prototype secure camera platform that we built.