'Blade Runner - Autoencoded' is a film made by training an autoencoder, a type of generative neural network, to recreate frames from the film Blade Runner. The autoencoder is made to reinterpret every individual frame, reconstructing it from its memory of the film. The result is a hazy, dreamlike version of the original. The project explores the aesthetic qualities of the disembodied gaze of the neural network. The autoencoder is also capable of reconstructing images from films it has never seen, based on what it has learned from watching Blade Runner.

Introduction

Reconstructing videos from prior visual information has a number of scientific and artistic precedents. Casey and Grierson [1] present a system for real-time matching of an audio input stream against a database of continuous audio or video, demonstrated in an application called REMIX-TV. Grierson builds on this work with PLUNDERMATICS [2], adding more sophisticated methods for feature extraction, segmentation and filtering. Mital, Grierson and Smith [3] extend the approach to synthesise a target image from a corpus of images: the target is assembled from fragments drawn from a database extracted from the corpus, matched on shape and colour similarity (a toy version of this matching is sketched at the end of this section). Mital uses this technique in a series of artworks called 'YouTube Smash Up' [4], synthesising the week's most popular YouTube video from fragments of other videos trending on the platform. Another, somewhat related approach, and a key influence on this project, is research into reconstructing what people are watching in an MRI scanner solely from recorded brain scans [5].
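To give a flavour of the corpus-based synthesis behind [3] and [4], here is a minimal sketch, not the actual system: fragments are fixed square patches described only by their mean colour, whereas the real work matches on richer shape and colour descriptors over irregular fragments. All function names and the patch size are illustrative assumptions.

```python
# Toy corpus-based image synthesis: rebuild a target image from the
# closest-matching patches found in a database of corpus fragments.
import numpy as np

def extract_patches(image, size):
    # Cut an H x W x 3 image into non-overlapping size x size patches.
    h, w, _ = image.shape
    patches = [
        image[y:y + size, x:x + size]
        for y in range(0, h - size + 1, size)
        for x in range(0, w - size + 1, size)
    ]
    return np.stack(patches)

def patch_features(patches):
    # Mean RGB colour per patch; a deliberately crude stand-in for the
    # shape-and-colour descriptors used in the actual work.
    return patches.reshape(len(patches), -1, 3).mean(axis=1)

def synthesise(target, corpus_images, size=16):
    # Build the fragment database from the corpus.
    database = np.concatenate([extract_patches(img, size) for img in corpus_images])
    db_feats = patch_features(database)

    out = target.copy()
    h, w, _ = target.shape
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            feat = patch_features(target[None, y:y + size, x:x + size])
            # Nearest neighbour in feature space replaces the target patch.
            idx = np.argmin(((db_feats - feat) ** 2).sum(axis=1))
            out[y:y + size, x:x + size] = database[idx]
    return out
```

Even matching on mean colour alone produces recognisable mosaics; adding shape features (edge or gradient histograms, say) moves the sketch closer to the matching described in [3].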
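The approach taken in 'Blade Runner - Autoencoded' is different in kind: no pixels are copied from a database; every output frame is generated by a network that has compressed the film into its weights. The following is a minimal sketch of such a frame autoencoder, assuming PyTorch; the layer sizes, the 96x96 frame resolution and the latent dimensionality are illustrative choices, not the project's actual model, which this section does not specify.

```python
# A convolutional autoencoder trained to reproduce film frames, then used
# to "remember" each frame by passing it through the trained model.
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    def __init__(self, latent_dim=200):
        super().__init__()
        # Encoder: 3 x 96 x 96 RGB frame -> small latent code.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # -> 32 x 48 x 48
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # -> 64 x 24 x 24
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), # -> 128 x 12 x 12
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 12 * 12, latent_dim),
        )
        # Decoder: latent code -> reconstructed frame in [0, 1].
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 12 * 12),
            nn.ReLU(),
            nn.Unflatten(1, (128, 12, 12)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # -> 24 x 24
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # -> 48 x 48
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # -> 96 x 96
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_step(model, frames, optimiser):
    # One reconstruction step: the loss pushes output frames towards inputs.
    optimiser.zero_grad()
    loss = nn.functional.mse_loss(model(frames), frames)
    loss.backward()
    optimiser.step()
    return loss.item()

def reconstruct(model, frames):
    # Pass frames (a batch of 3 x 96 x 96 tensors) through the trained model.
    model.eval()
    with torch.no_grad():
        return model(frames)
```

Once trained on the film's frames, running every frame back through reconstruct() yields the hazy, "remembered" version described in the opening paragraph; running frames from an unseen film through the same model gives the reinterpretations of unfamiliar material mentioned there.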