The growing usage of digital microphones has generated an increased interest in the topic of Acoustic Anomaly Detection (AAD). Indeed, there are several real-world AAD application domains, including working machines and in-vehicle intelligence (the main target of this research project). This paper introduces three deep AutoEncoders (AEs) for unsupervised AAD tasks, namely a Dense AE, a Convolutional Neural Network (CNN) AE, and a Long Short-Term Memory (LSTM) AE. To tune the deep learning architectures, development data was adopted from public domain audio datasets related to working machines. A large set of computational experiments was conducted, showing that the three proposed deep autoencoders, when combined with a mel-spectrogram sound preprocessing, are quite competitive and outperform a recently proposed AE baseline. Next, in a second experimental stage, aiming to address the final in-vehicle passenger safety goal, the three AEs were adapted to learn from in-vehicle normal audio, assuming three realistic scenarios that were generated by a synthetic audio mixture tool. In general, a high-quality AAD discrimination was obtained: 72% to 91% for working machine data; and 78% to 81% for in-vehicle audio. In conjunction with an automotive company, an in-vehicle AAD intelligent system prototype was further developed, aiming to test a selected model (the LSTM AE) during a pilot demonstration event that targeted the cough anomaly. Interesting results were obtained, with the AAD system presenting a high cough classification accuracy (e.g., 100% for front seat locations).
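
The following is a minimal sketch of the general unsupervised AAD pipeline summarized above: log-mel-spectrogram frames are fed to a Dense AE trained on normal audio only, and the per-frame reconstruction error serves as the anomaly score. All hyperparameters (n_mels=64, layer sizes, the 3-sigma threshold rule) and the file names are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: mel-spectrogram frames -> Dense AE -> reconstruction error as anomaly score.
# Hyperparameters and file names below are assumptions for illustration only.
import numpy as np
import librosa
from tensorflow import keras
from tensorflow.keras import layers

def mel_frames(wav_path, sr=16000, n_mels=64):
    """Load audio and return log-mel frames, one feature vector per frame."""
    y, sr = librosa.load(wav_path, sr=sr)
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                       hop_length=512, n_mels=n_mels)
    return librosa.power_to_db(S, ref=np.max).T  # shape: (frames, n_mels)

def build_dense_ae(input_dim, latent_dim=8):
    """Symmetric Dense AE trained to reconstruct normal-sound frames."""
    model = keras.Sequential([
        layers.Input(shape=(input_dim,)),
        layers.Dense(64, activation="relu"),
        layers.Dense(latent_dim, activation="relu"),  # bottleneck
        layers.Dense(64, activation="relu"),
        layers.Dense(input_dim, activation="linear"),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Train on normal audio only; anomalous sounds are never seen during training.
X_train = np.vstack([mel_frames(p) for p in ["normal_clip.wav"]])  # hypothetical file
ae = build_dense_ae(X_train.shape[1])
ae.fit(X_train, X_train, epochs=50, batch_size=128, verbose=0)

# Derive a decision threshold from the training reconstruction errors
# (illustrative 3-sigma rule; in practice, tuned on development data).
train_err = np.mean(np.square(X_train - ae.predict(X_train, verbose=0)), axis=1)
threshold = train_err.mean() + 3 * train_err.std()

# Score a test clip: frames with high reconstruction error are flagged as anomalous.
X_test = mel_frames("test_clip.wav")  # hypothetical file
errors = np.mean(np.square(X_test - ae.predict(X_test, verbose=0)), axis=1)
print("anomalous frames:", int(np.sum(errors > threshold)))
```

The CNN and LSTM variants described in the paper follow the same train-on-normal, score-by-reconstruction-error scheme, differing only in the encoder/decoder layers.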