The rapid growth of Internet of Things (IoT) devices has significantly increased reliance on sensor-generated data, which are essential to a wide range of systems and services. Wireless sensor networks (WSNs), crucial to this ecosystem, are often deployed in diverse and challenging environments, making them susceptible to faults such as software bugs, communication breakdowns, and hardware malfunctions. These issues can compromise data accuracy, stability, and reliability, ultimately jeopardizing system security. While advanced sensor fault detection methods in WSNs leverage machine learning to achieve high accuracy, they typically rely on centralized learning and face scalability and privacy challenges, especially when transferring large volumes of data. In our experimental setup, we employ a decentralized approach using federated learning with long short-term memory (FedLSTM) for sensor fault detection in WSNs, thereby preserving client privacy. This study uses temperature data augmented with synthetic sensor data to simulate six common sensor faults: bias, drift, spike, erratic, stuck, and data-loss. We evaluate the performance of FedLSTM against a centralized approach in terms of accuracy, precision, sensitivity, and F1-score. Additionally, we analyze the impact of varying client participation rates and the number of local training epochs. In federated learning environments, comparative analysis with established models such as the one-dimensional convolutional neural network and the multilayer perceptron demonstrates the promising results of FedLSTM in maintaining client privacy while reducing communication overhead and server load.
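To make the six fault categories concrete, the sketch below injects each of them into a clean synthetic temperature trace. This is a minimal illustration of the general fault models named in the abstract, not the paper's actual data pipeline; all magnitudes, window positions, and the `inject_fault` helper itself are illustrative assumptions.

```python
import numpy as np

def inject_fault(signal, fault, rng=None):
    """Return a copy of `signal` with one common sensor fault injected.

    Fault models (magnitudes are illustrative assumptions):
      bias      - constant offset added to every reading
      drift     - slowly growing offset (linear ramp)
      spike     - a few isolated large outliers
      erratic   - amplified random noise on every reading
      stuck     - readings frozen at one value over an interval
      data-loss - readings dropped (NaN) over an interval
    """
    rng = rng or np.random.default_rng(0)
    s = signal.astype(float).copy()
    n = len(s)
    if fault == "bias":
        s += 5.0
    elif fault == "drift":
        s += np.linspace(0.0, 10.0, n)
    elif fault == "spike":
        idx = rng.choice(n, size=max(1, n // 50), replace=False)
        s[idx] += rng.choice([-1.0, 1.0], size=idx.size) * 15.0
    elif fault == "erratic":
        s += rng.normal(0.0, 4.0, n)
    elif fault == "stuck":
        start = n // 3
        s[start:start + n // 4] = s[start]
    elif fault == "data-loss":
        start = n // 2
        s[start:start + n // 10] = np.nan
    else:
        raise ValueError(f"unknown fault type: {fault}")
    return s

# Clean synthetic temperature trace around 20 degrees C.
clean = 20.0 + np.sin(np.linspace(0, 6 * np.pi, 300))
biased = inject_fault(clean, "bias")
lossy = inject_fault(clean, "data-loss")
```

In a setup like the one described, each simulated client would label such windows as faulty or healthy and train its local LSTM on them, sharing only model updates with the server.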