With the Internet of Things paradigm, the data generated by the rapidly increasing number of connected devices leads to new possibilities, such as using machine learning for activity recognition in smart environments. However, it also introduces several challenges. The sensors of different devices might be mobile and of different types, i.e. there is a need to handle streaming data from a dynamic and heterogeneous set of sensors. In machine learning, performance is often linked to the availability and quality of annotated data. Annotating data is generally costly, and the problem becomes even more challenging when there is little or no annotated data to train the model on at the start of learning.

To handle these issues, we implement interactive and adaptive systems. By including a human in the loop, an approach we refer to as interactive machine learning, input from users can be utilized to build the model. The input used in interactive machine learning typically consists of annotations of the data, i.e. correctly labelled data points. It is generally assumed that the user always provides correct labels in accordance with the chosen interactive learning strategy. In many real-world applications, however, this assumption is not realistic, as users might provide incorrect labels, or fail to provide labels when the strategy expects them to.

In this thesis we explore which types of interactive learning strategies are possible in the given scenario and how they affect performance, as well as how the choice of machine learning algorithm affects performance. We also study how a user who is not always reliable, i.e. who does not always provide a correct label when expected to, can affect performance. We propose a taxonomy of interactive online machine learning strategies and test how the different strategies affect performance through experiments on multiple datasets. Simulated experiments are compared to experiments with human participants to verify the results.

The findings show that the overall best-performing interactive learning strategy is one where the user provides labels when the model's current estimates are incorrect, but that the best-performing machine learning algorithm depends on the problem scenario. The experiments also show that decreased user reliability leads to decreased performance, especially when there is a limited amount of labelled data. The robustness of the machine learning algorithms differs, where e.g. the Naïve Bayes classifier is better at handling lower user reliability.

We also present a systematic literature review on machine teaching, a subfield of interactive machine learning in which the human takes a proactive role in the interaction. The study shows that the area of machine teaching is rapidly evolving, with an increasing number of publications in recent years. However, as the field is still maturing, there exist several open challenges that would benefit from further exploration, e.g. how human factors can affect performance.
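As a concrete illustration of the error-driven strategy and the user-reliability factor described above, the following is a minimal sketch, not the implementation used in the thesis: a simulated user supplies the correct label only when the model's current estimate is wrong, and responds with a configurable probability. The dataset, the GaussianNB learner, the warm-start step, and the reliability value are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the thesis implementation) of the
# error-driven interactive online learning strategy: a simulated user
# supplies the correct label only when the model's current estimate is
# wrong, and responds with a configurable probability ("reliability").
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(seed=0)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
classes = np.unique(y)

model = GaussianNB()   # incremental learner supporting partial_fit
reliability = 0.8      # probability that the user responds when queried

# Assumption: one labelled example per class is available before streaming.
for c in classes:
    first = np.flatnonzero(y == c)[0]
    model.partial_fit(X[first].reshape(1, -1), [c], classes=classes)

n_correct = 0
for x_i, true_label in zip(X, y):
    x_i = x_i.reshape(1, -1)
    estimate = model.predict(x_i)[0]
    if estimate == true_label:
        n_correct += 1
    elif rng.random() < reliability:
        # Error-driven strategy: a label is requested only when the current
        # estimate is incorrect; an unreliable user may fail to provide it.
        model.partial_fit(x_i, [true_label], classes=classes)

print(f"Online accuracy: {n_correct / len(X):.3f}")
```

Lowering `reliability` in this sketch mimics, in miniature, the unreliable-user setting studied in the thesis: fewer of the requested labels arrive, which slows learning most while little labelled data has accumulated.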