<p><strong>Over the past five years, language models have improved dramatically at generating human-passable content, high-level reasoning, and factual recall. In particular, OpenAI's release of ChatGPT has made interacting with language models markedly more accessible.</strong></p><p>However, the tendency of language models to fabricate facts, or hallucinate, remains a severe limitation for both safety and efficacy. This research explores how AI assistance based on generative language models can be utilised effectively, with a focus on the ethical implications of these systems.</p><p>A series of four experiments was conducted under the Critical Making methodology to emphasise exploration of the ethical implications of generative language models. The results of these experiments indicate the prevalence of hallucinations in current state-of-the-art language models and show how vector databases can be utilised to prevent these hallucinations, reduce biases, and better align language models with our intended values.</p><p>This thesis suggests that AI should be used not to replace humans but as a tool to empower them, acknowledging the potential challenges involved while proposing strategies and recommendations for the ethical use of AI technologies.</p>