Mocking in the context of automated software tests allows testing program units in isolation. Designing realistic interactions between a unit and its environment, and understanding the expected impact of these interactions on the behavior of the unit, are two key challenges that software testers face when developing tests with mocks. In this paper, we propose to monitor an application in production to generate tests that mimic realistic execution scenarios through mocks. Our approach operates in three phases. First, we instrument a set of target methods for which we want to generate tests, as well as the methods that they invoke, which we refer to as mockable method calls. Second, in production, we collect data about the context in which target methods are invoked, as well as the parameters and the return value for each mockable method call. Third, offline, we analyze the production data to generate test cases with realistic inputs and mock interactions. The approach is automated and implemented in an open-source tool called RICK. We evaluate our approach with three real-world, open-source Java applications. RICK monitors the invocation of 128 methods in production across the three applications and captures their behavior. Next, RICK analyzes the production observations to generate test cases that include rich initial states and test inputs, mocks and stubs that recreate actual interactions between the method and its environment, as well as mock-based oracles. All the test cases are executable, and 52.4% of them successfully mimic the complete execution context of the target methods observed in production. We interview five industry developers who confirm the relevance of using production observations to design mocks and stubs.
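To illustrate the kind of test described above, the following is a minimal sketch of a Mockito-based JUnit test in the spirit of the approach, assuming a hypothetical target method `OrderService.computeTotal` and a hypothetical mockable call `PriceProvider.getPrice`; none of these names, values, or classes come from the evaluated applications.

```java
import static org.mockito.Mockito.*;
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical collaborator whose invocations would be treated as mockable method calls
interface PriceProvider {
    double getPrice(String sku);
}

// Hypothetical class containing the target method under test
class OrderService {
    private final PriceProvider priceProvider;

    OrderService(PriceProvider priceProvider) {
        this.priceProvider = priceProvider;
    }

    double computeTotal(String sku, int quantity) {
        return priceProvider.getPrice(sku) * quantity;
    }
}

// Sketch of a generated test: inputs, stubbed return values, and oracles
// stand in for data that would be captured from a production execution.
class OrderServiceGeneratedTest {

    @Test
    void computeTotal_mimicsProductionScenario() {
        // Stub the mockable call with the parameter and return value observed in production
        PriceProvider priceProvider = mock(PriceProvider.class);
        when(priceProvider.getPrice("SKU-42")).thenReturn(19.99);

        // Recreate the initial state of the target method's receiver object
        OrderService service = new OrderService(priceProvider);

        // Invoke the target method with the production-observed inputs
        double total = service.computeTotal("SKU-42", 3);

        // Output oracle: the value returned in production
        assertEquals(59.97, total, 0.001);

        // Mock-based oracle: the interaction with the environment observed in production
        verify(priceProvider).getPrice("SKU-42");
    }
}
```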