Through technologies such as RSS (Really Simple Syndication), Web Services, and AJAX (Asynchronous JavaScript and XML), the Internet has facilitated the emergence of applications composed from a variety of services and data sources. With tools such as Yahoo Pipes, these "mash-ups" can be assembled in a dynamic, just-in-time manner from components provided by multiple institutions (e.g. Google, Amazon, your neighbour). However, when using these applications, it is not apparent where data comes from or how it is processed. Thus, to inspire trust and confidence in mash-ups, it is critical to be able to analyse their processes after the fact. Such after-the-fact analyses, in particular determining the provenance of a result (i.e. the process that led to it), are enabled by process documentation: documentation of an application's past process, created by the components of that application at execution time. In this paper, we define a generic conceptual data model that supports the autonomous creation of attributable, factual process documentation for dynamic, multi-institutional applications. The data model is instantiated in two Internet formats, OWL and XML, and is evaluated with respect to questions about the provenance of results generated by a complex bioinformatics mash-up.