User generated content is gradually being recognized not only for its remarkable potential to enrich professionally broadcast content, but also as a means of providing acceptable-quality audiovisual content for public events where professional coverage is absent. This potential is particularly interesting with respect to the audio modality, as a multitude of temporally overlapping User Generated audio Recordings (UGRs) can be combined to provide a multichannel recording of the captured acoustic event. In this paper, we formulate a simple audio mixing approach called Maximum Component Elimination (MCE) that processes multiple synchronized UGRs in a collaborative fashion. Operating in the Time-Frequency (TF) domain, MCE relies on binary weights to selectively prevent certain TF components of individual UGRs from entering the final mix. Results from a listening test indicate that the proposed mechanism is highly effective at suppressing foreground speech interference, removing inappropriate content from the audio mix, and concealing the identities of individuals whose voices are unintentionally captured by the recording devices. Furthermore, audio mixtures produced with MCE are shown to improve the user experience compared to the more conventional use case in which each UGR is consumed individually.
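To make the binary-weighting idea concrete, the sketch below shows one plausible reading of MCE, assuming (as the name suggests) that for every TF bin the single largest-magnitude component across the synchronized UGRs receives a weight of zero while all others are kept. The elimination rule, STFT parameters, and normalization here are illustrative assumptions, not the paper's exact specification.

```python
# A minimal sketch of the MCE idea, under the assumption that the
# maximum-magnitude component per TF bin is the one eliminated.
import numpy as np
from scipy.signal import stft, istft

def mce_mix(recordings, fs, nperseg=1024):
    """Mix time-aligned, equal-length UGRs with a hypothetical MCE rule.

    recordings: array of shape (n_channels, n_samples), synchronized UGRs.
    Returns a single-channel mixture signal.
    """
    # STFT of every recording: X has shape (n_channels, n_freqs, n_frames).
    _, _, X = stft(recordings, fs=fs, nperseg=nperseg)

    # Binary weights: zero for the maximum-magnitude component in each
    # TF bin, one for all other components (assumed elimination rule).
    mags = np.abs(X)
    max_idx = np.argmax(mags, axis=0)          # (n_freqs, n_frames)
    weights = np.ones_like(mags)
    f_idx, t_idx = np.indices(max_idx.shape)
    weights[max_idx, f_idx, t_idx] = 0.0

    # Weighted sum across channels, normalized by the number of surviving
    # components so the mix level stays comparable across bins.
    n_kept = np.maximum(weights.sum(axis=0), 1.0)
    Y = (weights * X).sum(axis=0) / n_kept

    _, y = istft(Y, fs=fs, nperseg=nperseg)
    return y
```

The intuition behind this choice of rule is that a source dominating a single device, such as foreground speech right next to one recorder, tends to produce the strongest component in the affected TF bins of that UGR alone, so eliminating the per-bin maximum suppresses it while content captured consistently by the remaining devices survives the mix.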