We report lessons from iteratively developing a music recognition system that enables a wide range of musicians to embed musical codes into their typical performance practice. The musician composes fragments of music that can be played back with varying levels of embellishment, disguise and looseness to trigger digital interactions. We collaborated with twenty-three musicians, ranging from professionals to amateurs and working with a variety of instruments. We chart the rapid evolution of the system to meet their needs as they strove to integrate music recognition technology into their performance practice, introducing multiple features that enabled them to trade off reliability against musical expression. Collectively, these features support the idea of deliberately introducing 'looseness' into interactive systems by addressing the three key challenges of control, feedback and attunement, and they highlight the potential role of written notations in other recognition-based systems.