“…This requires adaptation data for all possible operating conditions. An alternative approach, acoustic factorisation, first proposed in 2001 [1], has recently been adopted by a number of sites, e.g., [2,3,4,5,6]. In parallel with the factorisation approach in speech recognition, there is also work along these lines in speech synthesis, e.g., [7,8], where the goal is to synthesise the effect of multiple factors, such as speaker, language and emotion.…”