This work presents a hybrid approach to sign language synthesis that allows hand-tuning the phonetic description of the signs, with a focus on the temporal aspects of the sign. We thus retain the capacity to perform morphonological operations, as in notation-based approaches, while improving the quality of the synthetic signing, as in hand-tuned animation approaches.

Our approach simplifies the description of the input message by using a new high-level notation and by storing the sign phonetic descriptions in a relational database. The relational database allows more flexible sign phonetic descriptions; it also makes it possible to describe sign timing and the synchronization between sign phonemes. The new notation, named HLSML, is a gloss-based notation that focuses on message description. HLSML introduces several tags that modify the signs in the message, defining dialect and mood variations (both defined in the relational database) and message timing (transition durations and pauses). We also propose a new avatar design that simplifies the development of the synthesizer and avoids interfering with the independence of the sign language phonemes during the animation.

The results obtained show an increase in the sign recognition rate compared with other approaches. This improvement stems from the active role that sign language experts play in the description of signs, made possible by the flexibility of the sign storage approach. The approach will simplify the description of synthesizable signed messages, making the creation of multimedia signed content easier.
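
As a purely illustrative sketch (not the paper's actual schema, tag set, or API), the following Python snippet suggests how timed sign phoneme descriptions could be kept in a relational database and queried per gloss of an HLSML-style message; every table, column, and function name here is an assumption introduced for illustration.

```python
# Illustrative sketch only: the schema and names below are assumptions,
# not the system's real database design. It shows the general idea of
# storing per-channel, timed phonetic descriptions of signs relationally
# and retrieving them for each gloss of a message before animation.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sign (
    gloss    TEXT,
    dialect  TEXT,
    mood     TEXT,
    PRIMARY KEY (gloss, dialect, mood)
);
CREATE TABLE phoneme (
    gloss     TEXT,
    dialect   TEXT,
    mood      TEXT,
    channel   TEXT,     -- e.g. hand shape, location, orientation, movement
    start_ms  INTEGER,  -- timing of the phoneme within the sign
    end_ms    INTEGER,
    value     TEXT,     -- phonetic description for this channel
    FOREIGN KEY (gloss, dialect, mood) REFERENCES sign (gloss, dialect, mood)
);
""")

def phonemes_for(gloss, dialect="default", mood="neutral"):
    """Fetch the timed phoneme rows realising one gloss of the message."""
    cur = conn.execute(
        "SELECT channel, start_ms, end_ms, value FROM phoneme "
        "WHERE gloss = ? AND dialect = ? AND mood = ? ORDER BY start_ms",
        (gloss, dialect, mood),
    )
    return cur.fetchall()

# A gloss-based message, in the spirit of HLSML: per-sign dialect/mood tags
# and message-level timing (transitions, pauses) would map onto queries like
# this one when generating the avatar animation.
for gloss in ["HELLO", "HOW", "YOU"]:
    print(gloss, phonemes_for(gloss))
```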