Itemset mining methods are techniques for discovering relevant patterns in transactional databases. The first methods, known as constraint-based pattern mining, rely on exhaustive pattern mining techniques that return all itemsets satisfying a given constraint. Their efficiency is hindered mainly by pattern explosion and by the difficulty users face in setting the threshold value. To address this, methods returning only the most interesting patterns, called top-k methods, have been proposed, but they tend to lack diversity, which is a challenging issue for interactive pattern mining. Indeed, interactive pattern mining requires fast methods that respond effectively to user demand. To overcome all these problems, output pattern sampling was proposed to quickly draw a set of interesting patterns while guaranteeing good diversity. Pattern sampling techniques are probabilistic methods that draw each pattern with a probability proportional to a given interestingness measure. In practice, a user may want to test several measures while interacting with the same database. In that case, the system should take only a short time to account for a new utility measure while still guaranteeing an exact draw; the time cost of a utility change can thus be a real problem for output pattern sampling techniques on large databases. Moreover, current sampling methods must store all the data in memory, and this storage is prohibitive for large datasets. To solve these problems, this paper addresses how to structure the data for output pattern sampling under length-based utility measures in large transactional databases. We revisit the trie structure initially proposed by D. 
Knuth and enrich it so that there is no longer any need (i) to access the data during sampling, because patterns are drawn directly from the enriched trie, or (ii) to reprocess the entire dataset when the utility measure changes. The value of a length-based utility measure is computed from the lengths of all the patterns present in the database. We therefore define a new trie structure, called the trie of occurrences, built by our first algorithm TPSpace (Trie-based Pattern Space), which materializes all the occurrences of the patterns in the database. Data compression comes from factorizing this information via the prefixes of the patterns. A particularly remarkable property is that, by definition, the trie of occurrences is the same for any length-based utility measure, provided the same minimum and maximum length constraints are kept. We then describe TPSampling (Trie-based Pattern Sampling), which performs the sampling by drawing patterns according to a length-based utility measure from the trie of occurrences. The paper is completed by a memory and time complexity analysis of the method and by experiments on benchmark datasets. TPSampling is competitive with the two-step approach for sampling under a given interestingness measure and, as expected, is particularly advantageous when several utility measures are used, thanks to its generic preprocessing: TPSampling is $10^5$ times faster than Two-Step at reprocessing after a utility change.
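To make the sampling task concrete, here is a minimal sketch of the classic two-step scheme used as a baseline, drawing one itemset with probability proportional to a length-based utility $u(\ell)$. The function and variable names (`sample_pattern`, `db`, `utility`) are illustrative assumptions, not the paper's TPSampling algorithm, which replaces this direct access to the data with draws from the trie of occurrences.

```python
import random
from math import comb

def sample_pattern(db, utility, max_len=None):
    """Draw one itemset with probability proportional to utility(length),
    via the classic two-step scheme (illustrative sketch, not TPSampling)."""
    # Step 1: weight each transaction by the total utility of its sub-itemsets:
    # sum over lengths l of utility(l) * C(|t|, l).
    weights = []
    for t in db:
        top = len(t) if max_len is None else min(len(t), max_len)
        weights.append(sum(utility(l) * comb(len(t), l) for l in range(1, top + 1)))
    t = random.choices(db, weights=weights)[0]
    # Step 2: within the chosen transaction, draw a length l proportional to
    # utility(l) * C(|t|, l), then a uniform subset of that length.
    top = len(t) if max_len is None else min(len(t), max_len)
    lengths = list(range(1, top + 1))
    l = random.choices(lengths,
                       weights=[utility(k) * comb(len(t), k) for k in lengths])[0]
    return frozenset(random.sample(sorted(t), l))

db = [frozenset("abc"), frozenset("ab"), frozenset("bcd")]
pattern = sample_pattern(db, utility=lambda l: l)  # utility proportional to length
```

Note that Step 1 must re-scan every transaction whenever `utility` changes; this is precisely the reprocessing cost that the utility-independent trie of occurrences avoids.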