We consider a network of periodically running railway lines. Investments are possible to increase train speeds and to improve the synchronisation of trains. The model also includes random train delays and the propagation of delays across the network. We derive a cost-benefit analysis of investments, where the benefit is measured as the reduction in waiting time for passengers changing lines. We also estimate the actual mean waiting time by simulating the train delays, which allows us to analyse the impact that increasing synchronisation of the timetable has on its stability. The simulation is based on an analytical model obtained from queueing theory. For the optimisation we use sophisticated adaptive evolutionary algorithms, which send off avant-garde solutions from time to time to speed up the search. As scheduled and estimated waiting times are highly correlated for badly synchronised timetables, we are even able to include the time-consuming simulation in our optimisation runs.
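As a concrete illustration of the scheduled-waiting-time measure used in the abstract above, the following sketch computes the transfer wait in a periodic timetable. All names and the period value are illustrative assumptions, not taken from the paper.

```python
def scheduled_wait(arr: float, dep: float, change: float, period: float = 60.0) -> float:
    """Waiting time at a transfer in a periodic timetable (all times in minutes).

    A passenger arriving at time `arr` needs `change` minutes to reach a
    connecting line whose trains depart at time `dep` modulo `period`.
    The wait is the time until the next feasible departure.
    """
    earliest = arr + change          # earliest feasible boarding time
    return (dep - earliest) % period # time until the next periodic departure

# Trains every 60 min; arrival at minute 10, 5 min needed to change lines.
print(scheduled_wait(10, 12, 5))  # -> 57.0 (connection just missed, wait almost a full period)
print(scheduled_wait(10, 20, 5))  # -> 5.0  (well-synchronised connection)
```

The sketch shows why synchronisation matters: a connection missed by a few minutes costs nearly a full period of waiting.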
We present a simple algorithm that allows sampling from a stream of data items without knowing the number of items in advance and without having to store all items in main memory. The sampling distribution may be general, that is, the probability of selecting a data item i may depend on the individual item. The main advantage of the algorithm is that it has to pass through the data items only once to produce a sample of arbitrary size n. We give different variants of the algorithm for sampling with and without replacement and analyze their complexity. We generalize earlier results of Knuth on reservoir sampling with a uniform sampling distribution. The general distribution considered here allows us to sample an item with a probability equal to the relative weight (or fitness) of the data item within the whole set of items. Applications include heuristic optimization procedures such as genetic algorithms, where solutions are sampled from a population with probability proportional to their fitness.
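One well-known single-pass scheme in this spirit is the A-Res weighted reservoir algorithm of Efraimidis and Spirakis; the sketch below is that scheme, not necessarily the paper's own variant, and the names are illustrative.

```python
import heapq
import random

def weighted_reservoir(stream, n, weight, rng=random):
    """One-pass weighted sampling without replacement (A-Res scheme).

    Each item with weight w > 0 receives the random key u**(1/w), u ~ U(0,1);
    the n items with the largest keys form the sample. Heavier items tend to
    get larger keys and are therefore selected with higher probability.
    """
    heap = []  # min-heap of (key, position, item); position breaks key ties
    for pos, item in enumerate(stream):
        w = weight(item)
        if w <= 0:
            continue  # items with non-positive weight are never selected
        key = rng.random() ** (1.0 / w)
        if len(heap) < n:
            heapq.heappush(heap, (key, pos, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, pos, item))
    return [item for _, _, item in heap]

# Fitness-proportional selection from a stream, one pass, O(n) memory.
sample = weighted_reservoir(range(1000), 5, weight=lambda x: x + 1)
```

The min-heap keeps only the n current winners, so memory stays O(n) regardless of the (unknown) stream length, matching the single-pass requirement stated in the abstract.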
The discrete cross-entropy optimization algorithm iteratively samples solutions according to a probability density on the solution space. The density is adapted to the good solutions observed in the present sample before producing the next sample. The adaptation is controlled by a so-called smoothing parameter. We generalize this model by introducing a flexible concept of feasibility and desirability into the sampling process. In this way, our model covers several other optimization procedures, in particular the ant-based algorithms. The focus of this paper is on some theoretical properties of these algorithms. We examine the first hitting time τ of an optimal solution and give conditions on the smoothing parameter for τ to be finite with probability one. For a simple test case we show that the runtime can be polynomially bounded in the problem size with a probability converging to 1. We then investigate the convergence of the underlying density and of the sampling process. We show, in particular, that a constant smoothing parameter, as is often used, makes the sampling process converge in finite time, freezing the optimization at a single solution that need not be optimal. Moreover, we define a smoothing sequence that makes the density converge without freezing the sampling process and that still guarantees the reachability of optimal solutions in finite time. This settles an open question from the literature.
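To make the sampling-and-smoothing loop concrete, here is a minimal sketch of a discrete cross-entropy method with a constant smoothing parameter on the OneMax test function (maximize the number of ones in a bit string). All names and parameter values are illustrative assumptions; this is the generic algorithm the abstract analyses, not the paper's generalized model.

```python
import random

def cross_entropy_onemax(d=20, sample_size=50, elite=10, alpha=0.7,
                         iters=60, seed=1):
    """Cross-entropy method on OneMax.

    p[j] is the current probability of sampling bit j as 1; alpha is the
    smoothing parameter controlling how strongly p is pulled toward the
    empirical frequencies of the elite (best-ranked) solutions.
    """
    rng = random.Random(seed)
    p = [0.5] * d          # initial density: uniform over bit strings
    best = 0
    for _ in range(iters):
        sample = [[1 if rng.random() < p[j] else 0 for j in range(d)]
                  for _ in range(sample_size)]
        sample.sort(key=sum, reverse=True)  # rank by fitness (number of ones)
        best = max(best, sum(sample[0]))
        elite_set = sample[:elite]
        freq = [sum(x[j] for x in elite_set) / elite for j in range(d)]
        # Smoothing update: p_{t+1} = (1 - alpha) * p_t + alpha * elite frequency.
        p = [(1 - alpha) * p[j] + alpha * freq[j] for j in range(d)]
    return best, p

best, p = cross_entropy_onemax()
```

With a constant alpha, the density p typically drifts toward a 0/1 vector, at which point the sampling process produces a single solution forever; this is the freezing phenomenon the abstract refers to, which its proposed smoothing sequence is designed to avoid.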