This paper addresses the problem of designing and computing a tractable index policy for dynamic job admission control and/or routing in a discrete-time Markovian model of parallel loss queues with one-period delayed state observation, which comes close to optimizing an infinite-horizon discounted or average performance objective with linear holding and rejection costs. Instead of devising ad hoc indices, we deploy a unifying design principle for priority index policies in dynamic resource allocation problems of multiarmed restless bandit type, based on decoupling the problem into subproblems and defining an appropriate marginal productivity index (MPI) for each subproblem. In the model of concern, each subproblem is an admission control problem for a single queue with one-period feedback delay, for which the structure of optimal policies has been characterized in previous work as being of bi-threshold type, yet without an algorithm to compute the optimal thresholds. We deploy in such subproblems theoretical and algorithmic results on restless bandit indexation, which yield a fast algorithm that computes the MPI for a subproblem with buffer size n in only O(n) arithmetic operations. The MPI values can be used both to obtain immediately the optimal thresholds for each subproblem and to design an index policy for the admission control and/or routing problem in the multi-queue system. The results extend readily to models with infinite buffer space.
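To illustrate how precomputed MPI tables could drive admission and routing decisions, here is a minimal sketch of one plausible form of such an index policy. The function name, the greedy highest-index rule, and the numerical MPI values are all illustrative assumptions, not taken from the paper; the paper itself derives the MPI values via the O(n) indexation algorithm.

```python
# Hypothetical sketch of an MPI-based admission/routing rule. The MPI
# tables below are made-up illustrative values, not computed by the
# paper's algorithm.

def mpi_route(mpi_tables, states, rejection_cost):
    """Route an arriving job to the queue whose (one-period-delayed)
    observed state has the highest MPI, or reject the job if no index
    exceeds the rejection cost.

    mpi_tables[k][s] is the MPI of queue k in observed state s;
    states[k] is the delayed observed state of queue k.
    Returns the chosen queue's index, or None to reject.
    """
    best_queue, best_index = None, rejection_cost
    for k, state in enumerate(states):
        index = mpi_tables[k][state]
        if index > best_index:
            best_queue, best_index = k, index
    return best_queue

# Example with two queues of buffer size 3 (made-up MPI tables):
tables = [[5.0, 3.2, 1.1, 0.0], [4.1, 3.9, 2.0, 0.5]]
print(mpi_route(tables, states=[1, 0], rejection_cost=2.5))  # prints 1
print(mpi_route(tables, states=[3, 3], rejection_cost=2.5))  # prints None
```

The greedy rule shown here (send the job to the currently highest-index queue, reject when all indices fall below the rejection cost) is the standard way priority index policies are operationalized in restless bandit settings.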