A fundamental scheduling problem is to schedule a set of jobs on a set of machines so that as many jobs as possible are scheduled while respecting the resource constraints on each machine. A further complication in many computer systems is that scheduling decisions must be made on-line, that is, as soon as a job arrives it must either be scheduled or rejected. Many practical systems and algorithms schedule jobs in such a way that the loads on machines tend to be balanced. However, when job requests are highly variable, such scheduling systems do not necessarily perform well. If loads are balanced, then when large jobs arrive there may not be any "holes" large enough to hold them, resulting in increased queueing delays and overall job sojourn times in the system. In this paper, we address the job assignment problem in an on-line loss system framework, i.e., when a job arrives, it is scheduled on a particular machine according to a scheduling policy or, if no machine has enough available resources, the incoming job is lost. This type of scheduling is often called Non-Forced Idle Time Scheduling and includes many common policies, including Best Fit, First Fit and Worst Fit. In this paper, we use compound point processes to capture the stochastic variability in the request process, and we derive simple asymptotic estimates for the job loss rate in these scheduling systems. Furthermore, we derive an asymptotic lower bound on the job loss rate by comparing the system to a single large machine, and we derive an asymptotic average-case competitive ratio for the analyzed class of Non-Forced Idle Time Scheduling policies. Although our proofs are asymptotic, we perform experiments that show an excellent match between simulation results and the theoretical performance bounds, even for relatively small resource capacities and large values of the measured loss rates.
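The assignment policies named above can be sketched in a few lines. The following is a minimal illustration, not the paper's model: it assumes a single scalar resource per machine, the function and variable names are our own, and job departures are omitted for brevity (a real loss system would also release capacity when jobs complete).

```python
def assign(job, machines, policy):
    """Return the index of the machine chosen for `job`, or None if the job is lost.

    `machines` holds the remaining capacity of each machine; a machine is
    feasible if its remaining capacity is at least the job's size.
    """
    feasible = [i for i, cap in enumerate(machines) if cap >= job]
    if not feasible:
        return None  # no machine can hold the job: the incoming job is lost
    if policy == "first_fit":
        return feasible[0]                               # first feasible machine
    if policy == "best_fit":
        return min(feasible, key=lambda i: machines[i])  # tightest feasible fit
    if policy == "worst_fit":
        return max(feasible, key=lambda i: machines[i])  # most remaining capacity
    raise ValueError(f"unknown policy: {policy}")

def loss_fraction(jobs, capacities, policy):
    """Fraction of jobs lost when `jobs` arrive one by one (no departures)."""
    machines = list(capacities)
    lost = 0
    for job in jobs:
        i = assign(job, machines, policy)
        if i is None:
            lost += 1
        else:
            machines[i] -= job
    return lost / len(jobs)
```

For example, with remaining capacities `[10, 4, 6]` and a job of size 5, First Fit picks machine 0, Best Fit picks machine 2 (the tightest fit), and Worst Fit picks machine 0 (the most remaining capacity); a job of size 20 is lost under every policy.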