Current main memory database system architectures are still challenged by high-contention workloads, and this challenge will continue to grow as the number of cores per processor increases [35]. These systems schedule transactions randomly across cores to maximize concurrency and to produce a uniform load across cores; scheduling never considers potential conflicts. Performance could be improved if scheduling balanced concurrency, which maximizes throughput, against serializing conflicting transactions, which avoids aborts. In this paper, we present the design of several intelligent transaction scheduling algorithms that consider both potential transaction conflicts and concurrency. To incorporate reasoning about transaction conflicts, we develop a supervised machine learning model that estimates the probability of conflict. This model is incorporated into several scheduling algorithms. In addition, we integrate an unsupervised machine learning algorithm into an intelligent scheduling algorithm. We then empirically measure the performance impact of the different scheduling algorithms on OLTP and social networking workloads. Our results show that, with appropriate settings, intelligent scheduling can increase throughput by 54% and reduce the abort rate by 80% on a 20-core machine, relative to random scheduling. In summary, the paper provides preliminary evidence that intelligent scheduling significantly improves DBMS performance.

Transaction aborts are one of the main sources of performance loss in main memory OLTP systems [35]. Current architectures for OLTP DBMSs use random scheduling to assign transactions to threads. Random scheduling achieves a uniform load across CPU cores and keeps all cores occupied, but for workloads with a high abort rate, a large portion of the work done by the CPU is wasted. In contrast, the abort rate drops to zero if all transactions are scheduled sequentially into one thread.
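The tension between random and fully sequential scheduling can be made concrete with a toy model. The sketch below is illustrative only: each transaction is assumed to update a single key, and a "conflict" is any pair of transactions that touch the same key from different workers; the function and variable names are our own, not from any system described here.

```python
import random
from collections import defaultdict

def cross_worker_conflicts(assignment, txns):
    """Count pairs of transactions that touch the same key but are
    assigned to different workers -- a toy proxy for the aborts that
    random scheduling risks. `assignment` maps txn id -> worker id;
    `txns` maps txn id -> the single key that transaction updates."""
    by_key = defaultdict(list)
    for txn_id, worker in assignment.items():
        by_key[txns[txn_id]].append(worker)
    conflicts = 0
    for workers in by_key.values():
        for i in range(len(workers)):
            for j in range(i + 1, len(workers)):
                if workers[i] != workers[j]:
                    conflicts += 1
    return conflicts

# Toy workload: five transactions, each updating one key.
txns = {0: "a", 1: "a", 2: "b", 3: "b", 4: "a"}

random.seed(1)
random_assign = {t: random.randrange(4) for t in txns}  # random scheduling
sequential_assign = {t: 0 for t in txns}                # everything on worker 0

# Sequential scheduling never pays for cross-worker conflicts,
# but it also uses only one of the four workers.
assert cross_worker_conflicts(sequential_assign, txns) == 0
```

Random assignment spreads the five transactions over four workers but can place the three updates of key "a" on different workers, wasting their work on aborts; sequential assignment eliminates that risk at the cost of all parallelism.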
No work is wasted through aborts, but throughput drops to that of a single hardware thread. Research has shown that statistical scheduling of transactions using a history can achieve a low abort rate and high throughput [37] for partitionable workloads. We propose a more systematic machine learning approach to scheduling transactions that achieves a low abort rate and high throughput for both partitionable and non-partitionable workloads.

2 Y. Sheng, et al.

The fundamental intuitions of this paper are that (i) the probability that a transaction will abort can, with high accuracy, be modeled through machine learning, and (ii) given that an abort is predicted, scheduling can avoid the conflict without loss of response time or throughput. Several research works [1,20,25,27] perform exact analyses of aborts of transaction statements; these analyses are complex and not easily generalizable across different workloads or DBMS architectures. Our approach instead uses machine learning algorithms to model the probability of aborts. Our approach is to (i) build a machine learning model that helps to group transactions that are likely to abort with each other ...
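As a minimal sketch of what such a supervised abort model could look like, the snippet below fits a logistic-regression predictor over a toy execution history. The two features (key-set overlap and normalized transaction length) and the training data are our own illustrative assumptions; the paper's actual feature set and model are not specified in this excerpt.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_abort_model(samples, epochs=2000, lr=0.5):
    """Fit a two-feature logistic regression by stochastic gradient
    descent. `samples` is a list of ((x1, x2), aborted) pairs drawn
    from an execution history, where
      x1 = fraction of keys the two transactions share (hypothetical),
      x2 = normalized length of the longer transaction (hypothetical)."""
    w = [0.0, 0.0, 0.0]  # bias, weight on x1, weight on x2
    for _ in range(epochs):
        for (x1, x2), y in samples:
            p = sigmoid(w[0] + w[1] * x1 + w[2] * x2)
            g = p - y  # gradient of the log loss w.r.t. the logit
            w[0] -= lr * g
            w[1] -= lr * g * x1
            w[2] -= lr * g * x2
    return w

def abort_probability(w, x1, x2):
    """Estimated probability that scheduling this pair together aborts."""
    return sigmoid(w[0] + w[1] * x1 + w[2] * x2)

# Toy history: pairs with high key overlap tended to abort (label 1).
history = [((0.9, 0.5), 1), ((0.8, 0.4), 1), ((0.7, 0.6), 1),
           ((0.1, 0.5), 0), ((0.0, 0.3), 0), ((0.2, 0.2), 0)]
w = train_abort_model(history)
```

A scheduler could then route an incoming transaction to the worker queue against which its predicted abort probability is lowest, serializing only the pairs the model flags as likely conflicts.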