Cloud-based tiered applications are increasingly popular, whether used on phones or on desktops. End users of these applications range from novices to experts, depending on their experience with the application. With repeated use (practice) of an application, a user's think time gradually decreases, a phenomenon known as learning. Contrary to the popular assumption that users' mean think time is constant across all practice sessions, learning causes the mean think time to decrease over successive sessions. This decrease changes the system workload, thereby affecting the application's short-term performance. However, this impact of learning on performance has so far not been accounted for. In this work we propose a model that accounts for human learning behavior in analyzing the transient (short-term) performance of a 3-tier cloud-based application. Our approach is based on a closed queueing network model, which we solve using discrete-event simulation. In addition to the overall mean System Response Time (SRT), our model solution also produces the mean SRTs for the various types of requests (novice, intermediate, expert) submitted by users at different levels of expertise. We demonstrate that our model can be used to evaluate what-if scenarios to decide the number of VMs needed for each tier (a VM configuration) so that the response time SLA is met. The results show that failing to account for learning may lead to the selection of an inappropriate VM configuration. They further show that the per-type mean SRTs are better measures to consider in the VM allocation process than the overall mean SRT.
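To make the core idea concrete, the following minimal Python sketch illustrates (but is not the authors' implementation of) a discrete-event simulation of a closed queueing network: N users cycle through three single-server FCFS tiers, and each user's mean think time shrinks with practice according to an assumed power-law learning curve Z_k = Z_1 * k^(-alpha). All parameter values, the three-tier service times, and the specific learning-curve form are illustrative assumptions, not values from this work.

```python
import heapq
import random

# Hypothetical parameters (assumptions for illustration, not from the paper):
# power-law learning curve Z_k = Z1 * k**(-ALPHA) for a user's mean think
# time on their k-th request, and exponential service at each tier.
Z1, ALPHA = 10.0, 0.3              # initial mean think time, learning rate
SERVICE_MEANS = [0.2, 0.5, 0.3]    # web, app, DB tier mean service times
N_USERS, SIM_END = 50, 5000.0

random.seed(1)
events = []                        # (time, seq, kind, user, tier) min-heap
seq = 0

def schedule(t, kind, user, tier=0):
    global seq
    heapq.heappush(events, (t, seq, kind, user, tier))
    seq += 1

# Closed network: each user alternates think -> tier 0 -> 1 -> 2 -> think.
requests = [0] * N_USERS           # completed requests per user (practice)
submit_time = [0.0] * N_USERS
queues = [[] for _ in SERVICE_MEANS]
busy = [False] * len(SERVICE_MEANS)
response_times = []

def start_service(tier, t):
    # Begin serving the next waiting request if the tier's server is idle.
    if queues[tier] and not busy[tier]:
        user = queues[tier].pop(0)
        busy[tier] = True
        schedule(t + random.expovariate(1.0 / SERVICE_MEANS[tier]),
                 "done", user, tier)

for u in range(N_USERS):           # every user starts in a think state
    schedule(random.expovariate(1.0 / Z1), "arrive", u)

while events:
    t, _, kind, user, tier = heapq.heappop(events)
    if t > SIM_END:
        break
    if kind == "arrive":           # request enters the first tier
        submit_time[user] = t
        queues[0].append(user)
        start_service(0, t)
    else:                          # service completion at some tier
        busy[tier] = False
        start_service(tier, t)
        if tier + 1 < len(SERVICE_MEANS):
            queues[tier + 1].append(user)
            start_service(tier + 1, t)
        else:                      # request done: record SRT, think again
            response_times.append(t - submit_time[user])
            requests[user] += 1
            z = Z1 * (requests[user] + 1) ** (-ALPHA)  # learning curve
            schedule(t + random.expovariate(1.0 / z), "arrive", user)

print(f"mean SRT = {sum(response_times) / len(response_times):.3f} "
      f"over {len(response_times)} requests")
```

Rerunning the sketch with different tier counts or SERVICE_MEANS mimics the what-if VM-configuration comparisons described above; bucketing response_times by each user's requests count at submission would similarly yield per-type (novice, intermediate, expert) mean SRTs.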