Maintaining the desired quality-of-service levels in modern microservice and cloud applications poses many challenges. Numerous techniques and patterns, such as API Rate Limit, Load Balancing, and Request Bundle, have been proposed for API services and clients to improve quality properties related to performance and reliability. However, no study has measured the impact of these techniques, or of their combinations, in specific configurations, especially in large distributed workload settings. This paper experimentally studies the effects of combining the API Rate Limit, Load Balancing, and Request Bundle patterns on a realistic, third-party, microservice-based application deployed in a private cloud and on the Amazon Web Services (AWS) cloud, using 130 different configurations. We ran each configuration 500 times in the private cloud, totaling more than 4500 hours of runtime, and 200 times on AWS, totaling more than 3900 hours of runtime. From the collected data, we developed regression models that predict the performance and reliability impacts of combining these techniques and patterns. We found that the models provide acceptable prediction errors, below 30%, on both the private cloud and AWS, and that they work best in highly reliable environments such as AWS. In addition to the concrete analyses provided in our work, we propose a general and largely automated method that can be applied iteratively to evaluate similar techniques and patterns with respect to their quality properties.
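To make the first of the studied patterns concrete: API Rate Limit throttles how many requests a client may issue per unit of time. The sketch below is a minimal token-bucket limiter in Python, offered purely as an illustration of the pattern's mechanics; it is a hypothetical example and not the implementation evaluated in the study, whose rate-limit configurations are treated as black-box parameters.

```python
import time


class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative sketch only)."""

    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.clock = clock        # injectable clock for testability
        self.last = clock()

    def allow(self):
        """Return True if a request may proceed, consuming one token."""
        now = self.clock()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

With an injected fake clock, the behavior is deterministic: a bucket with `rate=1, capacity=2` admits a burst of two requests, rejects the third, and admits one more after a simulated second has passed.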