In next-generation wireless systems, ultra-reliable low-latency communications (URLLC) have to be realized in support of the Internet of Things (IoT). However, this is quite a challenge in the face of channel estimation errors (CEEs) and hardware imperfections (HWIs). Hence, we consider a variety of realistic HWIs as well as CEEs and evaluate the average block error rate (BLER) of a short-packet based cooperative amplify-and-forward (AF) relaying network. Our simulation and analytical results reveal that, as expected, both the CEEs and HWIs lead to a substantial average BLER degradation. In particular, the CEEs give rise to a BLER floor, which is further aggravated as the HWIs increase. A high level of HWIs drives the BLER towards one, regardless of the transmit power. Furthermore, it is shown that the CEEs and HWIs gravely degrade the energy efficiency compared to the idealized scenario of perfect channel estimation and ideal hardware. This observation shows that both the CEEs and HWIs constitute critical issues in the design of energy-efficient communication systems. However, by solving the optimization problems formulated, we conceived several schemes for mitigating the degradations imposed by the CEEs and HWIs. For example, in our adaptive-duration training scheme, the total training duration of the two hops is shared in line with the current state of the two channels. Our simulation results show that this scheme significantly reduces the BLER and mitigates the deleterious effects of the CEEs.
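The full analysis appears in the body of the paper; the following minimal Monte-Carlo sketch merely illustrates the reported BLER-floor behaviour under simplified assumptions that are ours, not the paper's: Rayleigh fading on both hops, a fixed CEE variance sigma_e2 treated as extra Gaussian noise, an aggregate HWI level kappa whose distortion power scales with the received signal power, the harmonic-mean form of the end-to-end AF SNDR, and the finite-blocklength normal approximation for the short-packet BLER. All function names and parameter values are illustrative.

```python
# Illustrative sketch (NOT the paper's exact model): average BLER of a two-hop
# AF relay link with channel estimation errors (CEEs) and hardware
# imperfections (HWIs), evaluated via the finite-blocklength normal
# approximation and Monte-Carlo averaging over Rayleigh fading.
import numpy as np
from scipy.stats import norm

def bler_normal_approx(snr, n=128, k=256):
    """Normal approximation: eps ~ Q((n*C(snr) - k) / sqrt(n*V(snr)))."""
    C = np.log2(1.0 + snr)                              # capacity, bits/channel use
    V = (1.0 - (1.0 + snr) ** -2) * np.log2(np.e) ** 2  # channel dispersion
    return norm.sf((n * C - k) / np.sqrt(n * V))        # Q-function = survival fn.

def avg_bler_af(snr_db, kappa=0.1, sigma_e2=0.01, trials=200_000, seed=0):
    rng = rng = np.random.default_rng(seed)
    rho = 10.0 ** (snr_db / 10.0)                       # transmit SNR (linear)
    g1 = rng.exponential(1.0, trials)                   # Rayleigh gains, hop 1
    g2 = rng.exponential(1.0, trials)                   # Rayleigh gains, hop 2
    # Per-hop effective SNDR: CEE (rho*sigma_e2) and HWI distortion
    # (kappa^2 * rho * g) are lumped in as additional Gaussian noise.
    s1 = rho * g1 / (kappa**2 * rho * g1 + rho * sigma_e2 + 1.0)
    s2 = rho * g2 / (kappa**2 * rho * g2 + rho * sigma_e2 + 1.0)
    # Harmonic-mean form of the end-to-end AF SNDR (a common approximation).
    s_e2e = s1 * s2 / (s1 + s2 + 1.0)
    return bler_normal_approx(s_e2e).mean()

if __name__ == "__main__":
    # As transmit power grows, the average BLER saturates at a floor set by
    # the CEE and HWI levels, rather than decaying to zero.
    for snr_db in (0, 10, 20, 30, 40):
        print(f"SNR = {snr_db:2d} dB -> avg BLER ~ {avg_bler_af(snr_db):.3e}")
```

Setting kappa and sigma_e2 to zero in this sketch recovers the idealized scenario, in which the BLER keeps falling with the transmit power; increasing either parameter raises the floor, mirroring the trends stated above.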