The limited availability of ear datasets motivates the adoption of domain-adaptive deep learning, or transfer learning, in the development of ear biometric recognition. Ear recognition is a biometric modality that is gaining popularity in various areas of research due to the advantages of the ear for human identification. In this paper, nine handpicked CNN architectures (AlexNet, GoogLeNet, Inception-v3, Inception-ResNet-v2, ResNet-18, ResNet-50, SqueezeNet, ShuffleNet, and MobileNet-v2) are explored and compared for use in unconstrained ear biometric recognition. A set of 250 unconstrained ear images is acquired from the web through web crawlers and preprocessed with basic image processing methods, including contrast-limited adaptive histogram equalization (CLAHE), to improve ear image quality. Each CNN architecture is analyzed structurally and fine-tuned to satisfy the requirements of ear recognition. The earlier layers of each architecture are retained as feature extractors, while the last two to three layers are replaced with layers of the same kind so that the resulting ear recognition models classify 10 ear classes instead of 1,000. Eighty percent of the acquired unconstrained ear images are used for training, and the remaining 20 percent are reserved for testing and validation. The architectures are compared in terms of training time, training and validation outputs such as learned features and losses, and test results measured by accuracy confidence above 95%. Among the architectures evaluated, ResNet, AlexNet, and GoogLeNet achieved an accuracy confidence of 97-100% and are best suited for unconstrained ear biometric recognition, while ShuffleNet, despite achieving only approximately 90%, shows promising results for a mobile version of unconstrained ear biometric recognition.
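
The transfer-learning setup described above (CLAHE preprocessing, earlier layers kept as feature extractors, and the final classification layers replaced to output 10 ear classes instead of 1,000) can be sketched as follows. This is a minimal illustration assuming a Python toolchain with OpenCV and PyTorch/torchvision, which is not necessarily the toolchain used in this work; the clip limit, tile size, choice of ResNet-18, and optimizer settings are assumed values for demonstration only.

```python
import cv2
import torch
import torch.nn as nn
from torchvision import models


def apply_clahe(bgr_image, clip_limit=2.0, tile_grid=(8, 8)):
    """Improve ear image contrast with CLAHE on the L channel of LAB space.

    clip_limit and tile_grid are assumed example parameters, not values
    reported in the paper.
    """
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l_chan, a_chan, b_chan = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    l_eq = clahe.apply(l_chan)
    return cv2.cvtColor(cv2.merge((l_eq, a_chan, b_chan)), cv2.COLOR_LAB2BGR)


def build_ear_model(num_classes=10):
    """Fine-tune a pretrained CNN for ear recognition.

    Earlier layers are frozen so they act as fixed feature extractors;
    only the replaced final layer is trained for the 10 ear classes.
    """
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    for param in model.parameters():
        param.requires_grad = False
    # Replace the 1000-way ImageNet classifier with a 10-way ear classifier.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model


model = build_ear_model(num_classes=10)
criterion = nn.CrossEntropyLoss()
# Only the new classification layer's parameters are updated during fine-tuning.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
```

The same pattern applies to the other architectures compared in the paper; only the name of the final classification layer to be replaced differs (for example, the last fully connected or convolutional classification layer in SqueezeNet, ShuffleNet, or MobileNet-v2).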