Sleep position identification and monitoring is important in the context of certain healthcare conditions, such as obstructive sleep apnoea and epilepsy. Many studies have thoroughly investigated automatic sleep position detection using various sensing channels placed at optimal body locations. However, this has not been the case for detection using physiological data acquired from a single sensing channel on the neck. In certain healthcare contexts the neck can nevertheless be an attractive location, despite being suboptimal for position monitoring, because it enables better extraction of more critical biomarkers from other sensing modalities, making multimodal monitoring possible with just one wearable. This work investigates methods for automatic sleep position detection using a single wearable channel of accelerometry data sensed on the neck. Three models are explored, based on decision trees (DT), the extra-trees classifier (ET), and long short-term memory neural networks (LSTM-NN). The paper also investigates, for the first time, the optimal design choices when wearables are power- and memory-constrained but performance must not be compromised in the type of healthcare applications where single-location multimodal sensing is important. This includes examining how changing the sampling rate and window size affects the performance of the different models. It is demonstrated that a sampling rate as low as 5 Hz and a window size as short as 1 second still lead to high classification performance (mean F1-scores of around 0.945, 0.975 and 0.965 for the DT, ET and LSTM-NN models, respectively, and at least 98% average accuracy for all three models), and that the DT model occupies the least memory (1.765 KB) and takes the least mean prediction time across all window sizes (around 0.8 ms).
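To make the windowing setup concrete, the sketch below segments a three-axis accelerometry stream into the paper's shortest configuration (5 Hz sampling, 1-second non-overlapping windows) and applies a toy rule-based position label derived from the mean gravity direction. This is an illustrative baseline only: the axis conventions, the rule-based labels, and the function names are assumptions, not the DT/ET/LSTM-NN models evaluated in the paper.

```python
import numpy as np

FS = 5      # sampling rate in Hz (the paper's lowest rate)
WIN_S = 1   # window length in seconds (the paper's shortest window)

def window_signal(acc, fs=FS, win_s=WIN_S):
    """Split an (N, 3) accelerometry array into non-overlapping windows
    of shape (num_windows, fs * win_s, 3); trailing samples are dropped."""
    step = fs * win_s
    n_win = len(acc) // step
    return acc[:n_win * step].reshape(n_win, step, 3)

def classify_window(win):
    """Toy rule: label a window by its dominant mean gravity component.
    The mapping of axes to body orientation is a hypothetical convention."""
    gx, gy, gz = win.mean(axis=0)
    if abs(gz) >= max(abs(gx), abs(gy)):
        return "supine" if gz > 0 else "prone"
    return "left" if gy > 0 else "right"

# Example: 2 seconds of simulated lying-on-back data (gravity along +z).
acc = np.tile([0.02, 0.10, 0.98], (2 * FS, 1))
windows = window_signal(acc)        # shape (2, 5, 3)
labels = [classify_window(w) for w in windows]  # → ["supine", "supine"]
```

A real classifier would replace `classify_window` with a trained model operating on features (or raw samples, for the LSTM-NN) extracted from each window; the windowing step itself is what the sampling-rate and window-size trade-off in the paper varies.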