The exponential growth of multimodal content in today's competitive business environment produces a huge volume of unstructured data. Unstructured big data has no particular format or structure and can take any form, such as text, audio, images, and video. In this paper, we address the challenges that unstructured big data with different modalities poses for emotion and sentiment modeling. We first provide an up-to-date review of emotion and sentiment modeling, including state-of-the-art techniques. We then propose a new architecture for multimodal emotion and sentiment modeling for big data. The proposed architecture consists of five essential modules: a data collection module, a multimodal data aggregation module, a multimodal data feature extraction module, a fusion and decision module, and an application module. Novel feature extraction techniques, called divide-and-conquer principal component analysis (Div-ConPCA) and divide-and-conquer linear discriminant analysis (Div-ConLDA), are proposed for the multimodal data feature extraction module of the architecture. Experiments on a multicore machine architecture are performed to validate the performance of the proposed techniques.

INDEX TERMS Big data, affective analytics, emotion recognition, sentiment modeling, unstructured data.
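
To make the divide-and-conquer idea behind the feature extraction module concrete, the following minimal Python/NumPy sketch shows a generic row-partitioned, divide-and-conquer PCA: each data partition contributes a partial scatter matrix, the partial results are combined, and a single eigendecomposition yields the principal components. This is an illustrative assumption about the general technique only, not the authors' Div-ConPCA algorithm; the function name divide_and_conquer_pca, the chunking scheme, and all parameters are hypothetical.

```python
# Hypothetical sketch of a generic divide-and-conquer PCA (not the paper's Div-ConPCA).
import numpy as np

def divide_and_conquer_pca(X, n_components, n_chunks=4):
    """Split the rows of X into chunks ("divide"), accumulate each chunk's
    partial scatter matrix, then eigendecompose the combined covariance
    ("conquer") to obtain the top principal components."""
    n_samples, n_features = X.shape
    mean = X.mean(axis=0)                       # global mean, computed in one pass
    cov = np.zeros((n_features, n_features))
    for chunk in np.array_split(X, n_chunks):   # per-partition work (parallelizable)
        centered = chunk - mean
        cov += centered.T @ centered            # partial scatter matrix for this chunk
    cov /= (n_samples - 1)                      # combine partial results into covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, order], eigvals[order]

# Usage: project 1000 samples of 64-dimensional features onto the top 8 components.
X = np.random.default_rng(0).normal(size=(1000, 64))
components, variances = divide_and_conquer_pca(X, n_components=8)
X_reduced = (X - X.mean(axis=0)) @ components
```

In this sketch the per-chunk scatter computations are independent, so on a multicore machine each chunk could be handled by a separate worker and only the small feature-by-feature matrices need to be combined; the paper's Div-ConPCA and Div-ConLDA presumably exploit a similar decomposition, but their exact formulation is given in the paper itself.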