Breakthrough advances in microprocessor technology and efficient power management have altered the course of processor development with the emergence of multi-core processors, bringing a higher level of processing. The use of many-core technology has boosted the computing power provided by clusters of workstations or SMPs, delivering large computational power at an affordable cost using solely commodity components. Different implementations of message-passing libraries and system software (including operating systems) are installed in such cluster and multi-cluster computing systems. To guarantee the correct execution of a message-passing parallel application in a computing environment other than the one for which it was originally developed, the application code must be reviewed. In this paper, a hybrid communication interfacing strategy is proposed to execute a parallel application on a group of computing nodes belonging to different clusters or multi-clusters (computing systems that may run different operating systems and MPI implementations), interconnected through public or private IP addresses, and responding interchangeably to user execution requests. Experimental results, obtained by running benchmark parallel applications, demonstrate the feasibility and effectiveness of the proposed strategy.
Visual Question Answering (VQA) has attracted much attention recently in both
natural language processing and computer vision communities, as it offers
insight into the relationships between two relevant sources of information.
Tremendous advances have been made in VQA owing to the success of deep
learning. Building on these advances and improvements, the Affective Visual
Question Answering Network (AVQAN) enriches the understanding and analysis of VQA
models by making use of the emotional information contained in the images to
produce sensitive answers, while maintaining the same level of accuracy as
ordinary VQA baseline models. Integrating the emotional information
contained in images into VQA is a relatively new task. However, it is
challenging to separate question-guided attention from mood-guided attention
because AVQAN concatenates the question words with the mood labels. This
type of concatenation is also believed to harm the performance of the
model. To mitigate this effect, we propose the
Double-Layer Affective Visual Question Answering Network (DAVQAN), which
divides the task of generating emotional answers in VQA into two simpler
subtasks: the generation of non-emotional responses and the production of
mood labels, tackled by two independent layers. Comparative experiments
conducted on a preprocessed dataset show that the overall performance of
DAVQAN is 7.6% higher than that of AVQAN, demonstrating the effectiveness
of the proposed model. We
also introduce a more advanced word embedding method and a more
fine-grained image feature extractor into AVQAN and DAVQAN to further
improve their performance; both obtain better results than their original
models, which shows that, just like general VQA, VQA integrated with
affective computing can improve the performance of the whole model by
improving these two modules.
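The two-subtask decomposition described above can be illustrated with a minimal sketch. This is not the authors' actual architecture; the function names (`nonemotional_answer`, `mood_label`, `davqan_answer`) and the toy rule-based "layers" are hypothetical stand-ins for the two learned, independent layers, showing only how the non-emotional answer and the mood label are produced separately and then composed into an affective answer.

```python
def nonemotional_answer(image_feats, question):
    """Hypothetical first layer: an ordinary (non-emotional) VQA answer.

    A toy lookup stands in for a learned answering model.
    """
    return "dog" if "animal" in question else "unknown"


def mood_label(image_feats):
    """Hypothetical second layer: a mood label from the image alone,
    independent of the question-answering layer."""
    return "happy" if image_feats.get("smile") else "neutral"


def davqan_answer(image_feats, question):
    """Compose the two independent subtask outputs into one affective answer."""
    return f"{nonemotional_answer(image_feats, question)} ({mood_label(image_feats)})"


# Example: each subtask runs independently, then the outputs are merged.
print(davqan_answer({"smile": True}, "what animal is this?"))  # → dog (happy)
```

Because the two layers never see each other's inputs, question-guided attention and mood-guided attention cannot interfere, which is the intuition behind separating the subtasks.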