Answering binary causal questions is a challenging task that requires rich background knowledge. Extracting useful causal features from a background knowledge base and applying them effectively in a model is a crucial step toward answering binary causal questions. The state-of-the-art approaches apply deep learning techniques to answer binary causal questions, and in these approaches candidate concepts are often embedded into vectors to model the causal relationships among them. However, a concept may play the role of a cause in one question and an effect in another, an aspect that has not been extensively explored in existing approaches. In this paper, we propose role-oriented causal concept embeddings to model causality between concepts. We also propose leveraging semantic concept similarity to extract causal information from concepts. Finally, we develop a deep learning framework to answer binary causal questions. Our approach yields accuracy that is comparable to or better than that of the benchmark approaches.

Impact Statement - Understanding causality is crucial for automatic question-answering systems, which help extract and distribute human knowledge. An automatic question-answering system with causal knowledge can be used to check whether there is a causal relationship between two concepts. Existing approaches often answer binary causal questions with accuracy close to 55% because they make limited use of causal and contextual features. The deep learning framework we propose in this paper uses role-oriented concept embeddings to address these issues. Our approach improves accuracy by up to 3.6% compared with the state-of-the-art benchmark approaches. The proposed approach can be applied in a variety of fields, including prescriptive analysis, event prediction, and any other area where entity relationships are essential. It could also be used to improve the retrieval of causality-related queries in web search engines.
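
To make the role-oriented idea concrete, the following is a minimal sketch of how a concept could receive distinct embeddings depending on whether it appears as a cause or an effect, assuming a PyTorch-style implementation with one embedding table per role; all names here (RoleOrientedScorer, num_concepts, dim) are hypothetical illustrations and are not drawn from the paper, which may combine role embeddings and similarity features differently.

    # Illustrative sketch only: separate embedding tables for the cause and
    # effect roles of a concept; names and architecture are assumptions.
    import torch
    import torch.nn as nn

    class RoleOrientedScorer(nn.Module):
        def __init__(self, num_concepts: int, dim: int = 128):
            super().__init__()
            # A concept gets one vector when it acts as a cause and a
            # different vector when it acts as an effect.
            self.cause_emb = nn.Embedding(num_concepts, dim)
            self.effect_emb = nn.Embedding(num_concepts, dim)
            self.classifier = nn.Sequential(
                nn.Linear(2 * dim, dim),
                nn.ReLU(),
                nn.Linear(dim, 1),
            )

        def forward(self, cause_ids, effect_ids):
            # Look up each concept in the table matching its role in the
            # candidate "cause -> effect" pair.
            c = self.cause_emb(cause_ids)
            e = self.effect_emb(effect_ids)
            # Probability that the pair expresses a causal relation.
            return torch.sigmoid(self.classifier(torch.cat([c, e], dim=-1)))

    # Example: score whether concept 3 plausibly causes concept 7.
    model = RoleOrientedScorer(num_concepts=10_000)
    score = model(torch.tensor([3]), torch.tensor([7]))

Under this sketch, swapping the two concepts looks up different vectors and can therefore yield a different score, which is the asymmetry the role-oriented embeddings are meant to capture.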