Abstract: Based on our previous research on generalized modus ponens (GMP) with linguistic modifiers for If … Then rules, this paper proposes new generalized modus tollens (GMT) inference rules with linguistic modifiers in a linguistic many-valued logic framework, using hedge moving rules for inverse approximate reasoning.
“…In [26], the authors show that hedges of Mono-HA are "context-free", i.e., a hedge adjusts the meaning of a linguistic value independently of the prior hedges in the string of hedges.…”
Section: Example
“…In practice, humans only use linguistic values with modifiers of finite length for vague concepts, i.e., humans only use a finite string of hedges for truth values [26]. This makes it necessary to limit the length of hedge strings in the truth-value domain so that it does not exceed L, an arbitrary positive integer [26]. Based on Mono-HA, we construct a finite monotonous hedge algebra to form the linguistic truth-value domain.…”
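The construction above — a finite truth-value domain obtained by bounding the hedge-string length by L — can be sketched as follows. This is a minimal illustration, assuming an example hedge set and generators; the particular hedge names, the bound L = 2, and the function `finite_truth_domain` are illustrative choices, not taken from [26].

```python
from itertools import product

HEDGES = ["Very", "More", "Possibly", "Less"]   # example hedge set (assumed)
GENERATORS = ["True", "False"]                  # primary truth values
L = 2                                           # maximum hedge-string length

def finite_truth_domain(hedges, generators, max_len):
    """Enumerate all linguistic truth values h1 ... hk g with 0 <= k <= max_len."""
    domain = []
    for k in range(max_len + 1):
        for hedge_string in product(hedges, repeat=k):
            for g in generators:
                domain.append(" ".join(hedge_string + (g,)))
    return domain

domain = finite_truth_domain(HEDGES, GENERATORS, L)
# (4^0 + 4^1 + 4^2) * 2 = 42 values for L = 2, so the domain is finite
```

Bounding the string length at L is exactly what keeps the truth-value domain finite: without the bound, hedge strings could grow without limit and the domain would be infinite.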
“…The rules of generalized modus ponens with linguistic modifiers (GMPLM) were introduced in [21], [26]. Let 𝛼, 𝛽, 𝛿, 𝜎, 𝜃, 𝜕, 𝛼′, 𝛽′, 𝛿′, 𝜃′, and 𝜕′ be hedge strings, with 𝛼 = ℎ₁ℎ₂.…”
Section: Generalized Modus Ponens With Linguistic Modifiers
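The exact GMPLM inference rules are given in [21], [26]; as a hedged sketch of the general pattern only, the snippet below illustrates hedge moving: when the observed premise carries an extra hedge string 𝛼 in front of the rule's antecedent, that hedge string is moved onto the rule's consequent. The function `gmplm` and the list-of-strings encoding of linguistic values are assumptions for illustration.

```python
def gmplm(antecedent, consequent, premise):
    """Sketch of hedge moving: if premise = alpha + antecedent,
    conclude alpha + consequent. Linguistic values are modeled as
    lists of hedge names ending in a generator, e.g. ["Very", "High"].
    Hedges are treated as context-free, as in Mono-HA."""
    cut = len(premise) - len(antecedent)
    if cut < 0 or premise[cut:] != antecedent:
        return None  # premise does not match the rule's antecedent
    alpha = premise[:cut]          # the extra hedge string
    return alpha + consequent      # move alpha onto the consequent

# Rule: "If temperature is High then speed is Fast"; observation: "Very High"
conclusion = gmplm(["High"], ["Fast"], ["Very", "High"])
# conclusion == ["Very", "Fast"], i.e. "speed is Very Fast"
```

Because Mono-HA hedges are context-free, moving the hedge string 𝛼 as a whole is meaningful: each hedge modifies the value independently of the hedges before it.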
Multi-attribute decision-making (MADM) problems of various types arise in daily life, often under uncertain environments involving vague and imprecise information. Linguistic multi-attribute decision-making problems are therefore an important and extensively studied class; moreover, in real life it is easier for decision-makers to evaluate and choose among alternatives using linguistic terms. Based on the theoretical foundations of hedge algebra and linguistic many-valued logic, this study addresses MADM problems with a linguistic-valued qualitative aggregation and reasoning method. We construct a finite monotonous hedge algebra to model the linguistic information of MADM problems and use linguistic many-valued logic to deduce the decision outcome. Our method computes directly on linguistic terms without numerical approximation, taking advantage of linguistic information processing and demonstrating the benefit of hedge algebra.
“…For example, the issue of polarity (affirmative vs. negative terms) is not easy to tackle because linguistic negation is much more complicated than logical negation [8][9][10]. Negation, as a unique feature of human communication [11], has also been addressed from linguistic, logical, and psycholinguistic points of view [12][13][14]. From Aristotle's original square of opposition (where A and E are contraries, I and O are sub-contraries, and A and O as well as E and I are contradictories, see Table 1) to Greimas' semiotic square [15], there still remain several ways to consider linguistic negation.…”
In this paper we focus on linguistic negation, i.e., how to handle negative statements when making a decision. Negation is indeed an important issue in decision making, especially when data are provided by human beings and vagueness, ignorance, and uncertainty are high. This article surveys existing work on negation in semiotics and semantics, and attempts to give clues for modeling linguistic negation while building bridges between the 2-tuple fuzzy linguistic representation model and semantics.