Analogy, explanation, and proof.

Hummel JE, Licato J, Bringsjord S - Front Hum Neurosci (2014)

Bottom Line: This seemingly small difference poses a challenge to the task of marshaling our understanding of analogical reasoning to understanding explanation. We describe a model of explanation, derived from a model of analogy, adapted to permit systematic violations of this one-to-one mapping constraint. Simulation results demonstrate that the resulting model can generate explanations for novel explananda and that, like the explanations generated by human reasoners, these explanations vary in their coherence.


Affiliation: Department of Psychology, University of Illinois at Urbana-Champaign, Urbana, IL, USA.

ABSTRACT
People are habitual explanation generators. At its most mundane, our propensity to explain allows us to infer that we should not drink milk that smells sour; at the other extreme, it allows us to establish facts (e.g., theorems in mathematical logic) whose truth was not even known prior to the existence of the explanation (proof). What do the cognitive operations underlying the inference that the milk is sour have in common with the proof that, say, the square root of two is irrational? Our ability to generate explanations bears striking similarities to our ability to make analogies. Both reflect a capacity to generate inferences and generalizations that go beyond the featural similarities between a novel problem and familiar problems in terms of which the novel problem may be understood. However, a notable difference between analogy-making and explanation-generation is that the former is a process in which a single source situation is used to reason about a single target, whereas the latter often requires the reasoner to integrate multiple sources of knowledge. This seemingly small difference poses a challenge to the task of marshaling our understanding of analogical reasoning to understanding explanation. We describe a model of explanation, derived from a model of analogy, adapted to permit systematic violations of this one-to-one mapping constraint. Simulation results demonstrate that the resulting model can generate explanations for novel explananda and that, like the explanations generated by human reasoners, these explanations vary in their coherence.




Figure 2: LISA representation of the cause-effect relation: Entity 1 believes proposition p [believe (e1, p)], entity 2 believes proposition p [believe (e2, p)], and these facts jointly cause e1 to agree with e2 [agree (e1, e2)]. To represent that believe (e1, p) and believe (e2, p) jointly cause something, the units representing these propositions (left-most ovals) share bi-directional excitatory connections to a unit (left-most diamond) representing a cause group. To represent that agree (e1, e2) is the effect of something, the unit representing that proposition (right-most oval) shares a bi-directional excitatory connection to a unit (right-most diamond) representing an effect group. To represent that the cause on the left is the cause of the effect on the right, the corresponding cause and effect groups share bi-directional excitatory connections with a unit (upper-most diamond) representing a cause-effect (CE) group. Connections between the group units and their respective cause, effect, and CE semantic units are not shown.

We propose a third alternative: to represent groups of related propositions by connecting them to group units (Hummel et al., 2008). For example, the fact that P1 and P2 in the agreement schema (the believe relations) jointly cause something can be represented by connecting P1 and P2 to a single group unit, and tagging that group as a cause by connecting it to semantic units representing cause (see Figure 2). Likewise, the fact that P3 is an effect can be represented by connecting it to a group unit, and connecting that unit to semantic units representing effect. Finally, the fact that the P1/P2 group is the cause of P3 can be represented by connecting the cause and effect groups to a higher-level cause-effect (CE) group unit. This latter unit represents the strength of the causal relation by connecting to semantic units coding for that strength.
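The connection scheme above can be sketched as a small network of units joined by bidirectional excitatory links. This is a minimal illustration, not the authors' LISA implementation; the class, function, and unit names are hypothetical, and real LISA units carry weighted connections and activation dynamics omitted here.

```python
# Illustrative sketch of the group-unit scheme (assumed names, not LISA code).
class Unit:
    """A node in the network; links are bidirectional and excitatory."""
    def __init__(self, name):
        self.name = name
        self.connections = set()

def connect(a, b):
    # Bidirectional excitatory connection between two units.
    a.connections.add(b)
    b.connections.add(a)

# Proposition units from the agreement schema.
p1 = Unit("believe(e1, p)")
p2 = Unit("believe(e2, p)")
p3 = Unit("agree(e1, e2)")

# Group units tag collections of propositions with causal roles.
cause_group = Unit("cause-group")
effect_group = Unit("effect-group")
ce_group = Unit("cause-effect-group")

# Semantic units label the groups' roles and the causal strength.
sem_cause = Unit("sem:cause")
sem_effect = Unit("sem:effect")
sem_strength = Unit("sem:causal-strength")

# P1 and P2 jointly cause something: both connect to one cause group,
# which is tagged by the 'cause' semantic unit.
connect(p1, cause_group)
connect(p2, cause_group)
connect(cause_group, sem_cause)

# P3 is an effect: its unit connects to an effect group tagged 'effect'.
connect(p3, effect_group)
connect(effect_group, sem_effect)

# The P1/P2 group causes P3: cause and effect groups connect to a
# higher-level CE group, which codes the strength of the causal relation.
connect(cause_group, ce_group)
connect(effect_group, ce_group)
connect(ce_group, sem_strength)
```

Representing grouping with dedicated units, rather than with extra role bindings on each proposition, keeps the propositions themselves unchanged while letting the causal structure be read off from the group units' connections.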

