Varying variation: the effects of within- versus across-feature differences on relational category learning.

Livins KA, Spivey MJ, Doumas LA - Front Psychol (2015)

Bottom Line: As a result, the way that they interact with feature variation is unclear. Both sets of results indicated that learning a relational category from a training set with a lower amount of variation is easier, but that learning from a training set with increased within-feature variation is significantly less challenging than learning from a set with increased across-feature variation. These results support the claim that, like feature-based category-learning, relational category-learning is sensitive to the type of feature variation in the training set.


Affiliation: Department of Cognitive Science, University of California, Merced, Merced, CA USA.

ABSTRACT
Learning of feature-based categories is known to interact with feature variation in a variety of ways, depending on the type of variation (e.g., Markman and Maddox, 2003). However, relational categories are distinct from feature-based categories in that they determine membership based on structural similarities. As a result, the way that they interact with feature variation is unclear. This paper explores both experimental and computational data and argues that, despite its reliance on structural factors, relational category-learning should still be affected by the type of feature variation present during the learning process. It specifically suggests that within-feature and across-feature variation should produce different learning trajectories due to a difference in representational cost. The paper then uses the DORA model (Doumas et al., 2008) to discuss how this account might function in a cognitive system before presenting an experiment aimed at testing this account. The experiment was a relational category-learning task and was run on human participants and then simulated in DORA. Both sets of results indicated that learning a relational category from a training set with a lower amount of variation is easier, but that learning from a training set with increased within-feature variation is significantly less challenging than learning from a set with increased across-feature variation. These results support the claim that, like feature-based category-learning, relational category-learning is sensitive to the type of feature variation in the training set.



Figure 2: An example of a hypothetical training set involving a base variation set (A), a within-feature variation set (B), and an across-feature variation set (C).

Mentions: For instance, imagine that one needs to learn a relational category that involves the relative spatial locations of shapes, such that membership is defined by whether one shape occludes the other at the occluded shape's top right. Further imagine that while these shapes can be circles or squares, the actual shapes (i.e., which shape is in which location) are non-predictive of category membership; only the relational location structure is of importance (see Figure 2A for an example of what a training set for this category might look like). In this case, shape is a varying feature that is unimportant to category membership. Increased within-feature variation might mean increasing the number of shapes that could be used in the exemplars (so, instead of just circles and squares, the shapes might be circles, squares, or triangles; see Figure 2B). Alternatively, increased across-feature variation might be achieved by increasing the overall number of features that vary across exemplars: the shapes might still vary, but now their colors might vary as well (see Figure 2C).
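To make the three conditions concrete, the following is a minimal Python sketch of how such hypothetical training sets could be generated. It is illustrative only: the function name, feature pools, and exemplar structure are assumptions for exposition, not the materials or code used in the actual experiment or in DORA.

```python
import random

# Illustrative sketch (not the study's materials): each exemplar is an
# occluder/occluded pair. The category-defining relation ("occluder sits at the
# top right of the occluded shape") is constant; only irrelevant surface
# features vary, and how they vary depends on the condition.

def make_training_set(condition, n_exemplars=8, seed=0):
    """Generate hypothetical exemplars for one variation condition.

    condition:
      "base"   - shapes drawn from {circle, square}, color held constant (Figure 2A)
      "within" - more values on the same feature: {circle, square, triangle} (Figure 2B)
      "across" - an additional feature varies: shape AND color (Figure 2C)
    """
    rng = random.Random(seed)

    if condition == "base":
        shapes, colors = ["circle", "square"], ["black"]
    elif condition == "within":
        shapes, colors = ["circle", "square", "triangle"], ["black"]
    elif condition == "across":
        shapes, colors = ["circle", "square"], ["black", "white"]
    else:
        raise ValueError(f"unknown condition: {condition}")

    exemplars = []
    for _ in range(n_exemplars):
        exemplars.append({
            "occluder": {"shape": rng.choice(shapes), "color": rng.choice(colors)},
            "occluded": {"shape": rng.choice(shapes), "color": rng.choice(colors)},
            # The relational structure is identical in every exemplar.
            "relation": "occludes-at-top-right",
        })
    return exemplars


if __name__ == "__main__":
    for condition in ("base", "within", "across"):
        print(condition, make_training_set(condition, n_exemplars=2))
```

Under this sketch, the within-feature set enlarges the value pool of a single irrelevant feature, while the across-feature set leaves each pool small but adds a second varying feature, mirroring the contrast between Figures 2B and 2C.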


Varying variation: the effects of within- versus across-feature differences on relational category learning.

Livins KA, Spivey MJ, Doumas LA - Front Psychol (2015)

An example of a hypothetical training set involving a base variation set (A), a within-feature variation set (B), and an across-feature variation set (C).
© Copyright Policy - open-access
Related In: Results  -  Collection

License
Show All Figures
getmorefigures.php?uid=PMC4321646&req=5

Figure 2: An example of a hypothetical training set involving a base variation set (A), a within-feature variation set (B), and an across-feature variation set (C).
Mentions: For instance, imagine that one needs to learn a relational category that involves the relative spatial locations of shapes such that membership is defined by whether one shape occludes the other at the top right of the occluded shape. Further imagine that while these shapes can be circles or squares, the actual shapes (i.e., which shape is in which location) is non-predictive of category membership – only the relational location structure is of importance (see Figure 2A for an example of what a training set for this category might look like). In this case, shape is a varying feature that is unimportant to category membership. Increased within-feature variation might mean increasing the number of shapes that could be used in the exemplars (so, instead of just circles and squares, the shapes might be circles, squares, or triangles; see Figure 2B). Alternatively, increased across-feature variation might be achieved by increasing the overall number of features that varied across exemplars. Thus, the shapes might vary, but now their colors might vary also (see Figure 2C).

Bottom Line: As a result, the way that they interact with feature variation is unclear.Both sets of results indicated that learning a relational category from a training set with a lower amount of variation is easier, but that learning from a training set with increased within-feature variation is significantly less challenging than learning from a set with increased across-feature variation.These results support the claim that, like feature-based category-learning, relational category-learning is sensitive to the type of feature variation in the training set.

View Article: PubMed Central - PubMed

Affiliation: Department of Cognitive Science, University of California, Merced, Merced, CA USA.

ABSTRACT
Learning of feature-based categories is known to interact with feature-variation in a variety of ways, depending on the type of variation (e.g., Markman and Maddox, 2003). However, relational categories are distinct from feature-based categories in that they determine membership based on structural similarities. As a result, the way that they interact with feature variation is unclear. This paper explores both experimental and computational data and argues that, despite its reliance on structural factors, relational category-learning should still be affected by the type of feature variation present during the learning process. It specifically suggests that within-feature and across-feature variation should produce different learning trajectories due to a difference in representational cost. The paper then uses the DORA model (Doumas et al., 2008) to discuss how this account might function in a cognitive system before presenting an experiment aimed at testing this account. The experiment was a relational category-learning task and was run on human participants and then simulated in DORA. Both sets of results indicated that learning a relational category from a training set with a lower amount of variation is easier, but that learning from a training set with increased within-feature variation is significantly less challenging than learning from a set with increased across-feature variation. These results support the claim that, like feature-based category-learning, relational category-learning is sensitive to the type of feature variation in the training set.

No MeSH data available.