
Grouping by Similarity Helps Concept Learning.

Cognitive Science (2013)

Abstract
Erik Weitnauer (eweitnau@techfak.uni-bielefeld.de), CITEC, Bielefeld University, Universitätsstr. 21-23, 33615 Bielefeld, Germany
Paulo F. Carvalho (pcarvalh@indiana.edu), Department of Psychological and Brain Sciences, 1101 E 10th St, Bloomington, IN 47405 USA
Robert L. Goldstone (rgoldsto@indiana.edu), Department of Psychological and Brain Sciences, 1101 E 10th St, Bloomington, IN 47405 USA
Helge Ritter (helge@techfak.uni-bielefeld.de), CITEC, Bielefeld University, Universitätsstr. 21-23, 33615 Bielefeld, Germany

In inductive learning, the order in which concept instances are presented plays an important role in learning performance. Theories predict that interleaving instances of different concepts is especially beneficial if the concepts are highly similar to each other, whereas blocking instances belonging to the same concept provides an advantage for learning low-similarity concept structures. This leaves open the question of the relative influence of similarity on interleaved versus blocked presentation. To answer this question, we pit within- and between-category similarity effects against each other in a rich categorization task called Physical Bongard Problems. We manipulate the similarity of instances shown temporally close to each other with blocked and interleaved presentation. The results indicate a stronger effect of similarity on interleaving than on blocking. They further show a large benefit of comparing similar between-category instances on concept learning tasks where the feature dimensions are not known in advance but have to be constructed.

Keywords: category learning; order effects; similarity

Introduction

Inductive learning is an essential cognitive ability which, by abstracting from specific examples, allows the transfer of experience to new, similar situations.
There is a significant body of evidence from cognitive psychology suggesting that comparison of multiple cases represents a particularly promising avenue for inductively learning difficult, relational concepts (Loewenstein & Gentner, 2005). Comparison not only takes representations as inputs to establish similarities, but also uses perceived similarities to establish new representations (Hofstadter, 1996; Medin, Goldstone, & Gentner, 1993; Mitchell, 1993). When we compare entities, our understanding of the entities changes, and this may turn out to be a far more important consequence of comparison than simply deriving an assessment of similarity. In this paper, we are interested in identifying optimal ways of organizing these comparisons, and the kinds of cases that should be optimally compared.

One major line of argument is that comparing instances of a concept with very dissimilar features should lead to the best induction and generalization for the concept. If comparison serves to highlight commonalities between instances of the same concept while de-emphasizing differences, comparing instances that share irrelevant features could result in those features being retained in a learner's mental representation. This notion, called "conservative generalization" by Medin and Ross (1989), is that people will generalize as minimally as possible, preserving shared details unless there is a compelling reason to discard them. This, in turn, could limit generalizability to new, dissimilar cases. Some research is consistent with this conclusion. For example, Halpern, Hansen, and Riefer (1990) asked students to read scientific passages that included either "near" (superficially similar) or "far" (superficially dissimilar) analogies. The passages that included far analogies led to superior retention, inference, and transfer compared to those featuring superficially similar comparisons, which showed no benefit at all.
The conservative generalization principle predicts that increasing the similarity of simultaneously presented instances from one category will inhibit people's ability to discover the rule that discriminates between the two categories. The true, discriminating rule will need to compete with many other possible hypotheses related to the many other features shared by the compared instances. By this account, decreasing the similarity of the compared instances that belong within a category will make it more likely that the proper grounds for generalization are inferred, by eliminating misleading common features that lead to incorrect categorization rules.

Results of Rost and McMurray (2009) on young infants learning to discriminate pairs of similar words point in the same direction. These authors found that increasing the within-category variability of the to-be-learned words, by having different speakers repeat them, increases the infants' ability to discriminate between the words. One of the potential explanations they give for their results is that young infants might still be unsure about what feature dimensions are relevant for the task and the variability in the irrelevant dimensions …
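The blocked and interleaved presentation schedules contrasted above can be sketched as follows. This is a minimal illustration of the two orderings, not the paper's actual experiment code; the function names and the two-category example are assumptions introduced for clarity.

```python
def blocked_schedule(categories, n_per_category):
    """All instances of one category are shown before the next category begins."""
    order = []
    for cat in categories:
        order.extend([cat] * n_per_category)
    return order

def interleaved_schedule(categories, n_per_category):
    """Categories alternate on successive trials."""
    return [cat for _ in range(n_per_category) for cat in categories]

print(blocked_schedule(["A", "B"], 3))      # ['A', 'A', 'A', 'B', 'B', 'B']
print(interleaved_schedule(["A", "B"], 3))  # ['A', 'B', 'A', 'B', 'A', 'B']
```

Under a blocked schedule, temporally adjacent instances mostly belong to the same category (inviting within-category comparison); under an interleaved schedule, adjacent instances belong to different categories (inviting between-category discrimination).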