
Larry G. Maguire

Essays on the meaning and purpose of daily work

Analogical Thinking: A Method For Solving Problems


13th May 2019 by Larry G. Maguire 2 Comments

How To Solve Problems By Analogy

The ability to solve problems is an essential skill for survival and growth in the fast-paced, moment-to-moment shifting of modern society. No matter the domain of expertise or work, challenges present themselves at an ever-increasing rate. And so it should be, for what is a life worth living if we never have problems to solve? We must accept that challenges are inherent in life, and use our imagination and ingenuity to find solutions. Creativity and high performance require it. Although solving problems is never as simple as following a linear process, generating solutions through lateral thinking is a skill we can cultivate, and in this week's article I'm taking a look at a couple of examples of analogical thinking in practice. Bear in mind, however, that switching off entirely from the problem can often be the best route to the solution you need.

When I was a kid, growing up in the suburbs of Dublin City, we'd play in the grounds of an old farmhouse that stood in the middle of the housing estate. Cleavers [1], wild grasses and other naturally occurring local plants grew wild on the grounds. We called Cleavers “sticklebacks” because they had little hooks all over that made them stick to our clothes. We would pull bunches of them and throw them at each other for fun.

Many plants growing wild in the countryside have evolved this ability to latch on to other material: walls, trees, animal fur, other plants and the backs of children's jumpers. Ordinarily, as adults, we don't pass any comment other than perhaps, “isn't that clever”. But in 1941, as George de Mestral [2] walked in the Jura Mountains with his dog, the clever ability of the Xanthium strumarium seed pods [3] to attach themselves to his clothes and his dog's fur captured his interest. Little did he realise that this determined little seed pod would be the foundation for what would become a multimillion-dollar business.

George de Mestral, Inventor

George de Mestral was born into a middle-class Swiss family in June 1907. His father, Albert, was a civil engineer and no doubt had a significant influence on the developing mind of his son, with young George showing his creative ability by designing and patenting a toy aeroplane at age 12. De Mestral attended the highly respected Ecole Polytechnique Federale de Lausanne on the shores of Lake Geneva, Switzerland, where he studied engineering. On completing his studies, he secured employment in a Swiss engineering company, where he honed his technical skills.

De Mestral also enjoyed hunting in the mountains, and on one particular occasion in 1941, as the story goes, he was prompted to investigate the means by which those stubborn cockleburs adhered to his clothes. Upon examining the seed pod under a microscope, he noticed hundreds of tiny hooks covering the outer husk. It likely took many exposures to the stubborn cocklebur to prompt his inquiry; given his inventive mind, however, he somehow made a connection between what he observed and its possible commercial use.

George de Mestral, creator of the Velcro hook-and-loop fastening system, used analogical thinking.

He thought that if he could somehow employ the principle used by the cocklebur to fabricate a synthetic fastening system, he would have a solution to the problems occurring with conventional fasteners of the time. De Mestral conceptualised what he wanted to create, but coming up with a practical design took considerable time. Clothing manufacturers didn't take him seriously and he encountered many practical challenges in bringing his idea to life. After many attempts, he eventually found a manufacturer in Lyon, France who was willing to work with him and together they combined the toughness of nylon with cotton to create the first working prototype.

With the new material, he was able to recreate the tiny hooks he’d observed under the microscope all those years before. Proving his concept, he soon applied for and received a patent for his invention and launched his manufacturing business, which he named Velcro [4], a combination of the French words “velours” (velvet) and “crochet” (hook).

It took nearly fifteen years of research before he was finally able to successfully reproduce the natural fastening system he had seen on the Xanthium strumarium seed pods, but he stuck to his idea – a testament to his belief in the solution he had found.

De Mestral's Use Of Analogical Thinking

Despite its widespread use today, Velcro was not an immediate commercial success for de Mestral. However, by the early 1960s, with the race to reach the Moon, Velcro was in the right place at the right time. With the developing needs of the aerospace industry and the successful use of Velcro by NASA, the clothing and sportswear industries also realised the possibilities de Mestral's product presented. Soon Velcro was selling over 60 million meters of hook-and-loop fastener per year, and de Mestral became a multimillionaire.

Whether he realised it or not, de Mestral used what today we term “analogical thinking” or analogical reasoning: the process of solving a problem by finding a similar problem with a known solution and applying that solution to the current situation.

An analogy is a comparison between two objects, or systems of objects, that highlights respects in which they are thought to be similar. Analogical reasoning is any type of thinking that relies upon an analogy. (Stanford Encyclopedia of Philosophy [5])

What Is Analogical Thinking?

The world-renowned writer and philosopher Edward de Bono [6], creator of the term “lateral thinking”, says that the analogy technique for generating ideas is a means to get some movement going, to start a train of thought. The challenge for us, when presented with a difficult problem, is that we can become hemmed in by traditional, habitual thinking. Thinking laterally through the use of analogy helps bring about a shift away from this habitual thinking.

In his book Lateral Thinking [7], first published almost fifty years ago, de Bono suggests that lateral thinking, of which thinking by analogy is an aspect, is the opposite of traditional vertical thinking, though he also says that the two can work together rather than in opposition.

Thinking by analogy helps bring about creativity and insight, and it is a system of thought that can be learned. An analogy is a simple story that becomes an analogy when it is compared to the current problematic condition. The story employed must have a process that we can follow, easily understand, and apply to the present circumstance. For example, you might criticise a tradesperson for creating such a mess in your home, and he may suggest that to make an omelette he has to break some eggs.

Yeah, says you. Just please don't break them all over the good carpet!

Analogical Thinking Experiment

In 1980, Mary Gick and Keith Holyoak at the University of Michigan investigated the role of analogical thinking in the psychological mechanisms that underlie creative insight. In their study [8], they noted that anecdotal reports from creative scientists and mathematicians suggest that the development of new theories often depends on noticing and applying an analogy drawn from a different domain of knowledge. Analogies cited included the hydraulic model of the blood circulatory system and the planetary model of the atomic structure of matter.

The fortress story used in the analogical thinking experiment

In their experiment, Gick and Holyoak presented subjects first with a military story. In the story, an army General wishes to capture a fortress located in the centre of a country to which there are several access roads. All have been mined so that while small groups of men can pass through safely, a large number will detonate the mines. A full-scale direct attack is therefore impossible. The General’s solution is to divide his army into small groups, send each group to the head of a different road, and have the groups converge simultaneously on the fortress.

Participants are then asked to find a solution to the following medical problem:

A doctor is faced with a patient who has a malignant tumour in his stomach. It is impossible to operate on the patient, but unless the tumour is destroyed the patient will die. There is an x-ray that can be used to destroy the tumour but unfortunately, at the required intensity, the surrounding healthy tissue will also be destroyed. At a lower intensity, the rays are harmless to healthy tissue, but they will not affect the tumour either. What type of procedure might be used to destroy the tumour with the rays, and at the same time avoid killing the healthy tissue?

The Results

The researchers were interested to know how participants would represent the analogical relationship between the story and the problem and generate a workable solution. For participants who didn't receive the military story, only 10% managed to generate the solution to the problem. This percentage rose to 30% for those who received the story in advance of the problem. Interestingly, the result climbed to 75% when participants read more than one analogous story.

Results from the study provide experimental evidence that solutions to problems can be generated using an analogous problem from a very different domain. However, the researchers caution that solving problems by analogy may not deliver positive results where the problems are more complex.

Success is also dependent on the individual's exposure to similar conditions in the past, with increased exposure likely to yield more consistent results in solving similar problems.

The Apple Analogy

My sons are aged 11 and 12, and like most kids they regularly find mathematics challenging. Mathematics is an abstract system of thinking, and I can understand the difficulty children may have, from time to time, getting to grips with it. The terminology is alien, and they need to build out concepts and schemas for what is essentially a new and complex language.

They are learning how to work with fractions, percentages and ratios and most of the time they navigate their way successfully, but occasionally they get stumped and ask for help. When they do I always bring in the apple analogy.

One maths question asked my son to divide an amount of money between John and Edward in the ratio of 12 to 9 respectively. My son reckoned that wasn't a fair split. I told him John worked harder than Edward and we proceeded.

I asked him first to consider the amount of money as an apple and asked him what we would need to do to share the apple so that John got 12 pieces and Edward got 9. He correctly said: slice the apple into 21 equal pieces, give John 12 and Edward 9. So now, I said, can we split this money up in the same way? We were on the pig's back.

I always use the apple analogy for the kids' maths problems and it works very well.
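For what it's worth, the apple analogy maps directly onto the arithmetic, and it even translates into a few lines of code. Here is a minimal sketch in Python (the €63 amount and the function name are my own, purely for illustration): slice the "apple" into 12 + 9 = 21 equal pieces, then deal the pieces out.

```python
from fractions import Fraction

def split_in_ratio(amount, parts):
    """Share `amount` in the given ratio: slice the 'apple' into
    sum(parts) equal pieces, then give each person their pieces."""
    total = sum(parts)  # 12 + 9 = 21 pieces
    return [amount * Fraction(p, total) for p in parts]

# Split a hypothetical 63 euro between John and Edward in the ratio 12:9
john, edward = split_in_ratio(63, [12, 9])
print(john, edward)  # 36 27
```

Using exact fractions rather than floats keeps the shares adding back up to the original amount, which is exactly the property the apple makes obvious to a child: no pieces go missing.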

Final Thoughts

I remember, about ten years ago, my business was in the toilet and I was under enormous financial stress. Every day was a fight with myself and everyone around me. Most days I managed things as well as possible, but other days I was beaten. I can safely say that no amount of input from those who could see what I couldn't, and no amount of analogical thinking, would have helped me. I was in a prolonged state of hyperactivity and awareness of the problems. Neurochemically, my brain simply could not operate in my favour. When I look back now, I realise that that set of circumstances simply needed to burn itself out.

Actively trying to solve an apparent problem can often be problematic in itself. By virtue of our focus on the problem, we often can't see the solutions, and no amount of thinking can relieve us from the predicament. Analogical thinking has a firm place in creative pursuits; however, it can only be successfully employed when we are in a calm and collected state of mind.

Therefore, I believe that our job in performing to the highest level, no matter what our domain of expertise, is to cultivate a stable and measured state of mind. In that place, we can encourage access to parts of the mind that lie beyond conscious thought and receive answers to life's most complex problems.

Article references

  1. Wildflowers of Ireland. (n.d.). Cleavers. Retrieved May 12, 2019, from http://www.wildflowersofireland.net/plant_detail.php?id_flower=64&wildflower=Cleavers
  2. Lemelson-MIT Program. (n.d.). George de Mestral. Retrieved May 12, 2019, from https://lemelson.mit.edu/resources/george-de-mestral
  3. The Remarkable Cocklebur. (n.d.). Retrieved May 12, 2019, from https://www2.palomar.edu/users/warmstrong/plapr98.htm
  4. Swearingen, J. (n.d.). An Idea That Stuck: How George de Mestral Invented the Velcro Fastener. Retrieved May 12, 2019, from http://nymag.com/vindicated/2016/11/an-idea-that-stuck-how-george-de-mestral-invented-velcro.html
  5. Bartha, P. (2019, January 25). Analogy and Analogical Reasoning. Stanford Encyclopedia of Philosophy. Retrieved May 12, 2019, from https://plato.stanford.edu/entries/reasoning-analogy/
  6. De Bono, E. (n.d.). Dr. Edward de Bono. Retrieved May 13, 2019, from https://www.edwdebono.com
  7. De Bono, E. (2016). Lateral Thinking: A Textbook of Creativity. London: Penguin Life.
  8. Gick, M. L., & Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12(3), 306–355.

Author | Larry G. Maguire




What Is Analogical Reasoning? | Definition & Examples

Published on April 9, 2024 by Magedah Shabo. Revised on August 22, 2024.

Analogical reasoning involves identifying similarities between different situations or concepts to make inferences or solve problems. It is sometimes classified as a subcategory of inductive reasoning .

Using analogical reasoning, we can draw upon existing knowledge and patterns to understand new or unfamiliar situations, applying solutions or insights from one context to another.

Analogy-based reasoning plays an important role in problem-solving, decision-making, and creative thinking.


Table of contents

  • What is analogical reasoning?
  • Analogical reasoning examples
  • False analogies and weak analogies
  • Frequently asked questions about analogical reasoning

Analogical reasoning occurs when we draw conclusions about a new situation based on similarities with a known situation.

This cognitive process is crucial to our ability to recognize patterns and apply existing knowledge to novel situations. However, analogical reasoning can be impacted by cognitive biases, or common irrational thought patterns:

  • Confirmation bias: Focusing on confirming beliefs may result in paying selective attention to analogical similarities and ignoring important differences.
  • Representativeness heuristic: Judging analogies based on their resemblance to prototypical examples may lead to overlooking nuance and variation.

Analogical vs inductive reasoning

Analogical reasoning is sometimes classified as a specific type of inductive reasoning, but some consider it a distinct form of reasoning altogether.

  • Inductive reasoning involves making generalizations based on specific observations or evidence.
  • Analogical reasoning involves identifying similarities between different situations to make inferences or solve problems.

Both are forms of ampliative reasoning, which is defined by extrapolating insights from one context to another, drawing connections, and identifying similarities between disparate situations.

In contrast to deductive reasoning, which involves deriving specific conclusions from general principles, all forms of ampliative reasoning introduce new propositions that extend beyond the information provided in the premises.

Abductive reasoning is a third, distinct form of ampliative reasoning. It involves generating plausible explanations for observed phenomena based on available evidence.

Examples of analogical reasoning can be found in a wide variety of contexts, from academic and professional settings to everyday life.

Analogical reasoning can also be used persuasively in political discourse to draw parallels between a familiar situation or concept and a more novel proposal.

Many aspects of everyday life are affected by analogical reasoning:

  • Decision-making: Comparing different options and predicting their potential outcomes based on comparable situations (e.g., deciding whether to invest in a new business venture by comparing it to previous successful investments)
  • Persuasion: Relating something novel to something familiar to make it seem appealing and easy to understand (e.g., encouraging a child to try a new food by comparing it to something they already enjoy)
  • Education: Drawing parallels between unfamiliar ideas and familiar ones to facilitate comprehension and retention (e.g., explaining the concept of fractions by comparing numbers to slices of pizza)
  • Creativity: Innovating by exploring connections between seemingly unrelated concepts or domains (e.g., drawing inspiration for an airplane’s design from birds’ wings)

Errors in analogical reasoning can result in two similar, yet distinct logical fallacies:

  • False analogy: Comparing the legality of kitchen knives to the legality of automatic firearms is a false analogy because the purposes and potential misuses of these items are fundamentally different.
  • Weak analogy: Comparing a nation’s economy to a household budget may seem relevant, but the vast differences in scale and complexity render the analogy weak in some contexts.

Both are considered informal logical fallacies because the error is related to flawed content rather than a flawed structure.

An example of analogical reasoning in everyday life is the expression “Love is a battlefield.” This analogy emphasizes the challenges, conflicts, and emotional turmoil that can occur in relationships. It suggests that navigating romantic relationships requires strategy, resilience, and sometimes sacrifice, much like a physical battle.

To determine the strength of analogical reasoning, the most important question to ask is whether the similarities between the two situations or entities being compared are relevant and meaningful to the conclusion being drawn.

Analogical reasoning and the representative heuristic both involve making judgments based on similarities between objects or situations, but there is a key difference:

  • Analogical reasoning: A process of drawing conclusions or making inferences about a new or unfamiliar situation based on similarities with a known or familiar situation
  • Representative heuristic: A mental shortcut or rule of thumb used to make judgments based on how closely an object or situation resembles a typical example or prototype

Analogical reasoning is sometimes considered a subcategory of inductive reasoning because it involves generalizing from specific instances to derive broader principles or patterns. However, some argue that analogical reasoning is distinct from induction because it involves drawing conclusions based on similarities between cases rather than generalizing from specific instances.

Along with abductive reasoning, analogical and inductive reasoning are forms of ampliative reasoning (in contrast to deductive reasoning).



Analogical Problem-Solving

From the class: Intro to Brain and Behavior

Analogical problem-solving is a cognitive process that involves using the knowledge and solutions from a previous, similar situation to address a new problem. This method leverages the structural similarities between the two problems, allowing individuals to draw parallels and apply learned strategies from past experiences to effectively tackle current challenges. It plays a crucial role in decision-making by facilitating insight and creativity when direct solutions are not readily available.


5 Must Know Facts For Your Next Test

  • Analogical problem-solving can enhance creativity by allowing individuals to see connections between seemingly unrelated situations.
  • Research shows that people who are trained to recognize analogies tend to perform better in solving complex problems.
  • This method can be influenced by how well the initial problem is understood and represented, as accurate representation increases the likelihood of successful analogical reasoning.
  • Analogical problem-solving is not only used in everyday decision-making but also in fields like science and engineering, where past solutions can guide new innovations.
  • Individuals may rely on analogies more heavily under conditions of uncertainty or when they lack sufficient information about the new problem.

Review Questions

  • Analogical problem-solving enhances creativity by encouraging individuals to draw connections between different situations, leading to innovative solutions. When faced with a new challenge, leveraging knowledge from previous experiences allows for diverse approaches that may not be immediately obvious. This creative process can result in novel ideas and solutions that stem from recognizing patterns and similarities across various contexts.
  • Problem representation plays a critical role in analogical problem-solving because how a problem is understood affects the ability to find relevant analogies. An accurate and clear representation allows individuals to see structural similarities between problems, which is essential for successful analogical reasoning. If the current problem is misrepresented or poorly understood, it becomes challenging to identify applicable past experiences, ultimately hindering effective problem-solving.
  • Relying on analogical problem-solving in scientific research and innovation can lead to significant advancements by allowing researchers to apply established theories and solutions to new problems. This approach fosters interdisciplinary thinking and enables scientists to leverage prior knowledge creatively. However, it also carries risks; if the analogies drawn are weak or misleading, it could result in incorrect conclusions or ineffective solutions. Therefore, while beneficial, it is essential for researchers to critically assess the relevance of their analogies to ensure valid outcomes.

Related terms

Heuristic: A mental shortcut or rule of thumb that simplifies decision-making processes, often leading to quicker but sometimes less accurate solutions.

Problem representation: The mental depiction or conceptualization of a problem, which significantly affects how one approaches finding a solution.

Insight: A sudden realization or understanding of the solution to a problem, often occurring after a period of contemplation or incubation.


© 2024 Fiveable Inc. All rights reserved.



The Development of Analogical Problem Solving


The transfer of existing knowledge to new but closely related problems and situations has been a topic of continuing interest to psychologists throughout the 20th century. Historically, this kind of transfer, here called analogical reasoning, has been studied in diverse theoretical contexts under a variety of labels. For example, generalization due to identical (or common) elements (e.g., Cantor, 1965; Hull, 1939; Spence, 1937, 1942; Thorndike, 1923; Thorndike & Woodworth, 1901), resonance effects of signals (e.g., Duncker, 1945; Luchins, 1942; Sobel, 1939), and the mapping of structural relations from the known to the new (e.g., Inhelder & Piaget, 1958; Judd, 1908; Spearman, 1923). More recently, researchers in three relatively insular disciplines have focused on analogical reasoning. First, cognitive scientists have proposed that analogy plays a principal role in the induction mechanisms of intelligent systems, both biological and electronic. Thus models and simulations have appeared with increasing frequency in that literature (e.g., Burstein, 1986; Carbonell, 1986; Falkenhainer, Forbus, & Gentner, 1986; Holland, Holyoak, Nisbett, & Thagard, 1986; Sweller, 1988; Winston, 1980, 1984). Second, the role of analogy in mathematical problem solving has attracted considerable attention (e.g., Cooper & Sweller, 1987; Kintsch & Greeno, 1985; Novick, 1988; Reed, 1987; Reed, Dempster, & Ettinger, 1985; Ross, 1987; Silver, 1981). Finally, some psychologists have focused their research efforts on attempts to understand the development of analogical reasoning processes (e.g., Alexander, Willson, White, & Fuqua, 1987; Brown, Kane, & Echols, 1986; Gentner, 1977; Gentner & Toupin, 1986; Gholson, Eymard, Morgan, & Kamhi, 1987; Holyoak, 1984; Holyoak, Junn, & Billman, 1984; Sternberg, 1985; Sternberg & Rifkin, 1979).
This convergence of research and theory reflects an emerging theoretical consensus in which analogical reasoning is taken as an essential feature of learning and problem solving (Brown & Campione, 1984; Gentner, 1989; Gholson, Eymard, Long, Morgan, & Leeming, 1988), playing an important role in, among other things, classroom learning (Brown, 1989; Brown et al., 1986; Sternberg, 1985) and the various enterprises of science (Gentner, 1983; Hesse, 1966; Nersessian, 1984).


© 2024 Informa UK Limited


Analogy and Analogical Reasoning

An analogy is a comparison between two objects, or systems of objects, that highlights respects in which they are thought to be similar. Analogical reasoning is any type of thinking that relies upon an analogy. An analogical argument is an explicit representation of a form of analogical reasoning that cites accepted similarities between two systems to support the conclusion that some further similarity exists. In general (but not always), such arguments belong in the category of ampliative reasoning, since their conclusions do not follow with certainty but are only supported with varying degrees of strength. However, the proper characterization of analogical arguments is subject to debate (see §2.2).

Analogical reasoning is fundamental to human thought and, arguably, to some nonhuman animals as well. Historically, analogical reasoning has played an important, but sometimes mysterious, role in a wide range of problem-solving contexts. The explicit use of analogical arguments, since antiquity, has been a distinctive feature of scientific, philosophical and legal reasoning. This article focuses primarily on the nature, evaluation and justification of analogical arguments. Related topics include metaphor, models in science, and precedent and analogy in legal reasoning.

Contents

1. Introduction: the many roles of analogy
2.1 Examples
2.2 Characterization
2.3 Plausibility
2.4 Analogical inference rules
3.1 Commonsense guidelines
3.2 Aristotle’s theory
3.3 Material criteria: Hesse’s theory
3.4 Formal criteria: the structure-mapping theory
3.5 Other theories
3.6 Practice-based approaches
4.1 Deductive justification
4.2 Inductive justification
4.3 A priori justification
4.4 Pragmatic justification
5.1 Analogy and confirmation
5.2 Conceptual change and theory development
Online manuscript
Related entries

Analogies are widely recognized as playing an important heuristic role, as aids to discovery. They have been employed, in a wide variety of settings and with considerable success, to generate insight and to formulate possible solutions to problems. According to Joseph Priestley, a pioneer in chemistry and electricity,

analogy is our best guide in all philosophical investigations; and all discoveries, which were not made by mere accident, have been made by the help of it. (1769/1966: 14)

Priestley may be over-stating the case, but there is no doubt that analogies have suggested fruitful lines of inquiry in many fields. Because of their heuristic value, analogies and analogical reasoning have been a particular focus of AI research. Hájek (2018) examines analogy as a heuristic tool in philosophy.

Example 1 . Hydrodynamic analogies exploit mathematical similarities between the equations governing ideal fluid flow and torsional problems. To predict stresses in a planned structure, one can construct a fluid model, i.e., a system of pipes through which water passes (Timoshenko and Goodier 1970). Within the limits of idealization, such analogies allow us to make demonstrative inferences, for example, from a measured quantity in the fluid model to the analogous value in the torsional problem. In practice, there are numerous complications (Sterrett 2006).

At the other extreme, an analogical argument may provide very weak support for its conclusion, establishing no more than minimal plausibility. Consider:

Example 2 . Thomas Reid’s (1785) argument for the existence of life on other planets (Stebbing 1933; Mill 1843/1930; Robinson 1930; Copi 1961). Reid notes a number of similarities between Earth and the other planets in our solar system: all orbit and are illuminated by the sun; several have moons; all revolve on an axis. In consequence, he concludes, it is “not unreasonable to think, that those planets may, like our earth, be the habitation of various orders of living creatures” (1785: 24).

Such modesty is not uncommon. Often the point of an analogical argument is just to persuade people to take an idea seriously. For instance:

Example 3 . Darwin takes himself to be using an analogy between artificial and natural selection to argue for the plausibility of the latter:

Why may I not invent the hypothesis of Natural Selection (which from the analogy of domestic productions, and from what we know of the struggle of existence and of the variability of organic beings, is, in some very slight degree, in itself probable) and try whether this hypothesis of Natural Selection does not explain (as I think it does) a large number of facts…. (Letter to Henslow, May 1860, in Darwin 1903)

Here it appears, by Darwin’s own admission, that his analogy is employed to show that the hypothesis is probable to some “slight degree” and thus merits further investigation. Some, however, reject this characterization of Darwin’s reasoning (Richards 1997; Gildenhuys 2004).

Sometimes analogical reasoning is the only available form of justification for a hypothesis. The method of ethnographic analogy is used to interpret

the nonobservable behaviour of the ancient inhabitants of an archaeological site (or ancient culture) based on the similarity of their artifacts to those used by living peoples. (Hunter and Whitten 1976: 147)

For example:

Example 4 . Shelley (1999, 2003) describes how ethnographic analogy was used to determine the probable significance of odd markings on the necks of Moche clay pots found in the Peruvian Andes. Contemporary potters in Peru use these marks (called sígnales ) to indicate ownership; the marks enable them to reclaim their work when several potters share a kiln or storage facility. Analogical reasoning may be the only avenue of inference to the past in such cases, though this point is subject to dispute (Gould and Watson 1982; Wylie 1982, 1985). Analogical reasoning may have similar significance for cosmological phenomena that are inaccessible due to limits on observation (Dardashti et al. 2017). See §5.1 for further discussion.

As philosophers and historians such as Kuhn (1996) have repeatedly pointed out, there is not always a clear separation between the two roles that we have identified, discovery and justification. Indeed, the two functions are blended in what we might call the programmatic (or paradigmatic ) role of analogy: over a period of time, an analogy can shape the development of a program of research. For example:

Example 5 . An ‘acoustical analogy’ was employed for many years by certain nineteenth-century physicists investigating spectral lines. Discrete spectra were thought to be

completely analogous to the acoustical situation, with atoms (and/or molecules) serving as oscillators originating or absorbing the vibrations in the manner of resonant tuning forks. (Maier 1981: 51)

Guided by this analogy, physicists looked for groups of spectral lines that exhibited frequency patterns characteristic of a harmonic oscillator. This analogy served not only to underwrite the plausibility of conjectures, but also to guide and limit discovery by pointing scientists in certain directions.

More generally, analogies can play an important programmatic role by guiding conceptual development (see §5.2 ). In some cases, a programmatic analogy culminates in the theoretical unification of two different areas of inquiry.

Example 6 . Descartes’s (1637/1954) correlation between geometry and algebra provided methods for systematically handling geometrical problems that had long been recognized as analogous.

A very different relationship between analogy and discovery exists when a programmatic analogy breaks down, as was the ultimate fate of the acoustical analogy. That atomic spectra have an entirely different explanation became clear with the advent of quantum theory. In this case, novel discoveries emerged against background expectations shaped by the guiding analogy. There is a third possibility: an unproductive or misleading programmatic analogy may simply become entrenched and self-perpetuating as it leads us to “construct… data that conform to it” (Stepan 1996: 133). Arguably, the danger of this third possibility provides strong motivation for developing a critical account of analogical reasoning and analogical arguments.

Analogical cognition , which embraces all cognitive processes involved in discovering, constructing and using analogies, is broader than analogical reasoning (Hofstadter 2001; Hofstadter and Sander 2013). Understanding these processes is an important objective of current cognitive science research, and an objective that generates many questions. How do humans identify analogies? Do nonhuman animals use analogies in ways similar to humans? How do analogies and metaphors influence concept formation?

This entry, however, concentrates specifically on analogical arguments. Specifically, it focuses on three central epistemological questions:

  • What criteria should we use to evaluate analogical arguments?
  • What philosophical justification can be provided for analogical inferences?
  • How do analogical arguments fit into a broader inferential context (i.e., how do we combine them with other forms of inference), especially theoretical confirmation?

Following a preliminary discussion of the basic structure of analogical arguments, the entry reviews selected attempts to provide answers to these three questions. To find such answers would constitute an important first step towards understanding the nature of analogical reasoning. To isolate these questions, however, is to make the non-trivial assumption that there can be a theory of analogical arguments —an assumption which, as we shall see, is attacked in different ways by both philosophers and cognitive scientists.

2. Analogical arguments

2.1 Examples

Analogical arguments vary greatly in subject matter, strength and logical structure. In order to appreciate this variety, it is helpful to increase our stock of examples. First, a geometric example:

Example 7 (Rectangles and boxes). Suppose that you have established that of all rectangles with a fixed perimeter, the square has maximum area. By analogy, you conjecture that of all boxes with a fixed surface area, the cube has maximum volume.
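
Example 7's conjecture lends itself to a quick numerical sanity check. The following is a hypothetical brute-force sketch (function names, grid sizes, and tolerances are my own, not part of the entry): it searches rectangles of perimeter 4 and boxes of surface area 6.

```python
# Hypothetical numerical check of Example 7 (all names and grid sizes are
# my own): among rectangles of perimeter 4, the square should maximize area;
# among boxes of surface area 6, the cube should maximize volume.
import itertools

def best_rectangle(perimeter=4.0, steps=1000):
    # width w determines height h = perimeter/2 - w; search w on a grid
    return max((w * (perimeter / 2 - w), w)
               for w in (perimeter / 2 * i / steps for i in range(1, steps)))

def best_box(surface=6.0, steps=60):
    # search edge lengths a, b; c is fixed by the constraint
    # 2(ab + bc + ca) = surface, i.e., c = (surface/2 - ab) / (a + b)
    best = (0.0, (0.0, 0.0, 0.0))
    grid = [0.05 * k for k in range(1, steps)]
    for a, b in itertools.product(grid, repeat=2):
        c = (surface / 2 - a * b) / (a + b)
        if c > 0:
            best = max(best, (a * b * c, (a, b, c)))
    return best

area, w = best_rectangle()
vol, edges = best_box()
print(round(w, 3), round(vol, 3))  # best rectangle is the square; best box is (approximately) the cube
```

The search confirms, within grid resolution, that the optimum rectangle has equal sides and the optimum box has three (nearly) equal edges, which is the similarity the analogical conjecture trades on.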

Two examples from the history of science:

Example 8 (Morphine and meperidine). In 1934, the pharmacologist Schaumann was testing synthetic compounds for their anti-spasmodic effect. These drugs had a chemical structure similar to morphine. He observed that one of the compounds— meperidine , also known as Demerol —had a physical effect on mice that was previously observed only with morphine: it induced an S-shaped tail curvature. By analogy, he conjectured that the drug might also share morphine’s narcotic effects. Testing on rats, rabbits, dogs and eventually humans showed that meperidine, like morphine, was an effective pain-killer (Lembeck 1989: 11; Reynolds and Randall 1975: 273).

Example 9 (Priestley on electrostatic force). In 1769, Priestley suggested that the absence of electrical influence inside a hollow charged spherical shell was evidence that charges attract and repel with an inverse square force. He supported his hypothesis by appealing to the analogous situation of zero gravitational force inside a hollow shell of uniform density.

Finally, an example from legal reasoning:

Example 10 (Duty of reasonable care). In a much-cited case ( Donoghue v. Stevenson 1932 AC 562), the United Kingdom House of Lords found the manufacturer of a bottle of ginger beer liable for damages to a consumer who became ill as a result of a dead snail in the bottle. The court argued that the manufacturer had a duty to take “reasonable care” in creating a product that could foreseeably result in harm to the consumer in the absence of such care, and where the consumer had no possibility of intermediate examination. The principle articulated in this famous case was extended, by analogy, to allow recovery for harm against an engineering firm whose negligent repair work caused the collapse of a lift ( Haseldine v. CA Daw & Son Ltd. 1941 2 KB 343). By contrast, the principle was not applicable to a case where a workman was injured by a defective crane, since the workman had opportunity to examine the crane and was even aware of the defects ( Farr v. Butters Brothers & Co. 1932 2 KB 606).

2.2 Characterization

What, if anything, do all of these examples have in common? We begin with a simple, quasi-formal characterization. Similar formulations are found in elementary critical thinking texts (e.g., Copi and Cohen 2005) and in the literature on argumentation theory (e.g., Govier 1999, Guarini 2004, Walton and Hyra 2018). An analogical argument has the following form:

(1) \(S\) is similar to \(T\) in certain (known) respects.
(2) \(S\) has some further feature \(Q\).
(3) Therefore, \(T\) also has the feature \(Q\), or some feature \(Q^*\) similar to \(Q\).

(1) and (2) are premises. (3) is the conclusion of the argument. The argument form is ampliative ; the conclusion is not guaranteed to follow from the premises.

\(S\) and \(T\) are referred to as the source domain and target domain , respectively. A domain is a set of objects, properties, relations and functions, together with a set of accepted statements about those objects, properties, relations and functions. More formally, a domain consists of a set of objects and an interpreted set of statements about them. The statements need not belong to a first-order language, but to keep things simple, any formalizations employed here will be first-order. We use unstarred symbols \((a, P, R, f)\) to refer to items in the source domain and starred symbols \((a^*, P^*, R^*, f^*)\) to refer to corresponding items in the target domain. In Example 9 , the source domain items pertain to gravitation; the target items pertain to electrostatic attraction.

Formally, an analogy between \(S\) and \(T\) is a one-to-one mapping between objects, properties, relations and functions in \(S\) and those in \(T\). Not all of the items in \(S\) and \(T\) need to be placed in correspondence. Commonly, the analogy only identifies correspondences between a select set of items. In practice, we specify an analogy simply by indicating the most significant similarities (and sometimes differences).
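
The formal picture just described can be made concrete. The following toy Python encoding is entirely my own invention (the entry gives no code): it represents a domain as a set of objects plus accepted statements, and an analogy as a partial one-to-one mapping, using Example 9's gravitation/electrostatics pairing.

```python
# A minimal sketch of the formal picture, in my own (invented) encoding:
# a domain is a set of objects plus accepted statements about them, and an
# analogy is a partial one-to-one mapping between items of S and items of T.
# Domains follow Example 9: gravitation (source) vs. electrostatics (target).
source = {
    "objects": {"mass", "hollow_shell"},
    "statements": {"inverse_square_law", "zero_force_inside_shell"},
}
target = {
    "objects": {"charge", "hollow_charged_shell"},
    "statements": {"zero_force_inside_shell"},  # the force law is the conjecture
}

# The analogy places only a select set of items in correspondence.
mapping = {
    "mass": "charge",
    "hollow_shell": "hollow_charged_shell",
    "zero_force_inside_shell": "zero_force_inside_shell",
}

# one-to-one: no two source items may map to the same target item
assert len(set(mapping.values())) == len(mapping)
```

Note that `inverse_square_law` is deliberately left unmapped: it belongs to what is below called the neutral analogy, and its analogue in the target is precisely what Priestley's argument aims to make plausible.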

We can improve on this preliminary characterization of the argument from analogy by introducing the tabular representation found in Hesse (1966). We place corresponding objects, properties, relations and propositions side-by-side in a table of two columns, one for each domain. For instance, Reid’s argument ( Example 2 ) can be represented as follows (using \(\Rightarrow\) for the analogical inference):

  Earth \((S)\)        Mars \((T)\)
Known similarities:
  orbits the sun       orbits the sun
  has a moon           has moons
  revolves on axis     revolves on axis
  subject to gravity   subject to gravity
Inferred similarity:
  supports life   \(\Rightarrow\)   may support life

Hesse introduced useful terminology based on this tabular representation. The horizontal relations in an analogy are the relations of similarity (and difference) in the mapping between domains, while the vertical relations are those between the objects, relations and properties within each domain. The correspondence (similarity) between earth’s having a moon and Mars’ having moons is a horizontal relation; the causal relation between having a moon and supporting life is a vertical relation within the source domain (with the possibility of a distinct such relation existing in the target as well).

In an earlier discussion of analogy, Keynes (1921) introduced some terminology that is also helpful.

Positive analogy . Let \(P\) stand for a list of accepted propositions \(P_1 , \ldots ,P_n\) about the source domain \(S\). Suppose that the corresponding propositions \(P^*_1 , \ldots ,P^*_n\), abbreviated as \(P^*\), are all accepted as holding for the target domain \(T\), so that \(P\) and \(P^*\) represent accepted (or known) similarities. Then we refer to \(P\) as the positive analogy .

Negative analogy . Let \(A\) stand for a list of propositions \(A_1 , \ldots ,A_r\) accepted as holding in \(S\), and \(B^*\) for a list \(B_1^*, \ldots ,B_s^*\) of propositions holding in \(T\). Suppose that the analogous propositions \(A^* = A_1^*, \ldots ,A_r^*\) fail to hold in \(T\), and similarly the propositions \(B = B_1 , \ldots ,B_s\) fail to hold in \(S\), so that \(A, {\sim}A^*\) and \({\sim}B, B^*\) represent accepted (or known) differences. Then we refer to \(A\) and \(B\) as the negative analogy .

Neutral analogy . The neutral analogy consists of accepted propositions about \(S\) for which it is not known whether an analogue holds in \(T\).

Finally we have:

Hypothetical analogy . The hypothetical analogy is simply the proposition \(Q\) in the neutral analogy that is the focus of our attention.
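
Keynes's partition can be illustrated schematically. The following hypothetical Python sketch is my own encoding (for brevity it tracks only source-side propositions, so the \(B^*\) side of the negative analogy is omitted), applied to Reid's Example 2.

```python
# Hypothetical sketch of Keynes's partition (the set-based encoding is mine).
# Only source-side propositions are tracked, so the B-side of the negative
# analogy (propositions holding in T but failing in S) is omitted.

def keynes_partition(accepted_S, accepted_T, rejected_T):
    positive = accepted_S & accepted_T              # P: accepted in both domains
    negative = accepted_S & rejected_T              # A: holds in S, fails in T
    neutral = accepted_S - accepted_T - rejected_T  # status in T unknown
    return positive, negative, neutral

# Reid's Example 2, schematically:
S = {"orbits sun", "has moon", "revolves on axis", "supports life"}
T_accepted = {"orbits sun", "has moon", "revolves on axis"}
T_rejected = set()

pos, neg, neutral = keynes_partition(S, T_accepted, T_rejected)
assert neutral == {"supports life"}  # the hypothetical analogy lives here
```

The point of the sketch is simply that the hypothetical analogy is drawn from the neutral analogy: it is an accepted source proposition whose target analogue is, as yet, undetermined.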

These concepts allow us to provide a characterization for an individual analogical argument that is somewhat richer than the original one:

(4)
  Source \((S)\)     Target \((T)\)
  \(P\)              \(P^*\)      (positive analogy)
  \(A\)              \({\sim}A^*\)   (negative analogy)
  \({\sim}B\)        \(B^*\)
  \(Q\)   \(\Rightarrow\)   \(Q^*\) (plausibly)

An analogical argument may thus be summarized:

It is plausible that \(Q^*\) holds in the target, because of certain known (or accepted) similarities with the source domain, despite certain known (or accepted) differences.

In order for this characterization to be meaningful, we need to say something about the meaning of ‘plausibly.’ To ensure broad applicability over analogical arguments that vary greatly in strength, we interpret plausibility rather liberally as meaning ‘with some degree of support’. In general, judgments of plausibility are made after a claim has been formulated, but prior to rigorous testing or proof. The next sub-section provides further discussion.

Note that this characterization is incomplete in a number of ways. The manner in which we list similarities and differences, the nature of the correspondences between domains: these things are left unspecified. Nor does this characterization accommodate reasoning with multiple analogies (i.e., multiple source domains), which is ubiquitous in legal reasoning and common elsewhere. To characterize the argument form more fully, however, is not possible without either taking a step towards a substantive theory of analogical reasoning or restricting attention to certain classes of analogical arguments.

Arguments by analogy are extensively discussed within argumentation theory. There is considerable debate about whether they constitute a species of deductive inference (Govier 1999; Waller 2001; Guarini 2004; Kraus 2015). Argumentation theorists also make use of tools such as speech act theory (Bermejo-Luque 2012), argumentation schemes and dialogue types (Macagno et al. 2017; Walton and Hyra 2018) to distinguish different types of analogical argument.

Arguments by analogy are also discussed in the vast literature on scientific models and model-based reasoning, following the lead of Hesse (1966). Bailer-Jones (2002) draws a helpful distinction between analogies and models. While “many models have their roots in an analogy” (2002: 113) and analogy “can act as a catalyst to aid modeling,” Bailer-Jones observes that “the aim of modeling has nothing intrinsically to do with analogy.” In brief, models are tools for prediction and explanation, whereas analogical arguments aim at establishing plausibility. An analogy is evaluated in terms of source-target similarity, while a model is evaluated on how successfully it “provides access to a phenomenon in that it interprets the available empirical data about the phenomenon.” If we broaden our perspective beyond analogical arguments , however, the connection between models and analogies is restored. Nersessian (2009), for instance, stresses the role of analog models in concept-formation and other cognitive processes.

2.3 Plausibility

To say that a hypothesis is plausible is to convey that it has epistemic support: we have some reason to believe it, even prior to testing. An assertion of plausibility within the context of an inquiry typically has pragmatic connotations as well: to say that a hypothesis is plausible suggests that we have some reason to investigate it further. For example, a mathematician working on a proof regards a conjecture as plausible if it “has some chances of success” (Polya 1954 (v. 2): 148). On both points, there is ambiguity as to whether an assertion of plausibility is categorical or a matter of degree. These observations point to the existence of two distinct conceptions of plausibility, probabilistic and modal , either of which may reflect the intended conclusion of an analogical argument.

On the probabilistic conception, plausibility is naturally identified with rational credence (rational subjective degree of belief) and is typically represented as a probability. A classic expression may be found in Mill’s analysis of the argument from analogy in A System of Logic :

There can be no doubt that every resemblance [not known to be irrelevant] affords some degree of probability, beyond what would otherwise exist, in favour of the conclusion. (Mill 1843/1930: 333)

In the terminology introduced in §2.2, Mill’s idea is that each element of the positive analogy boosts the probability of the conclusion. Contemporary ‘structure-mapping’ theories ( §3.4 ) employ a restricted version: each structural similarity between two domains contributes to the overall measure of similarity, and hence to the strength of the analogical argument.
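
Mill's idea, in its structure-mapping restriction, can be caricatured in a few lines of Python. This is a toy of my own devising (the increment and cap are invented for illustration); it merely counts similarities, which is exactly the feature criticized later in §3.

```python
# Toy illustration (mine) of Mill's idea under the structure-mapping
# restriction: each counted similarity adds a fixed increment of support.
# The increment and the cap at 1.0 are invented for illustration only.

def support(similarities, increment=0.1, prior=0.0):
    return min(1.0, prior + increment * len(similarities))

reid = ["orbits sun", "has moon(s)", "revolves on axis", "subject to gravity"]
print(support(reid))  # 0.4 on this toy scale
```

Each element of the positive analogy boosts the score, as Mill suggests; what the toy omits, of course, is any weighting by relevance.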

On the alternative modal conception, ‘it is plausible that \(p\)’ is not a matter of degree. The meaning, roughly speaking, is that there are sufficient initial grounds for taking \(p\) seriously, i.e., for further investigation (subject to feasibility and interest). Informally: \(p\) passes an initial screening procedure. There is no assertion of degree. Instead, ‘It is plausible that’ may be regarded as an epistemic modal operator that aims to capture a notion, prima facie plausibility, that is somewhat stronger than ordinary epistemic possibility. The intent is to single out \(p\) from an undifferentiated mass of ideas that remain bare epistemic possibilities. To illustrate: in 1769, Priestley’s argument ( Example 9 ), if successful, would establish the prima facie plausibility of an inverse square law for electrostatic attraction. The set of epistemic possibilities—hypotheses about electrostatic attraction compatible with knowledge of the day—was much larger. Individual analogical arguments in mathematics (such as Example 7 ) are almost invariably directed towards prima facie plausibility.

The modal conception figures importantly in some discussions of analogical reasoning. The physicist N. R. Campbell (1957) writes:

But in order that a theory may be valuable it must … display an analogy. The propositions of the hypothesis must be analogous to some known laws…. (1957: 129)

Commenting on the role of analogy in Fourier’s theory of heat conduction, Campbell writes:

Some analogy is essential to it; for it is only this analogy which distinguishes the theory from the multitude of others… which might also be proposed to explain the same laws. (1957: 142)

The interesting notion here is that of a “valuable” theory. We may not agree with Campbell that the existence of analogy is “essential” for a novel theory to be “valuable.” But consider the weaker thesis that an acceptable analogy is sufficient to establish that a theory is “valuable”, or (to qualify still further) that an acceptable analogy provides defeasible grounds for taking the theory seriously. (Possible defeaters might include internal inconsistency, inconsistency with accepted theory, or the existence of a clearly superior rival analogical argument.) The point is that Campbell, following the lead of 19th-century philosopher-scientists such as Herschel and Whewell, thinks that analogies can establish this sort of prima facie plausibility. Snyder (2006) provides a detailed discussion of the latter two thinkers and their ideas about the role of analogies in science.

In general, analogical arguments may be directed at establishing either sort of plausibility for their conclusions; they can have a probabilistic use or a modal use. Examples 7 through 9 are best interpreted as supporting modal conclusions. In those arguments, an analogy is used to show that a conjecture is worth taking seriously. To insist on putting the conclusion in probabilistic terms distracts attention from the point of the argument. The conclusion might be modeled (by a Bayesian) as having a certain probability value because it is deemed prima facie plausible, but not vice versa. Example 2 , perhaps, might be regarded as directed primarily towards a probabilistic conclusion.

There should be connections between the two conceptions. Indeed, we might think that the same analogical argument can establish both prima facie plausibility and a degree of probability for a hypothesis. But it is difficult to translate between epistemic modal concepts and probabilities (Cohen 1980; Douven and Williamson 2006; Huber 2009; Spohn 2009, 2012). We cannot simply take the probabilistic notion as the primitive one. It seems wise to keep the two conceptions of plausibility separate.

2.4 Analogical inference rules?

Schema (4) is a template that represents all analogical arguments, good and bad. It is not an inference rule. Despite the confidence with which particular analogical arguments are advanced, nobody has ever formulated an acceptable rule, or set of rules, for valid analogical inferences. There is not even a plausible candidate. This situation is in marked contrast not only with deductive reasoning, but also with elementary forms of inductive reasoning, such as induction by enumeration.

Of course, it is difficult to show that no successful analogical inference rule will ever be proposed. But consider the following candidate, formulated using the concepts of schema (4) and taking us only a short step beyond that basic characterization.

(5)
Suppose \(S\) and \(T\) are the source and target domains. Suppose \(P_1 , \ldots ,P_n\) (with \(n \gt 0\)) represents the positive analogy, \(A_1 , \ldots ,A_r\) and \({\sim}B_1 , \ldots ,{\sim}B_s\) represent the (possibly vacuous) negative analogy, and \(Q\) represents the hypothetical analogy. In the absence of reasons for thinking otherwise, infer that \(Q^*\) holds in the target domain with degree of support \(p \gt 0\), where \(p\) is an increasing function of \(n\) and a decreasing function of \(r\) and \(s\).

Rule (5) is modeled on the straight rule for enumerative induction and inspired by Mill’s view of analogical inference, as described in §2.3. We use the generic phrase ‘degree of support’ in place of probability, since other factors besides the analogical argument may influence our probability assignment for \(Q^*\).

It is pretty clear that (5) is a non-starter. The main problem is that the rule justifies too much. The only substantive requirement introduced by (5) is that there be a nonempty positive analogy. Plainly, there are analogical arguments that satisfy this condition but establish no prima facie plausibility and no measure of support for their conclusions.

Here is a simple illustration. Achinstein (1964: 328) observes that there is a formal analogy between swans and line segments if we take the relation ‘has the same color as’ to correspond to ‘is congruent with’. Both relations are reflexive, symmetric, and transitive. Yet it would be absurd to find positive support from this analogy for the idea that we are likely to find congruent lines clustered in groups of two or more, just because swans of the same color are commonly found in groups. The positive analogy is antecedently known to be irrelevant to the hypothetical analogy. In such a case, the analogical inference should be utterly rejected. Yet rule (5) would wrongly assign non-zero degree of support.

To generalize the difficulty: not every similarity increases the probability of the conclusion and not every difference decreases it. Some similarities and differences are known to be (or accepted as being) utterly irrelevant and should have no influence whatsoever on our probability judgments. To be viable, rule (5) would need to be supplemented with considerations of relevance , which depend upon the subject matter, historical context and logical details particular to each analogical argument. To search for a simple rule of analogical inference thus appears futile.
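
The difficulty can be made vivid in code. The following hypothetical Python sketch (my own, not from the entry) shows that any rule keyed only to the size of the positive analogy endorses Achinstein's swans-and-line-segments analogy, and that a relevance filter is the missing ingredient.

```python
# Hypothetical sketch (mine) of why rule (5) overgenerates: a rule keyed
# only to the size of the positive analogy cannot distinguish relevant
# from irrelevant similarities.

def naive_rule5(positive_analogy):
    # any nonempty positive analogy yields a nonzero degree of support
    return 0.1 * len(positive_analogy)

# Achinstein's case: 'has the same color as' vs. 'is congruent with'
swan_line_analogy = {"reflexive", "symmetric", "transitive"}
assert naive_rule5(swan_line_analogy) > 0  # wrongly assigns support

# a viable rule must first screen the similarities for relevance
def screened_rule(positive_analogy, relevant):
    return naive_rule5(positive_analogy & relevant)

assert screened_rule(swan_line_analogy, relevant=set()) == 0.0
```

The `relevant` set does all the real work here, and that is the point: specifying it requires exactly the subject-matter, historical, and logical considerations that no simple formal rule supplies.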

Carnap and his followers (Carnap 1980; Kuipers 1988; Niiniluoto 1988; Maher 2000; Romeijn 2006) have formulated principles of analogy for inductive logic, using Carnapian \(\lambda \gamma\) rules. Generally, this body of work relates to “analogy by similarity”, rather than the type of analogical reasoning discussed here. Romeijn (2006) maintains that there is a relation between Carnap’s concept of analogy and analogical prediction. His approach is a hybrid of Carnap-style inductive rules and a Bayesian model. Such an approach would need to be generalized to handle the kinds of arguments described in §2.1 . It remains unclear that the Carnapian approach can provide a general rule for analogical inference.

Norton (2010, and 2018—see Other Internet Resources) has argued that the project of formalizing inductive reasoning in terms of one or more simple formal schemata is doomed. His criticisms seem especially apt when applied to analogical reasoning. He writes:

If analogical reasoning is required to conform only to a simple formal schema, the restriction is too permissive. Inferences are authorized that clearly should not pass muster… The natural response has been to develop more elaborate formal templates… The familiar difficulty is that these embellished schema never seem to be quite embellished enough; there always seems to be some part of the analysis that must be handled intuitively without guidance from strict formal rules. (2018: 1)

Norton takes the point one step further, in keeping with his “material theory” of inductive inference. He argues that there is no universal logical principle that “powers” analogical inference “by asserting that things that share some properties must share others.” Rather, each analogical inference is warranted by some local constellation of facts about the target system that he terms “the fact of analogy”. These local facts are to be determined and investigated on a case by case basis.

To embrace a purely formal approach to analogy and to abjure formalization entirely are two extremes in a spectrum of strategies. There are intermediate positions. Most recent analyses (both philosophical and computational) have been directed towards elucidating criteria and procedures, rather than formal rules, for reasoning by analogy. So long as these are not intended to provide a universal ‘logic’ of analogy, there is room for such criteria even if one accepts Norton’s basic point. The next section discusses some of these criteria and procedures.

3. Criteria for evaluating analogical arguments

3.1 Commonsense guidelines

Logicians and philosophers of science have identified ‘textbook-style’ general guidelines for evaluating analogical arguments (Mill 1843/1930; Keynes 1921; Robinson 1930; Stebbing 1933; Copi and Cohen 2005; Moore and Parker 1998; Woods, Irvine, and Walton 2004). Here are some of the most important ones:

  • (G1) The more similarities (between the two domains), the stronger the analogy.
  • (G2) The more differences, the weaker the analogy.
  • (G3) The greater the extent of our ignorance about the two domains, the weaker the analogy.
  • (G4) The weaker the conclusion, the more plausible the analogy.
  • (G5) Analogies involving causal relations are more plausible than those not involving causal relations.
  • (G6) Structural analogies are stronger than those based on superficial similarities.
  • (G7) The relevance of the similarities and differences to the conclusion (i.e., to the hypothetical analogy) must be taken into account.
  • (G8) Multiple analogies supporting the same conclusion make the argument stronger.

These principles can be helpful, but are frequently too vague to provide much insight. How do we count similarities and differences in applying (G1) and (G2)? Why are the structural and causal analogies mentioned in (G5) and (G6) especially important, and which structural and causal features merit attention? More generally, in connection with the all-important (G7): how do we determine which similarities and differences are relevant to the conclusion? Furthermore, what are we to say about similarities and differences that have been omitted from an analogical argument but might still be relevant?

An additional problem is that the criteria can pull in different directions. To illustrate, consider Reid’s argument for life on other planets ( Example 2 ). Stebbing (1933) finds Reid’s argument “suggestive” and “not unplausible” because the conclusion is weak (G4), while Mill (1843/1930) appears to reject the argument on account of our vast ignorance of properties that might be relevant (G3).

There is a further problem that relates to the distinction just made (in §2.3 ) between two kinds of plausibility. Each of the above criteria apart from (G7) is expressed in terms of the strength of the argument, i.e., the degree of support for the conclusion. The criteria thus appear to presuppose the probabilistic interpretation of plausibility. The problem is that a great many analogical arguments aim to establish prima facie plausibility rather than any degree of probability. Most of the guidelines are not directly applicable to such arguments.

3.2 Aristotle’s theory

Aristotle sets the stage for all later theories of analogical reasoning. In his theoretical reflections on analogy and in his most judicious examples, we find a sober account that lays the foundation both for the commonsense guidelines noted above and for more sophisticated analyses.

Although Aristotle employs the term analogy ( analogia ) and discusses analogical predication , he never talks about analogical reasoning or analogical arguments per se . He does, however, identify two argument forms, the argument from example ( paradeigma ) and the argument from likeness ( homoiotes ), both closely related to what we would now recognize as an analogical argument.

The argument from example ( paradeigma ) is described in the Rhetoric and the Prior Analytics :

Enthymemes based upon example are those which proceed from one or more similar cases, arrive at a general proposition, and then argue deductively to a particular inference. ( Rhetoric 1402b15)

Let \(A\) be evil, \(B\) making war against neighbours, \(C\) Athenians against Thebans, \(D\) Thebans against Phocians. If then we wish to prove that to fight with the Thebans is an evil, we must assume that to fight against neighbours is an evil. Conviction of this is obtained from similar cases, e.g., that the war against the Phocians was an evil to the Thebans. Since then to fight against neighbours is an evil, and to fight against the Thebans is to fight against neighbours, it is clear that to fight against the Thebans is an evil. ( Pr. An. 69a1)

Aristotle notes two differences between this argument form and induction (69a15ff.): it “does not draw its proof from all the particular cases” (i.e., it is not a “complete” induction), and it requires an additional (deductively valid) syllogism as the final step. The argument from example thus amounts to single-case induction followed by deductive inference. It has the following structure (using \(\supset\) for the conditional):

\[\begin{array}{ccc}
P(S) \wedge Q(S) & \dashrightarrow & \forall x(P(x) \supset Q(x)) \\
 & & \downarrow \\
 & & P(T), \quad P(T) \supset Q(T) \\
 & & \downarrow \\
 & & Q(T)
\end{array}\]

(The dashed arrow marks the inductive step from the single source case; the solid arrows mark deductively valid steps.)

In the terminology of §2.2, \(P\) is the positive analogy and \(Q\) is the hypothetical analogy. In Aristotle’s example, \(S\) (the source) is war between Phocians and Thebans, \(T\) (the target) is war between Athenians and Thebans, \(P\) is war between neighbours, and \(Q\) is evil. The first inference (dashed arrow) is inductive; the second and third (solid arrows) are deductively valid.

The paradeigma has an interesting feature: it is amenable to an alternative analysis as a purely deductive argument form. Let us concentrate on Aristotle’s assertion, “we must assume that to fight against neighbours is an evil,” represented as \(\forall x(P(x) \supset Q(x))\). Instead of regarding this intermediate step as something reached by induction from a single case, we might instead regard it as a hidden presupposition. This transforms the paradeigma into a syllogistic argument with a missing or enthymematic premise, and our attention shifts to possible means for establishing that premise (with single-case induction as one such means). Construed in this way, Aristotle’s paradeigma argument foreshadows deductive analyses of analogical reasoning (see §4.1 ).
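
The two-step structure of the paradeigma can be encoded schematically. In this hypothetical Python sketch (my own encoding of Aristotle's example, with truth values standing in for propositions), the inductive step collapses to reading off the single source case.

```python
# Schematic encoding (mine) of the paradeigma: single-case induction to the
# universal, then deduction to the target. Truth values stand in for the
# propositions of Aristotle's example (P: war against neighbours, Q: evil).

def paradeigma(source_case, target_has_P):
    # inductive step (dashed arrow): from P(S) & Q(S), accept (x)(P(x) > Q(x))
    generalization = source_case["P"] and source_case["Q"]
    # deductive steps (solid arrows): P(T) and P(T) > Q(T) yield Q(T)
    return generalization and target_has_P

# source: Thebans vs. Phocians; target: Athenians vs. Thebans, where P(T) holds
assert paradeigma({"P": True, "Q": True}, target_has_P=True)  # conclude Q(T)
```

The encoding makes the form's fragility plain: everything turns on accepting the universal from a single case, which is why the alternative enthymematic reading discussed next is attractive.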

The argument from likeness ( homoiotes ) seems to be closer than the paradeigma to our contemporary understanding of analogical arguments. This argument form receives considerable attention in Topics I, 17 and 18 and again in VIII, 1. The most important passage is the following.

Try to secure admissions by means of likeness; for such admissions are plausible, and the universal involved is less patent; e.g. that as knowledge and ignorance of contraries is the same, so too perception of contraries is the same; or vice versa, that since the perception is the same, so is the knowledge also. This argument resembles induction, but is not the same thing; for in induction it is the universal whose admission is secured from the particulars, whereas in arguments from likeness, what is secured is not the universal under which all the like cases fall. ( Topics 156b10–17)

This passage occurs in a work that offers advice for framing dialectical arguments when confronting a somewhat skeptical interlocutor. In such situations, it is best not to make one’s argument depend upon securing agreement about any universal proposition. The argument from likeness is thus clearly distinct from the paradeigma , where the universal proposition plays an essential role as an intermediate step in the argument. The argument from likeness, though logically less straightforward than the paradeigma , is exactly the sort of analogical reasoning we want when we are unsure about underlying generalizations.

In Topics I 17, Aristotle states that any shared attribute contributes some degree of likeness. It is natural to ask when the degree of likeness between two things is sufficiently great to warrant inferring a further likeness. In other words, when does the argument from likeness succeed? Aristotle does not answer explicitly, but a clue is provided by the way he justifies particular arguments from likeness. As Lloyd (1966) has observed, Aristotle typically justifies such arguments by articulating a (sometimes vague) causal principle which governs the two phenomena being compared. For example, Aristotle explains the saltiness of the sea, by analogy with the saltiness of sweat, as a kind of residual earthy stuff exuded in natural processes such as heating. The common principle is this:

Everything that grows and is naturally generated always leaves a residue, like that of things burnt, consisting in this sort of earth. ( Mete 358a17)

From this method of justification, we might conjecture that Aristotle believes that the important similarities are those that enter into such general causal principles.

Summarizing, Aristotle’s theory provides us with four important and influential criteria for the evaluation of analogical arguments:

  • The strength of an analogy depends upon the number of similarities.
  • Similarity reduces to identical properties and relations.
  • Good analogies derive from underlying common causes or general laws.
  • A good analogical argument need not presuppose acquaintance with the underlying universal (generalization).

These four principles form the core of a common-sense model for evaluating analogical arguments (which is not to say that they are correct; indeed, the first three will shortly be called into question). The first, as we have seen, appears regularly in textbook discussions of analogy. The second is largely taken for granted, with important exceptions in computational models of analogy ( §3.4 ). Versions of the third are found in most sophisticated theories. The final point, which distinguishes the argument from likeness and the argument from example, is endorsed in many discussions of analogy (e.g., Quine and Ullian 1970).

A slight generalization of Aristotle’s first principle helps to prepare the way for discussion of later developments. As that principle suggests, Aristotle, in common with just about everyone else who has written about analogical reasoning, organizes his analysis of the argument form around overall similarity. In the terminology of §2.2, horizontal relationships drive the reasoning: the greater the overall similarity of the two domains, the stronger the analogical argument. Hume makes the same point, though stated negatively, in his Dialogues Concerning Natural Religion :

Wherever you depart, in the least, from the similarity of the cases, you diminish proportionably the evidence; and may at last bring it to a very weak analogy, which is confessedly liable to error and uncertainty. (1779/1947: 144)

Most theories of analogy agree with Aristotle and Hume on this general point. Disagreement relates to the appropriate way of measuring overall similarity. Some theories assign greatest weight to material analogy, which refers to shared, and typically observable, features. Others give prominence to formal analogy, emphasizing high-level structural correspondence. The next two sub-sections discuss representative accounts that illustrate these two approaches.

Hesse (1966) offers a sharpened version of Aristotle’s theory, specifically focused on analogical arguments in the sciences. She formulates three requirements that an analogical argument must satisfy in order to be acceptable:

  • Requirement of material analogy . The horizontal relations must include similarities between observable properties.
  • Causal condition . The vertical relations must be causal relations “in some acceptable scientific sense” (1966: 87).
  • No-essential-difference condition . The essential properties and causal relations of the source domain must not have been shown to be part of the negative analogy.

3.3.1 Requirement of material analogy

For Hesse, an acceptable analogical argument must include “observable similarities” between domains, which she refers to as material analogy . Material analogy is contrasted with formal analogy . Two domains are formally analogous if both are “interpretations of the same formal theory” (1966: 68). Nomic isomorphism (Hempel 1965) is a special case in which the physical laws governing two systems have identical mathematical form. Heat and fluid flow exhibit nomic isomorphism. A second example is the analogy between the flow of electric current in a wire and fluid in a pipe. Ohm’s law

\[\Delta V = IR\]

states that voltage difference along a wire equals current times a constant resistance. This has the same mathematical form as Poiseuille’s law (for ideal fluids):

\[\Delta p = \dot{V} k\]

which states that the pressure difference along a pipe equals the volumetric flow rate times a constant. Both of these systems can be represented by a common equation. While formal analogy is linked to common mathematical structure, it should not be limited to nomic isomorphism (Bartha 2010: 209). The idea of formal analogy generalizes to cases where there is a common mathematical structure between models for two systems. Bartha offers an even more liberal definition (2010: 195): “Two features are formally similar if they occupy corresponding positions in formally analogous theories. For example, pitch in the theory of sound corresponds to color in the theory of light.”

By contrast, material analogy consists of what Hesse calls “observable” or “pre-theoretic” similarities. These are horizontal relationships of similarity between properties of objects in the source and the target. Similarities between echoes (sound) and reflection (light), for instance, were recognized long before we had any detailed theories about these phenomena. Hesse (1966, 1988) regards such similarities as metaphorical relationships between the two domains and labels them “pre-theoretic” because they draw on personal and cultural experience. We have both material and formal analogies between sound and light, and it is significant for Hesse that the former are independent of the latter.

There are good reasons not to accept Hesse’s requirement of material analogy, construed in this narrow way. First, it is apparent that formal analogies are the starting point in many important inferences. That is certainly the case in mathematics, a field in which material analogy, in Hesse’s sense, plays no role at all. Analogical arguments based on formal analogy have also been extremely influential in physics (Steiner 1989, 1998).

In Norton’s broad sense, however, ‘material analogy’ simply refers to similarities rooted in factual knowledge of the source and target domains. With reference to this broader meaning, Hesse proposes two additional material criteria.

3.3.2 Causal condition

Hesse requires that the hypothetical analogy, the feature transferred to the target domain, be causally related to the positive analogy. In her words, the essential requirement for a good argument from analogy is “a tendency to co-occurrence”, i.e., a causal relationship. She states the requirement as follows:

The vertical relations in the model [source] are causal relations in some acceptable scientific sense, where there are no compelling a priori reasons for denying that causal relations of the same kind may hold between terms of the explanandum [target]. (1966: 87)

The causal condition rules out analogical arguments where there is no causal knowledge of the source domain. It derives support from the observation that many analogies do appear to involve a transfer of causal knowledge.

The causal condition is on the right track, but is arguably too restrictive. For example, it rules out analogical arguments in mathematics. Even if we limit attention to the empirical sciences, persuasive analogical arguments may be founded upon strong statistical correlation in the absence of any known causal connection. Consider (Example 11) Benjamin Franklin’s prediction, in 1749, that pointed metal rods would attract lightning, by analogy with the way they attracted the “electrical fluid” in the laboratory:

Electrical fluid agrees with lightning in these particulars: 1. Giving light. 2. Colour of the light. 3. Crooked direction. 4. Swift motion. 5. Being conducted by metals. 6. Crack or noise in exploding. 7. Subsisting in water or ice. 8. Rending bodies it passes through. 9. Destroying animals. 10. Melting metals. 11. Firing inflammable substances. 12. Sulphureous smell.—The electrical fluid is attracted by points.—We do not know whether this property is in lightning.—But since they agree in all the particulars wherein we can already compare them, is it not probable they agree likewise in this? Let the experiment be made. ( Benjamin Franklin’s Experiments , 334)

Franklin’s hypothesis was based on a long list of properties common to the target (lightning) and source (electrical fluid in the laboratory). There was no known causal connection between the twelve “particulars” and the thirteenth property, but there was a strong correlation. Analogical arguments may be plausible even where there are no known causal relations.

3.3.3 No-essential-difference condition

Hesse’s final requirement is that the “essential properties and causal relations of the [source] have not been shown to be part of the negative analogy” (1966: 91). Hesse does not provide a definition of “essential,” but suggests that a property or relation is essential if it is “causally closely related to the known positive analogy.” For instance, an analogy with fluid flow was extremely influential in developing the theory of heat conduction. Once it was discovered that heat was not conserved, however, the analogy became unacceptable (according to Hesse) because conservation was so central to the theory of fluid flow.

This requirement, though once again on the right track, seems too restrictive. It can lead to the rejection of a good analogical argument. Consider the analogy between a two-dimensional rectangle and a three-dimensional box ( Example 7 ). Broadening Hesse’s notion, it seems that there are many ‘essential’ differences between rectangles and boxes. This does not mean that we should reject every analogy between rectangles and boxes out of hand. The problem derives from the fact that Hesse’s condition is applied to the analogy relation independently of the use to which that relation is put. What counts as essential should vary with the analogical argument. Absent an inferential context, it is impossible to evaluate the importance or ‘essentiality’ of similarities and differences.

Despite these weaknesses, Hesse’s ‘material’ criteria constitute a significant advance in our understanding of analogical reasoning. The causal condition and the no-essential-difference condition incorporate local factors, as urged by Norton, into the assessment of analogical arguments. These conditions, singly or taken together, imply that an analogical argument can fail to generate any support for its conclusion, even when there is a non-empty positive analogy. Hesse offers no theory about the ‘degree’ of analogical support. That makes her account one of the few that is oriented towards the modal, rather than probabilistic, use of analogical arguments ( §2.3 ).

Many people take the concept of model-theoretic isomorphism to set the standard for thinking about similarity and its role in analogical reasoning. They propose formal criteria for evaluating analogies, based on overall structural or syntactical similarity. Let us refer to theories oriented around such criteria as structuralist .

A number of leading computational models of analogy are structuralist. They are implemented in computer programs that begin with (or sometimes build) representations of the source and target domains, and then construct possible analogy mappings. Analogical inferences emerge as a consequence of identifying the ‘best mapping.’ In terms of criteria for analogical reasoning, there are two main ideas. First, the goodness of an analogical argument is based on the goodness of the associated analogy mapping . Second, the goodness of the analogy mapping is given by a metric that indicates how closely it approximates isomorphism.

The most influential structuralist theory has been Gentner’s structure-mapping theory, implemented in a program called the structure-mapping engine (SME). In its original form (Gentner 1983), the theory assesses analogies on purely structural grounds. Gentner asserts:

Analogies are about relations, rather than simple features. No matter what kind of knowledge (causal models, plans, stories, etc.), it is the structural properties (i.e., the interrelationships between the facts) that determine the content of an analogy. (Falkenhainer, Forbus, and Gentner 1989/90: 3)

In order to clarify this thesis, Gentner introduces a distinction between properties, or monadic predicates, and relations, which have multiple arguments. She further distinguishes among different orders of relations and functions, defined inductively (in terms of the order of the relata or arguments). The best mapping is determined by systematicity: the extent to which it places higher-order relations, and items that are nested in higher-order relations, in correspondence. Gentner’s Systematicity Principle states:

A predicate that belongs to a mappable system of mutually interconnecting relationships is more likely to be imported into the target than is an isolated predicate. (1983: 163)

A systematic analogy (one that places high-order relations and their components in correspondence) is better than a less systematic analogy. Hence, an analogical inference has a degree of plausibility that increases monotonically with the degree of systematicity of the associated analogy mapping. Gentner’s fundamental criterion for evaluating candidate analogies (and analogical inferences) thus depends solely upon the syntax of the given representations and not at all upon their content.
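To make the systematicity metric concrete, here is a toy scoring function (our illustration, not Gentner’s SME; the representation scheme is invented for the example). Objects sit at order 0, a relation’s order is one more than that of its deepest argument, and a mapping is scored by summing the orders of its matched expressions, so that higher-order relational matches dominate:

```python
# Toy illustration of scoring by systematicity (not the actual SME algorithm):
# a matched predicate counts for more when it is of higher order, i.e. when
# its arguments are themselves relations rather than simple objects.

def order(expr):
    """Order of an expression: objects are order 0; a relation's order is
    1 + the maximum order of its arguments."""
    if isinstance(expr, str):          # an object, e.g. "sun"
        return 0
    head, *args = expr                 # a relation, e.g. ("attracts", "sun", "planet")
    return 1 + max(order(a) for a in args)

def systematicity(matched):
    """Score a set of matched expressions: higher-order matches weigh more."""
    return sum(order(e) for e in matched)

# Solar-system/atom analogy: matching the higher-order CAUSE relation
# raises the score more than any further first-order match could.
first_order = [("attracts", "sun", "planet"), ("revolves", "planet", "sun")]
with_cause = first_order + [("cause", ("attracts", "sun", "planet"),
                                      ("revolves", "planet", "sun"))]
print(systematicity(first_order))   # 2
print(systematicity(with_cause))    # 4
```

On this crude measure, adding the second-order cause relation contributes 2 to the score where any additional first-order match would contribute only 1, which is the intended effect of the Systematicity Principle.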

Later versions of the structure-mapping theory incorporate refinements (Forbus, Ferguson, and Gentner 1994; Forbus 2001; Forbus et al. 2007; Forbus et al. 2008; Forbus et al. 2017). For example, the earliest version of the theory is vulnerable to worries about hand-coded representations of source and target domains. Gentner and her colleagues have attempted to solve this problem in later work that generates LISP representations from natural language text (see Turney 2008 for a different approach).

The most important challenges for the structure-mapping approach relate to the Systematicity Principle itself. Does the value of an analogy derive entirely, or even chiefly, from systematicity? There appear to be two main difficulties with this view. First: it is not always appropriate to give priority to systematic, high-level relational matches. Material criteria, and notably what Gentner refers to as “superficial feature matches,” can be extremely important in some types of analogical reasoning, such as ethnographic analogies which are based, to a considerable degree, on surface resemblances between artifacts. Second and more significantly: systematicity seems to be at best a fallible marker for good analogies rather than the essence of good analogical reasoning.

Greater systematicity is neither necessary nor sufficient for a more plausible analogical inference. It is obvious that increased systematicity is not sufficient for increased plausibility. An implausible analogy can be represented in a form that exhibits a high degree of structural parallelism. High-order relations can come cheap, as we saw with Achinstein’s “swan” example ( §2.4 ).

More pointedly, increased systematicity is not necessary for greater plausibility. Indeed, in causal analogies, it may even weaken the inference. That is because systematicity takes no account of the type of causal relevance, positive or negative. McKay (1993) notes that microbes have been found in frozen lakes in Antarctica; by analogy, simple life forms might exist on Mars. Freezing temperatures are preventive or counteracting causes; they are negatively relevant to the existence of life. The climate of Mars was probably more favorable to life 3.5 billion years ago than it is today, because temperatures were warmer. Yet the analogy between Antarctica and present-day Mars is more systematic than the analogy between Antarctica and ancient Mars. According to the Systematicity Principle, the analogy with Antarctica provides stronger support for life on Mars today than it does for life on ancient Mars.

The point of this example is that increased systematicity does not always increase plausibility, and reduced systematicity does not always decrease it (see Lee and Holyoak 2008). The more general point is that systematicity can be misleading, unless we take into account the nature of the relationships between various factors and the hypothetical analogy. Systematicity does not magically produce or explain the plausibility of an analogical argument. When we reason by analogy, we must determine which features of both domains are relevant and how they relate to the analogical conclusion. There is no short-cut via syntax.

Schlimm (2008) offers an entirely different critique of the structure-mapping theory from the perspective of analogical reasoning in mathematics—a domain where one might expect a formal approach such as structure mapping to perform well. Schlimm introduces a simple distinction: a domain is object-rich if the number of objects is greater than the number of relations (and properties), and relation-rich otherwise. Proponents of the structure-mapping theory typically focus on relation-rich examples (such as the analogy between the solar system and the atom). By contrast, analogies in mathematics typically involve domains with an enormous number of objects (like the real numbers), but relatively few relations and functions (addition, multiplication, less-than).

Schlimm provides an example of an analogical reasoning problem in group theory that involves a single relation in each domain. In this case, attaining maximal systematicity is trivial. The difficulty is that, compatible with maximal systematicity, there are different ways in which the objects might be placed in correspondence. The structure-mapping theory appears to yield the wrong inference. We might put the general point as follows: in object-rich domains, systematicity ceases to be a reliable guide to plausible analogical inference.

3.5.1 Connectionist models

During the past thirty-five years, cognitive scientists have conducted extensive research on analogy. Gentner’s SME is just one of many computational theories, implemented in programs that construct and use analogies. Three helpful anthologies that span this period are Helman 1988; Gentner, Holyoak, and Kokinov 2001; and Kokinov, Holyoak, and Gentner 2009.

One predominant objective of this research has been to model the cognitive processes involved in using analogies. Early models tended to be oriented towards “understanding the basic constraints that govern human analogical thinking” (Hummel and Holyoak 1997: 458). Recent connectionist models have been directed towards uncovering the psychological mechanisms that come into play when we use analogies: retrieval of a relevant source domain, analogical mapping across domains, and transfer of information and learning of new categories or schemas.

In some cases, such as the structure-mapping theory (§3.4), this research overlaps directly with the normative questions that are the focus of this entry; indeed, Gentner’s Systematicity Principle may be interpreted normatively. In other cases, we might view the projects as displacing those traditional normative questions with up-to-date, computational forms of naturalized epistemology. Two approaches are singled out here because both raise important challenges to the very idea of finding sharp answers to those questions, and both suggest that connectionist models offer a more fruitful approach to understanding analogical reasoning.

The first is the constraint-satisfaction model (also known as the multiconstraint theory ), developed by Holyoak and Thagard (1989, 1995). Like Gentner, Holyoak and Thagard regard the heart of analogical reasoning as analogy mapping , and they stress the importance of systematicity, which they refer to as a structural constraint. Unlike Gentner, they acknowledge two additional types of constraints. Pragmatic constraints take into account the goals and purposes of the agent, recognizing that “the purpose will guide selection” of relevant similarities. Semantic constraints represent estimates of the degree to which people regard source and target items as being alike, rather like Hesse’s “pre-theoretic” similarities.

The novelty of the multiconstraint theory is that these structural , semantic and pragmatic constraints are implemented not as rigid rules, but rather as ‘pressures’ supporting or inhibiting potential pairwise correspondences. The theory is implemented in a connectionist program called ACME (Analogical Constraint Mapping Engine), which assigns an initial activation value to each possible pairing between elements in the source and target domains (based on semantic and pragmatic constraints), and then runs through cycles that update the activation values based on overall coherence (structural constraints). The best global analogy mapping emerges under the pressure of these constraints. Subsequent connectionist models, such as Hummel and Holyoak’s LISA program (1997, 2003), have made significant advances and hold promise for offering a more complete theory of analogical reasoning.
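The cycle of updates can be sketched as follows (a deliberately minimal, hypothetical sketch in the spirit of ACME, not Holyoak and Thagard’s actual program; the priors and update constants are invented). Candidate pairings start from semantic priors; pairings that can coexist in a one-to-one mapping support each other, while rivals that share an element inhibit each other:

```python
# Minimal sketch of constraint-satisfaction mapping in the spirit of ACME
# (much simplified and invented for illustration): candidate pairings get
# activations that are repeatedly updated; mutually consistent pairings
# excite each other, rival pairings inhibit each other.
import itertools

source = ["sun", "planet"]
target = ["nucleus", "electron"]

# Semantic priors (assumed, illustrative): how alike each pairing seems.
prior = {("sun", "nucleus"): 0.6, ("sun", "electron"): 0.2,
         ("planet", "nucleus"): 0.2, ("planet", "electron"): 0.6}

pairs = list(itertools.product(source, target))
act = {p: prior[p] for p in pairs}

for _ in range(50):                     # settling cycles
    new = {}
    for p in pairs:
        net = 0.0
        for q in pairs:
            if q == p:
                continue
            # Rival pairings (sharing an element) inhibit; others support.
            rival = p[0] == q[0] or p[1] == q[1]
            net += (-1.0 if rival else 0.5) * act[q]
        new[p] = min(1.0, max(0.0, act[p] + 0.1 * net))
    act = new

best = {s: max(target, key=lambda t: act[(s, t)]) for s in source}
print(best)  # {'sun': 'nucleus', 'planet': 'electron'}
```

Under these pressures the coherent global mapping (sun to nucleus, planet to electron) wins out, even though the rival pairings start with non-zero activation.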

The second example is Hofstadter and Mitchell’s Copycat program (Hofstadter 1995; Mitchell 1993). The program is “designed to discover insightful analogies, and to do so in a psychologically realistic way” (Hofstadter 1995: 205). Copycat operates in the domain of letter-strings. The program handles the following type of problem:

Suppose the letter-string abc were changed to abd; how would you change the letter-string ijk in “the same way”?

Most people would answer ijl, since it is natural to think that abc was changed to abd by the “transformation rule”: replace the rightmost letter with its successor. Alternative answers are possible, but do not agree with most people’s sense of what counts as the natural analogy.
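The rule most people tacitly infer can be written down directly (a sketch of the rule itself, with helper names of our own choosing, not of Copycat’s perceptual mechanism):

```python
# "Replace the rightmost letter with its alphabetic successor" -- the rule
# most people tacitly apply in going from abc to abd.

def successor(ch):
    """Next letter in the alphabet (no wrap-around handling)."""
    return chr(ord(ch) + 1)

def apply_rule(s):
    """Replace the rightmost letter of s with its successor."""
    return s[:-1] + successor(s[-1])

print(apply_rule("abc"))  # abd
print(apply_rule("ijk"))  # ijl
```

Copycat’s interest lies precisely in cases where such a rigid rule breaks down and concepts must “slip”, which is what its fluid, probabilistically linked concepts are designed to model.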

Hofstadter and Mitchell believe that analogy-making is in large part about the perception of novel patterns, and that such perception requires concepts with “fluid” boundaries. Genuine analogy-making involves “slippage” of concepts. The Copycat program combines a set of core concepts pertaining to letter-sequences ( successor , leftmost and so forth) with probabilistic “halos” that link distinct concepts dynamically. Orderly structures emerge out of random low-level processes and the program produces plausible solutions. Copycat thus shows that analogy-making can be modeled as a process akin to perception, even if the program employs mechanisms distinct from those in human perception.

The multiconstraint theory and Copycat share the idea that analogical cognition involves cognitive processes that operate below the level of abstract reasoning. Both computational models—to the extent that they are capable of performing successful analogical reasoning—challenge the idea that a successful model of analogical reasoning must take the form of a set of quasi-logical criteria. Efforts to develop a quasi-logical theory of analogical reasoning, it might be argued, have failed. In place of faulty inference schemes such as those described earlier ( §2.2 , §2.4 ), computational models substitute procedures that can be judged on their performance rather than on traditional philosophical standards.

In response to this argument, we should recognize the value of the connectionist models while acknowledging that we still need a theory that offers normative principles for evaluating analogical arguments. In the first place, even if the construction and recognition of analogies are largely a matter of perception, this does not eliminate the need for subsequent critical evaluation of analogical inferences. Second and more importantly, we need to look not just at the construction of analogy mappings but at the ways in which individual analogical arguments are debated in fields such as mathematics, physics, philosophy and the law. These high-level debates require reasoning that bears little resemblance to the computational processes of ACME or Copycat. (Ashley’s HYPO (Ashley 1990) is one example of a non-connectionist program that focuses on this aspect of analogical reasoning.) There is, accordingly, room for both computational and traditional philosophical models of analogical reasoning.

3.5.2 Articulation model

Most prominent theories of analogy, philosophical and computational, are based on overall similarity between source and target domains—defined in terms of some favoured subset of Hesse’s horizontal relations (see §2.2 ). Aristotle and Mill, whose approach is echoed in textbook discussions, suggest counting similarities. Hesse’s theory ( §3.3 ) favours “pre-theoretic” correspondences. The structure-mapping theory and its successors ( §3.4 ) look to systematicity, i.e., to correspondences involving complex, high-level networks of relations. In each of these approaches, the problem is twofold: overall similarity is not a reliable guide to plausibility, and it fails to explain the plausibility of any analogical argument.

Bartha’s articulation model (2010) proposes a different approach, beginning not with horizontal relations, but rather with a classification of analogical arguments on the basis of the vertical relations within each domain. The fundamental idea is that a good analogical argument must satisfy two conditions:

Prior Association . There must be a clear connection, in the source domain, between the known similarities (the positive analogy) and the further similarity that is projected to hold in the target domain (the hypothetical analogy). This relationship determines which features of the source are critical to the analogical inference.

Potential for Generalization . There must be reason to think that the same kind of connection could obtain in the target domain. More pointedly: there must be no critical disanalogy between the domains.

The first order of business is to make the prior association explicit. The standards of explicitness vary depending on the nature of this association (causal relation, mathematical proof, functional relationship, and so forth). The two general principles are fleshed out via a set of subordinate models that allow us to identify critical features and hence critical disanalogies.

To see how this works, consider Example 7 (Rectangles and boxes). In this analogical argument, the source domain is two-dimensional geometry: we know that of all rectangles with a fixed perimeter, the square has maximum area. The target domain is three-dimensional geometry: by analogy, we conjecture that of all boxes with a fixed surface area, the cube has maximum volume. This argument should be evaluated not by counting similarities, looking to pre-theoretic resemblances between rectangles and boxes, or constructing connectionist representations of the domains and computing a systematicity score for possible mappings. Instead, we should begin with a precise articulation of the prior association in the source domain, which amounts to a specific proof for the result about rectangles. We should then identify, relative to that proof, the critical features of the source domain: namely, the concepts and assumptions used in the proof. Finally, we should assess the potential for generalization: whether, in the three-dimensional setting, those critical features are known to lack analogues in the target domain. The articulation model is meant to reflect the conversations that can and do take place between an advocate and a critic of an analogical argument.
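The source-domain fact and the conjectured target-domain analogue can both be checked numerically (an illustrative sketch; the helper names and grid search are ours, and a numerical check is of course no substitute for the articulated proof the model asks for):

```python
# Numerical check of the source-domain fact (square maximizes area at fixed
# perimeter) and the conjectured target-domain analogue (cube maximizes
# volume at fixed surface area). Illustrative grid search only.

def rect_area(w, perimeter=4.0):
    h = perimeter / 2 - w              # fixed perimeter: 2(w + h) = perimeter
    return w * h

def box_volume(w, h, surface=6.0):
    # Fixed surface area: 2(wh + wd + hd) = surface; solve for depth d.
    d = (surface / 2 - w * h) / (w + h)
    return w * h * d

# Source: among rectangles of perimeter 4, the unit square maximizes area.
best_w = max((w / 100 for w in range(1, 200)), key=rect_area)
print(round(best_w, 2))                # 1.0 (the square)

# Target: among boxes of surface area 6, the unit cube maximizes volume.
candidates = [(w / 10, h / 10) for w in range(1, 25) for h in range(1, 25)
              if (6 / 2 - (w / 10) * (h / 10)) / (w / 10 + h / 10) > 0]
best = max(candidates, key=lambda p: box_volume(*p))
print(best)                            # (1.0, 1.0) (the cube)
```

The articulation model then directs us past this numerical evidence to the proof of the two-dimensional result, whose concepts and assumptions are the critical features on which the analogical inference stands or falls.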

3.6.1 Norton’s material theory of analogy

As noted in §2.4 , Norton rejects analogical inference rules. But even if we agree with Norton on this point, we might still be interested in having an account that gives us guidelines for evaluating analogical arguments. How does Norton’s approach fare on this score?

According to Norton, each analogical argument is warranted by local facts that must be investigated and justified empirically. First, there is “the fact of the analogy”: in practice, a low-level uniformity that embraces both the source and target systems. Second, there are additional factual properties of the target system which, when taken together with the uniformity, warrant the analogical inference. Consider Galileo’s famous inference ( Example 12 ) that there are mountains on the moon (Galileo 1610). Through his newly invented telescope, Galileo observed points of light on the moon ahead of the advancing edge of sunlight. Noting that the same thing happens on earth when sunlight strikes the mountains, he concluded that there must be mountains on the moon and even provided a reasonable estimate of their height. In this example, Norton tells us, the fact of the analogy is that shadows and other optical phenomena are generated in the same way on the earth and on the moon; the additional fact about the target is the existence of points of light ahead of the advancing edge of sunlight on the moon.

What are the implications of Norton’s material theory when it comes to evaluating analogical arguments? The fact of the analogy is a local uniformity that powers the inference. Norton’s theory works well when such a uniformity is patent or naturally inferred. It doesn’t work well when the uniformity is itself the target (rather than the driver) of the inference. That happens with explanatory analogies such as Example 5 (the Acoustical Analogy), and mathematical analogies such as Example 7 (Rectangles and Boxes). Similarly, the theory doesn’t work well when the underlying uniformity is unclear, as in Example 2 (Life on other Planets), Example 4 (Clay Pots), and many other cases. In short, if Norton’s theory is accepted, then for most analogical arguments there are no useful evaluation criteria.

3.6.2 Field-specific criteria

For those who sympathize with Norton’s skepticism about universal inductive schemes and theories of analogical reasoning, yet recognize that his approach may be too local, an appealing strategy is to move up one level. We can aim for field-specific “working logics” (Toulmin 1958; Wylie and Chapman 2016; Reiss 2015). This approach has been adopted by philosophers of archaeology, evolutionary biology and other historical sciences (Wylie and Chapman 2016; Currie 2013; Currie 2016; Currie 2018). In place of schemas, we find ‘toolkits’, i.e., lists of criteria for evaluating analogical reasoning.

For example, Currie (2016) explores in detail the use of ethnographic analogy ( Example 13 ) between shamanistic motifs used by the contemporary San people and similar motifs in ancient rock art, found both among ancestors of the San (direct historical analogy) and in European rock art (indirect historical analogy). Analogical arguments support the hypothesis that in each of these cultures, rock art symbolizes hallucinogenic experiences. Currie examines criteria that focus on assumptions about stability of cultural traits and environment-culture relationships. Currie (2016, 2018) and Wylie (Wylie and Chapman 2016) also stress the importance of robustness reasoning that combines analogical arguments of moderate strength with other forms of evidence to yield strong conclusions.

Practice-based approaches can thus yield specific guidelines unlikely to be matched by any general theory of analogical reasoning. One caveat is worth mentioning. Field-specific criteria for ethnographic analogy are elicited against a background of decades of methodological controversy (Wylie and Chapman 2016). Critics and defenders of ethnographic analogy have appealed to general models of scientific method (e.g., hypothetico-deductive method or Bayesian confirmation). To advance the methodological debate, practice-based approaches must either make connections to these general models or explain why the lack of any such connection is unproblematic.

3.6.3 Formal analogies in physics

Close attention to analogical arguments in practice can also provide valuable challenges to general ideas about analogical inference. In an interesting discussion, Steiner (1989, 1998) suggests that many of the analogies that played a major role in early twentieth-century physics count as “Pythagorean.” The term is meant to connote mathematical mysticism: a “Pythagorean” analogy is a purely formal analogy, one founded on mathematical similarities that have no known physical basis at the time it is proposed. One example is Schrödinger’s use of analogy ( Example 14 ) to “guess” the form of the relativistic wave equation. In Steiner’s view, Schrödinger’s reasoning relies upon manipulations and substitutions based on purely mathematical analogies. Steiner argues that the success, and even the plausibility, of such analogies “evokes, or should evoke, puzzlement” (1989: 454). Both Hesse (1966) and Bartha (2010) reject the idea that a purely formal analogy, with no physical significance, can support a plausible analogical inference in physics. Thus, Steiner’s arguments provide a serious challenge.

Bartha (2010) suggests a response: we can decompose Steiner’s examples into two or more steps, and then establish that at least one step does, in fact, have a physical basis. Fraser (forthcoming), however, offers a counterexample that supports Steiner’s position. Complex analogies between classical statistical mechanics (CSM) and quantum field theory (QFT) have played a crucial role in the development and application of renormalization group (RG) methods in both theories ( Example 15 ). Fraser notes substantial physical disanalogies between CSM and QFT, and concludes that the reasoning is based entirely on formal analogies.

4. Philosophical foundations for analogical reasoning

What philosophical basis can be provided for reasoning by analogy? What justification can be given for the claim that analogical arguments deliver plausible conclusions? There have been several ideas for answering this question. One natural strategy assimilates analogical reasoning to some other well-understood argument pattern, a form of deductive or inductive reasoning ( §4.1 , §4.2 ). A few philosophers have explored the possibility of a priori justification ( §4.3 ). A pragmatic justification may be available for practical applications of analogy, notably in legal reasoning ( §4.4 ).

Any attempt to provide a general justification for analogical reasoning faces a basic dilemma. The demands of generality require a high-level formulation of the problem and hence an abstract characterization of analogical arguments, such as schema (4). On the other hand, as noted previously, many analogical arguments that conform to schema (4) are bad arguments. So a general justification of analogical reasoning cannot provide support for all arguments that conform to (4), on pain of proving too much. Instead, it must first specify a subset of putatively ‘good’ analogical arguments, and link the general justification to this specified subset. The problem of justification is linked to the problem of characterizing good analogical arguments . This difficulty afflicts some of the strategies described in this section.

4.1 Deductive justification

Analogical reasoning may be cast in a deductive mold. If successful, this strategy neatly solves the problem of justification. A valid deductive argument is as good as it gets.

An early version of the deductivist approach is exemplified by Aristotle’s treatment of the argument from example ( §3.2 ), the paradeigma . On this analysis, an analogical argument between source domain \(S\) and target \(T\) begins with the assumption of positive analogy \(P(S)\) and \(P(T)\), as well as the additional information \(Q(S)\). It proceeds via the generalization \(\forall x(P(x) \supset Q(x))\) to the conclusion: \(Q(T)\). Provided we can treat that intermediate generalization as an independent premise, we have a deductively valid argument. Notice, though, that the existence of the generalization renders the analogy irrelevant. We can derive \(Q(T)\) from the generalization and \(P(T)\), without any knowledge of the source domain. The literature on analogy in argumentation theory ( §2.2 ) offers further perspectives on this type of analysis, and on the question of whether analogical arguments are properly characterized as deductive.

Some recent analyses follow Aristotle in treating analogical arguments as reliant upon extra (sometimes tacit) premises, typically drawn from background knowledge, that convert the inference into a deductively valid argument––but without making the source domain irrelevant. Davies and Russell introduce a version that relies upon what they call determination rules (Russell 1986; Davies and Russell 1987; Davies 1988). Suppose that \(Q\) and \(P_1 , \ldots ,P_m\) are variables, and we have background knowledge that the value of \(Q\) is determined by the values of \(P_1 , \ldots ,P_m\). In the simplest case, where \(m = 1\) and both \(P\) and \(Q\) are binary Boolean variables, this reduces to

\[\forall x \forall y [(P(x) \equiv P(y)) \supset (Q(x) \equiv Q(y))],\]

i.e., whether or not \(P\) holds determines whether or not \(Q\) holds. More generally, the form of a determination rule is

\[Q = F(P_1, \ldots, P_m),\]

i.e., \(Q\) is a function of \(P_1, \ldots, P_m\). If we assume such a rule as part of our background knowledge, then an analogical argument with conclusion \(Q(T)\) is deductively valid. More precisely, and allowing for the case where \(Q\) is not a binary variable: if we have such a rule, and also premises stating that the source \(S\) agrees with the target \(T\) on all of the values \(P_i\), then we may validly infer that \(Q(T) = Q(S)\).

The “determination rule” analysis provides a clear and simple justification for analogical reasoning. Note that, in contrast to the Aristotelian analysis via the generalization \(\forall x(P(x) \supset Q(x))\), a determination rule does not trivialize the analogical argument. Only by combining the rule with information about the source domain can we derive the value of \(Q(T)\). To illustrate by adapting one of the examples given by Russell and Davies ( Example 16 ), let’s suppose that the value \((Q)\) of a used car (relative to a particular buyer) is determined by its year, make, mileage, condition, color and accident history (the variables \(P_i)\). It doesn’t matter if one or more of these factors are redundant or irrelevant. Provided two cars are indistinguishable on each of these points, they will have the same value. Knowledge of the source domain is necessary; we can’t derive the value of the second car from the determination rule alone. Weitzenfeld (1984) proposes a variant of this approach, advancing the slightly more general thesis that analogical arguments are deductive arguments with a missing (enthymematic) premise that amounts to a determination rule.
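The deductive character of a determination-rule inference can be sketched in code. This is only a toy illustration of the used-car example; the attribute names and values below are hypothetical, not drawn from Russell and Davies.

```python
# Toy sketch of a determination-rule inference (after Davies & Russell),
# using the used-car illustration; attribute names and values are hypothetical.

# Background knowledge: the value Q is determined by (is a function of) the
# attributes P_1, ..., P_m. Redundant or irrelevant attributes do no harm.
DETERMINING_ATTRS = ("year", "make", "mileage", "condition", "colour", "accidents")

def infer_by_determination(source, target):
    """If the source agrees with the target on every determining attribute,
    validly infer Q(T) = Q(S); otherwise the deduction does not go through."""
    if all(source[a] == target[a] for a in DETERMINING_ATTRS):
        return source["value"]  # Q(T) = Q(S)
    return None  # no conclusion licensed by the rule

source_car = {"year": 2015, "make": "Ford", "mileage": 60000,
              "condition": "good", "colour": "blue", "accidents": 0,
              "value": 9500}
target_car = {"year": 2015, "make": "Ford", "mileage": 60000,
              "condition": "good", "colour": "blue", "accidents": 0}

print(infer_by_determination(source_car, target_car))  # 9500

# Knowledge of the source is essential: the rule alone fixes no value, and a
# target differing in any determining attribute yields no conclusion.
other_car = dict(target_car, mileage=90000)
print(infer_by_determination(source_car, other_car))  # None
```

Note that the rule is used non-trivially: unlike the Aristotelian generalization, it cannot deliver \(Q(T)\) without consulting the source domain.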

Do determination rules give us a solution to the problem of providing a justification for analogical arguments? In general: no. Analogies are commonly applied to problems such as Example 8 ( morphine and meperidine ), where we are not even aware of all relevant factors, let alone in possession of a determination rule. Medical researchers conduct drug tests on animals without knowing all attributes that might be relevant to the effects of the drug. Indeed, one of the main objectives of such testing is to guard against reactions unanticipated by theory. On the “determination rule” analysis, we must either limit the scope of such arguments to cases where we have a well-supported determination rule, or focus attention on formulating and justifying an appropriate determination rule. For cases such as animal testing, neither option seems realistic.

Recasting analogy as a deductive argument may help to bring out background assumptions, but it makes little headway with the problem of justification. That problem re-appears as the need to state and establish the plausibility of a determination rule, and that is at least as difficult as justifying the original analogical argument.

4.2 Inductive justification

Some philosophers have attempted to portray, and justify, analogical reasoning in terms of some well-understood inductive argument pattern. There have been three moderately popular versions of this strategy. The first treats analogical reasoning as generalization from a single case. The second treats it as a kind of sampling argument. The third recognizes the argument from analogy as a distinctive form, but treats past successes as evidence for future success.

4.2.1 Single-case induction

Let’s reconsider Aristotle’s argument from example or paradeigma ( §3.2 ), but this time regard the generalization as justified via induction from a single case (the source domain). Can such a simple analysis of analogical arguments succeed? In general: no.

A single instance can sometimes lead to a justified generalization. Cartwright (1992) argues that we can sometimes generalize from a single careful experiment, “where we have sufficient control of the materials and our knowledge of the requisite background assumptions is secure” (51). Cartwright thinks that we can do this, for example, in experiments with compounds that have stable “Aristotelian natures.” In a similar spirit, Quine (1969) maintains that we can have instantial confirmation when dealing with natural kinds.

Even if we accept that there are such cases, the objection to understanding all analogical arguments as single-case induction is obvious: the view is simply too restrictive. Most analogical arguments will not meet the requisite conditions. We may not know that we are dealing with a natural kind or Aristotelian nature when we make the analogical argument. We may not know which properties are essential. An insistence on the ‘single-case induction’ analysis of analogical reasoning is likely to lead to skepticism (Agassi 1964, 1988).

Interpreting the argument from analogy as single-case induction is also counter-productive in another way. The simplistic analysis does nothing to advance the search for criteria that help us to distinguish between relevant and irrelevant similarities, and hence between good and bad analogical arguments.

4.2.2 Sampling arguments

On the sampling conception of analogical arguments, acknowledged similarities between two domains are treated as statistically relevant evidence for further similarities. The simplest version of the sampling argument is due to Mill (1843/1930). An argument from analogy, he writes, is “a competition between the known points of agreement and the known points of difference.” Agreement of \(A\) and \(B\) in 9 out of 10 properties implies a probability of 0.9 that \(B\) will possess any other property of \(A\): “we can reasonably expect resemblance in the same proportion” (367). His only restriction has to do with sample size: we must be relatively knowledgeable about both \(A\) and \(B\). Mill saw no difficulty in using analogical reasoning to infer characteristics of newly discovered species of plants or animals, given our extensive knowledge of botany and zoology. But if the extent of unascertained properties of \(A\) and \(B\) is large, similarity in a small sample would not be a reliable guide; hence, Mill’s dismissal of Reid’s argument about life on other planets ( Example 2 ).

The sampling argument is presented in more explicit mathematical form by Harrod (1956). The key idea is that the known properties of \(S\) (the source domain) may be considered a random sample of all \(S\)’s properties—random, that is, with respect to the attribute of also belonging to \(T\) (the target domain). If the majority of known properties that belong to \(S\) also belong to \(T\), then we should expect most other properties of \(S\) to belong to \(T\), for it is unlikely that we would have come to know just the common properties. In effect, Harrod proposes a binomial distribution, modeling ‘random selection’ of properties on random selection of balls from an urn.
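Harrod's urn model can be made concrete with a small calculation. In the sketch below (with illustrative numbers and a uniform prior that is our modeling choice, not necessarily Harrod's), an urn holds \(N\) of the source's properties, an unknown number \(M\) of which also belong to the target; we observe that \(k\) randomly drawn properties are all shared and compute the probability that a further draw is shared.

```python
# Sketch of Harrod's urn model (illustrative numbers, uniform prior assumed).
# N properties of S are balls in an urn; an unknown number M of them also
# belong to T. Observe k draws (without replacement) that are all shared, and
# compute the posterior probability that the next draw is also shared.
from fractions import Fraction
from math import comb

N, k = 20, 9  # total properties; known properties, all found to be shared

# Likelihood of drawing k shared properties out of k, given M: C(M,k)/C(N,k)
weights = {M: Fraction(comb(M, k), comb(N, k)) for M in range(k, N + 1)}
total = sum(weights.values())

# Posterior predictive probability that property k+1 is also shared
p_next = sum(w * Fraction(M - k, N - k) for M, w in weights.items()) / total
print(p_next)  # 10/11
```

Under these assumptions the answer works out to \((k+1)/(k+2)\), here \(10/11 \approx 0.91\), close to Mill's "resemblance in the same proportion." Tellingly, the result depends only on the count \(k\) of known shared properties, which already gestures at the counting problem: individuate the properties differently and the verdict changes.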

There are grave difficulties with Harrod’s and Mill’s analyses. One obvious difficulty is the counting problem : the ‘population’ of properties is poorly defined. How are we to count similarities and differences? The ratio of shared to total known properties varies dramatically according to how we do this. A second serious difficulty is the problem of bias : we cannot justify the assumption that the sample of known features is random. In the case of the urn, the selection process is arranged so that the result of each choice is not influenced by the agent’s intentions or purposes, or by prior choices. By contrast, the presentation of an analogical argument is always partisan. Bias enters into the initial representation of similarities and differences: an advocate of the argument will highlight similarities, while a critic will play up differences. The paradigm of repeated selection from an urn seems totally inappropriate. Additional variations of the sampling approach have been developed (e.g., Russell 1988), but ultimately these versions also fail to solve either the counting problem or the problem of bias.

4.2.3 Argument from past success

Section 3.6 discussed Steiner’s view that appeal to ‘Pythagorean’ analogies in physics “evokes, or should evoke, puzzlement” (1989: 454). Liston (2000) offers a possible response: physicists are entitled to use Pythagorean analogies on the basis of induction from their past success:

[The scientist] can admit that no one knows how [Pythagorean] reasoning works and argue that the very fact that similar strategies have worked well in the past is already reason enough to continue pursuing them hoping for success in the present instance. (200)

Setting aside familiar worries about arguments from success, the real problem here is to determine what counts as a similar strategy. In essence, that amounts to isolating the features of successful Pythagorean analogies. As we have seen (§2.4), nobody has yet provided a satisfactory scheme that characterizes successful analogical arguments, let alone successful Pythagorean analogical arguments.

4.3 A priori justification

An a priori approach traces the validity of a pattern of analogical reasoning, or of a particular analogical argument, to some broad and fundamental principle. Three such approaches will be outlined here.

The first is due to Keynes (1921). Keynes appeals to his famous Principle of the Limitation of Independent Variety: the assumption that the characteristics of the objects about which we generalize, however numerous, cohere together in only a finite number of independent groups.

Armed with this Principle and some additional assumptions, Keynes is able to show that in cases where there is no negative analogy , knowledge of the positive analogy increases the (logical) probability of the conclusion. If there is a non-trivial negative analogy, however, then the probability of the conclusion remains unchanged, as was pointed out by Hesse (1966). Those familiar with Carnap’s theory of logical probability will recognize that in setting up his framework, Keynes settled on a measure that permits no learning from experience.

Hesse offers a refinement of Keynes’s strategy, once again along Carnapian lines. In her (1974), she proposes what she calls the Clustering Postulate : the assumption that our epistemic probability function has a built-in bias towards generalization. The objections to such postulates of uniformity are well-known (see Salmon 1967), but even if we waive them, her argument fails. The main objection here—which also applies to Keynes—is that a purely syntactic axiom such as the Clustering Postulate fails to discriminate between analogical arguments that are good and those that are clearly without value (according to Hesse’s own material criteria, for example).

A different a priori strategy, proposed by Bartha (2010), limits the scope of justification to analogical arguments that satisfy tentative criteria for ‘good’ analogical reasoning. The criteria are those specified by the articulation model ( §3.5 ). In simplified form, they require the existence of non-trivial positive analogy and no known critical disanalogy. The scope of Bartha’s argument is also limited to analogical arguments directed at establishing prima facie plausibility, rather than degree of probability.

Bartha’s argument rests on a principle of symmetry reasoning articulated by van Fraassen (1989: 236): “problems which are essentially the same must receive essentially the same solution.” A modal extension of this principle runs roughly as follows: if problems might be essentially the same, then they might have essentially the same solution. There are two modalities here. Bartha argues that satisfaction of the criteria of the articulation model is sufficient to establish the modality in the antecedent, i.e., that the source and target domains ‘might be essentially the same’ in relevant respects. He further suggests that prima facie plausibility provides a reasonable reading of the modality in the consequent, i.e., that the problems in the two domains ‘might have essentially the same solution.’ To call a hypothesis prima facie plausible is to elevate it to the point where it merits investigation, since it might be correct.

The argument is vulnerable to two sorts of concerns. First, there are questions about the interpretation of the symmetry principle. Second, there is a residual worry that this justification, like all the others, proves too much. The articulation model may be too vague or too permissive.

4.4 Pragmatic justification

Arguably, the most promising available defense of analogical reasoning may be found in its application to case law (see Precedent and Analogy in Legal Reasoning ). Judicial decisions are based on the verdicts and reasoning that have governed relevantly similar cases, according to the doctrine of stare decisis (Levi 1949; Llewellyn 1960; Cross and Harris 1991; Sunstein 1993). Individual decisions by a court are binding on that court and lower courts; judges are obligated to decide future cases ‘in the same way.’ That is, the reasoning applied in an individual decision, referred to as the ratio decidendi , must be applied to similar future cases (see Example 10 ). In practice, of course, the situation is extremely complex. No two cases are identical. The ratio must be understood in the context of the facts of the original case, and there is considerable room for debate about its generality and its applicability to future cases. If a consensus emerges that a past case was wrongly decided, later judgments will distinguish it from new cases, effectively restricting the scope of the ratio to the original case.

The practice of following precedent can be justified by two main practical considerations. First, and above all, the practice is conservative : it provides a relatively stable basis for replicable decisions. People need to be able to predict the actions of the courts and formulate plans accordingly. Stare decisis serves as a check against arbitrary judicial decisions. Second, the practice is still reasonably progressive : it allows for the gradual evolution of the law. Careful judges distinguish bad decisions; new values and a new consensus can emerge in a series of decisions over time.

In theory, then, stare decisis strikes a healthy balance between conservative and progressive social values. This justification is pragmatic. It pre-supposes a common set of social values, and links the use of analogical reasoning to optimal promotion of those values. Notice also that justification occurs at the level of the practice in general; individual analogical arguments sometimes go astray. A full examination of the nature and foundations for stare decisis is beyond the scope of this entry, but it is worth asking the question: might it be possible to generalize the justification for stare decisis ? Is a parallel pragmatic justification available for analogical arguments in general?

Bartha (2010) offers a preliminary attempt to provide such a justification by shifting from social values to epistemic values. The general idea is that reasoning by analogy is especially well suited to the attainment of a common set of epistemic goals or values. In simple terms, analogical reasoning—when it conforms to certain criteria—achieves an excellent (perhaps optimal) balance between the competing demands of stability and innovation. It supports both conservative epistemic values, such as simplicity and coherence with existing belief, and progressive epistemic values, such as fruitfulness and theoretical unification (McMullin (1993) provides a classic list).

5. Beyond analogical arguments

As emphasized earlier, analogical reasoning takes in a great deal more than analogical arguments. In this section, we examine two broad contexts in which analogical reasoning is important.

The first, still closely linked to analogical arguments, is the confirmation of scientific hypotheses. Confirmation is the process by which a scientific hypothesis receives inductive support on the basis of evidence (see evidence , confirmation , and Bayes’ Theorem ). Confirmation may also signify the logical relationship of inductive support that obtains between a hypothesis \(H\) and a proposition \(E\) that expresses the relevant evidence. Can analogical arguments play a role, either in the process or in the logical relationship? Arguably yes (to both), but this role has to be delineated carefully, and several obstacles remain in the way of a clear account.

The second context is conceptual and theoretical development in cutting-edge scientific research. Analogies are used to suggest possible extensions of theoretical concepts and ideas. The reasoning is linked to considerations of plausibility, but there is no straightforward analysis in terms of analogical arguments.

5.1 Analogy and confirmation

How is analogical reasoning related to the confirmation of scientific hypotheses? The examples and philosophical discussion from earlier sections suggest that a good analogical argument can indeed provide support for a hypothesis. But there are good reasons to doubt the claim that analogies provide actual confirmation.

In the first place, there is a logical difficulty. To appreciate this, let us concentrate on confirmation as a relationship between propositions. Christensen (1999: 441) offers a helpful general characterization:

Some propositions seem to help make it rational to believe other propositions. When our current confidence in \(E\) helps make rational our current confidence in \(H\), we say that \(E\) confirms \(H\).

In the Bayesian model, ‘confidence’ is represented in terms of subjective probability. A Bayesian agent starts with an assignment of subjective probabilities to a class of propositions. Confirmation is understood as a three-place relation:

\[Pr(H \mid E \cdot K) > Pr(H \mid K).\]

Here \(E\) represents a proposition about accepted evidence, \(H\) stands for a hypothesis, \(K\) for background knowledge and \(Pr\) for the agent’s subjective probability function. To confirm \(H\) is to raise its conditional probability, relative to \(K\). The shift from prior probability \(Pr(H \mid K)\) to posterior probability \(Pr(H \mid E \cdot K)\) is referred to as conditionalization on \(E\). The relation between these two probabilities is typically given by Bayes’ Theorem (setting aside more complex forms of conditionalization):

\[Pr(H \mid E \cdot K) = \frac{Pr(E \mid H \cdot K) \cdot Pr(H \mid K)}{Pr(E \mid K)}.\]

For Bayesians, here is the logical difficulty: it seems that an analogical argument cannot provide confirmation. In the first place, it is not clear that we can encapsulate the information contained in an analogical argument in a single proposition, \(E\). Second, even if we can formulate a proposition \(E\) that expresses that information, it is typically not appropriate to treat it as evidence because the information contained in \(E\) is already part of the background, \(K\). This means that \(E \cdot K\) is equivalent to \(K\), and hence \(Pr(H \mid E \cdot K) = Pr(H \mid K)\). According to the Bayesian definition, we don’t have confirmation. (This is a version of the problem of old evidence; see confirmation .) Third, and perhaps most important, analogical arguments are often applied to novel hypotheses \(H\) for which the prior probability \(Pr(H \mid K)\) is not even defined. Again, the definition of confirmation in terms of Bayesian conditionalization seems inapplicable.
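The old-evidence point can be illustrated numerically. The probabilities below are invented for illustration: the first call shows ordinary conditionalization raising a prior, the second shows the case where \(E\) is already part of \(K\), so that conditionalizing leaves the prior untouched.

```python
# Toy illustration (invented numbers) of Bayesian conditionalization and the
# old-evidence difficulty for analogical arguments.

def posterior(prior_h, likelihood_e_given_hk, pr_e_given_k):
    """Bayes' Theorem: Pr(H | E.K) = Pr(E | H.K) * Pr(H | K) / Pr(E | K)."""
    return likelihood_e_given_hk * prior_h / pr_e_given_k

# Genuinely new evidence can raise the probability of H:
print(posterior(prior_h=0.25, likelihood_e_given_hk=0.8, pr_e_given_k=0.5))  # 0.4

# Old evidence: E is already entailed by K, so Pr(E | K) = Pr(E | H.K) = 1,
# the "posterior" just is the prior, and no Bayesian confirmation occurs.
print(posterior(prior_h=0.25, likelihood_e_given_hk=1.0, pr_e_given_k=1.0))  # 0.25
```

The third difficulty in the text is not even expressible here: for a genuinely novel hypothesis, `prior_h` is undefined, so the function has nothing to operate on.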

If analogies don’t provide inductive support via ordinary conditionalization, is there an alternative? Here we face a second difficulty, once again most easily stated within a Bayesian framework. Van Fraassen (1989) has a well-known objection to any belief-updating rule other than conditionalization. This objection applies to any rule that allows us to boost credences when there is no new evidence. The criticism, made vivid by the tale of Bayesian Peter, is that these ‘ampliative’ rules are vulnerable to a Dutch Book . Adopting any such rule would lead us to acknowledge as fair a system of bets that foreseeably leads to certain loss. Any rule of this type for analogical reasoning appears to be vulnerable to van Fraassen’s objection.

There appear to be at least three routes to avoiding these difficulties and finding a role for analogical arguments within Bayesian epistemology. First, there is what we might call minimal Bayesianism . Within the Bayesian framework, some writers (Jeffreys 1973; Salmon 1967, 1990; Shimony 1970) have argued that a ‘seriously proposed’ hypothesis must have a sufficiently high prior probability to allow it to become preferred as the result of observation. Salmon has suggested that analogical reasoning is one of the most important means of showing that a hypothesis is ‘serious’ in this sense. If analogical reasoning is directed primarily towards prior probability assignments, it can provide inductive support while remaining formally distinct from confirmation, avoiding the logical difficulties noted above. This approach is minimally Bayesian because it provides nothing more than an entry point into the Bayesian apparatus, and it only applies to novel hypotheses. An orthodox Bayesian, such as de Finetti (de Finetti and Savage 1972, de Finetti 1974), might have no problem in allowing that analogies play this role.

The second approach is liberal Bayesianism : we can change our prior probabilities in a non-rule-based fashion . Something along these lines is needed if analogical arguments are supposed to shift opinion about an already existing hypothesis without any new evidence. This is common in fields such as archaeology, as part of a strategy that Wylie refers to as “mobilizing old data as new evidence” (Wylie and Chapman 2016: 95). As Hawthorne (2012) notes, some Bayesians simply accept that both initial assignments and ongoing revision of prior probabilities (based on plausibility arguments) can be rational, but

the logic of Bayesian induction (as described here) has nothing to say about what values the prior plausibility assessments for hypotheses should have; and it places no restrictions on how they might change.

In other words, by not stating any rules for this type of probability revision, we avoid the difficulties noted by van Fraassen. This approach admits analogical reasoning into the Bayesian tent, but acknowledges a dark corner of the tent in which rationality operates without any clear rules.

Recently, a third approach has attracted interest: analogue confirmation or confirmation via analogue simulation . As described in (Dardashti et al. 2017), the idea is as follows:

Our key idea is that, in certain circumstances, predictions concerning inaccessible phenomena can be confirmed via an analogue simulation in a different system. (57)

Dardashti and his co-authors concentrate on a particular example ( Example 17 ): ‘dumb holes’ and other analogues to gravitational black holes (Unruh 1981; Unruh 2008). Unlike real black holes, some of these analogues can be (and indeed have been) implemented and studied in the lab. Given the exact formal analogy between our models for these systems and our models of black holes, and certain important additional assumptions, Dardashti et al. make the controversial claim that observations made about the analogues provide evidence about actual black holes. For instance, the observation of phenomena analogous to Hawking radiation in the analogue systems would provide confirmation for the existence of Hawking radiation in black holes. In a second paper (Dardashti et al. 2018, Other Internet Resources), the case for confirmation is developed within a Bayesian framework.

The appeal of a clearly articulated mechanism for analogue confirmation is obvious. It would provide a tool for exploring confirmation of inaccessible phenomena not just in cosmology, but also in historical sciences such as archaeology and evolutionary biology, and in areas of medical science where ethical constraints rule out experiments on human subjects. Furthermore, as noted by Dardashti et al., analogue confirmation relies on new evidence obtained from the analogue system, and is therefore not vulnerable to the logical difficulties noted above.

Although the concept of analogue confirmation is not entirely new (think of animal testing, as in Example 8 ), the claims of Dardashti et al. (2017, 2018 [Other Internet Resources]) require evaluation. One immediate difficulty for the black hole example: if we think in terms of ordinary analogical arguments, there is no positive analogy because, to put it simply, we have no basis of known similarities between a ‘dumb hole’ and a black hole. As Crowther et al. (2018, Other Internet Resources) argue, “it is not known if the particular modelling framework used in the derivation of Hawking radiation actually describes black holes in the first place.” This may not concern Dardashti et al., since they claim that analogue confirmation is distinct from ordinary analogical arguments. It may turn out that analogue confirmation is different for cases such as animal testing, where we have a basis of known similarities, and for cases where our only access to the target domain is via a theoretical model.

5.2 Conceptual innovation and theory development

In §3.6 , we saw that practice-based studies of analogy provide insight into the criteria for evaluating analogical arguments. Such studies also point to dynamical or programmatic roles for analogies, which appear to require evaluative frameworks that go beyond those developed for analogical arguments.

Knuuttila and Loettgers (2014) examine the role of analogical reasoning in synthetic biology, an interdisciplinary field that draws on physics, chemistry, biology, engineering and computational science. The main role for analogies in this field is not the construction of individual analogical arguments but rather the development of concepts such as “noise” and “feedback loops”. Such concepts undergo constant refinement, guided by both positive and negative analogies to their analogues in engineered and physical systems. Analogical reasoning here is “transient, heterogeneous, and programmatic” (87). Negative analogies, seen as problematic obstacles for individual analogical arguments, take on a prominent and constructive role when the focus is theoretical construction and concept refinement.

Similar observations apply to analogical reasoning in its application to another cutting-edge field: emergent gravity. In this area of physics, distinct theoretical approaches portray gravity as emerging from different microstructures (Linnemann and Visser 2018). “Novel and robust” features not present at the micro-level emerge in the gravitational theory. Analogies with other emergent phenomena, such as hydrodynamics and thermodynamics, are exploited to shape these proposals. As with synthetic biology, analogical reasoning is not directed primarily towards the formulation and assessment of individual arguments. Rather, its role is to develop different theoretical models of gravity.

These studies explore fluid and creative applications of analogy to shape concepts on the front lines of scientific research. An adequate analysis would certainly take us beyond the analysis of individual analogical arguments, which have been the focus of our attention. Knuuttila and Loettgers (2014) are led to reject the idea that the individual analogical argument is the “primary unit” in analogical reasoning, but this is a debatable conclusion. Linnemann and Visser (2018), for instance, explicitly affirm the importance of assessing the case for different gravitational models through “exemplary analogical arguments”:

We have taken up the challenge of making explicit arguments in favour of an emergent gravity paradigm… That arguments can only be plausibility arguments at the heuristic level does not mean that they are immune to scrutiny and critical assessment tout court. The philosopher of physics’ job in the process of discovery of quantum gravity… should amount to providing exactly this kind of assessments. (Linnemann and Visser 2018: 12)

Accordingly, Linnemann and Visser formulate explicit analogical arguments for each model of emergent gravity, and assess them using familiar criteria for evaluating individual analogical arguments. Arguably, even the most ambitious heuristic objectives still depend upon considerations of plausibility that benefit from being expressed, and examined, in terms of analogical arguments.

  • Achinstein, P., 1964, “Models, Analogies and Theories,” Philosophy of Science , 31: 328–349.
  • Agassi, J., 1964, “Discussion: Analogies as Generalizations,” Philosophy of Science , 31: 351–356.
  • –––, 1988, “Analogies Hard and Soft,” in D.H. Helman (ed.) 1988, 401–19.
  • Aristotle, 1984, The Complete Works of Aristotle , J. Barnes (ed.), Princeton: Princeton University Press.
  • Ashley, K.D., 1990, Modeling Legal Argument: Reasoning with Cases and Hypotheticals , Cambridge: MIT Press/Bradford Books.
  • Bailer-Jones, D., 2002, “Models, Metaphors and Analogies,” in Blackwell Guide to the Philosophy of Science , P. Machamer and M. Silberstein (eds.), 108–127, Cambridge: Blackwell.
  • Bartha, P., 2010, By Parallel Reasoning: The Construction and Evaluation of Analogical Arguments , New York: Oxford University Press.
  • Bermejo-Luque, L., 2012, “A unitary schema for arguments by analogy,” Informal Logic , 11(3): 161–172.
  • Biela, A., 1991, Analogy in Science , Frankfurt: Peter Lang.
  • Black, M., 1962, Models and Metaphors , Ithaca: Cornell University Press.
  • Campbell, N.R., 1920, Physics: The Elements , Cambridge: Cambridge University Press.
  • –––, 1957, Foundations of Science , New York: Dover.
  • Carbonell, J.G., 1983, “Learning by Analogy: Formulating and Generalizing Plans from Past Experience,” in Machine Learning: An Artificial Intelligence Approach , vol. 1 , R. Michalski, J. Carbonell and T. Mitchell (eds.), 137–162, Palo Alto: Tioga.
  • –––, 1986, “Derivational Analogy: A Theory of Reconstructive Problem Solving and Expertise Acquisition,” in Machine Learning: An Artificial Intelligence Approach, vol. 2 , J. Carbonell, R. Michalski, and T. Mitchell (eds.), 371–392, Los Altos: Morgan Kaufmann.
  • Carnap, R., 1980, “A Basic System of Inductive Logic Part II,” in Studies in Inductive Logic and Probability, vol. 2 , R.C. Jeffrey (ed.), 7–155, Berkeley: University of California Press.
  • Cartwright, N., 1992, “Aristotelian Natures and the Modern Experimental Method,” in Inference, Explanation, and Other Frustrations , J. Earman (ed.), Berkeley: University of California Press.
  • Christensen, D., 1999, “Measuring Confirmation,” Journal of Philosophy 96(9): 437–61.
  • Cohen, L. J., 1980, “Some Historical Remarks on the Baconian Conception of Probability,” Journal of the History of Ideas 41: 219–231.
  • Copi, I., 1961, Introduction to Logic, 2nd edition , New York: Macmillan.
  • Copi, I. and C. Cohen, 2005, Introduction to Logic, 12th edition, Upper Saddle River, New Jersey: Prentice-Hall.
  • Cross, R. and J.W. Harris, 1991, Precedent in English Law, 4th ed., Oxford: Clarendon Press.
  • Currie, A., 2013, “Convergence as Evidence,” British Journal for the Philosophy of Science , 64: 763–86.
  • –––, 2016, “Ethnographic analogy, the comparative method, and archaeological special pleading,” Studies in History and Philosophy of Science , 55: 84–94.
  • –––, 2018, Rock, Bone and Ruin , Cambridge, MA: MIT Press.
  • Dardashti, R., K. Thébault, and E. Winsberg, 2017, “Confirmation via Analogue Simulation: What Dumb Holes Could Tell Us about Gravity,” British Journal for the Philosophy of Science , 68: 55–89.
  • Darwin, C., 1903, More Letters of Charles Darwin, vol. I , F. Darwin (ed.), New York: D. Appleton.
  • Davies, T.R., 1988, “Determination, Uniformity, and Relevance: Normative Criteria for Generalization and Reasoning by Analogy,” in D.H. Helman (ed.) 1988, 227–50.
  • Davies, T.R. and S. Russell, 1987, “A Logical Approach to Reasoning by Analogy,” in IJCAI 87: Proceedings of the Tenth International Joint Conference on Artificial Intelligence , J. McDermott (ed.), 264–70, Los Altos, CA: Morgan Kaufmann.
  • De Finetti, B., 1974, Theory of Probability, vols. 1 and 2 , trans. A. Machí and A. Smith, New York: Wiley.
  • De Finetti, B. and L.J. Savage, 1972, “How to Choose the Initial Probabilities,” in B. de Finetti, Probability, Induction and Statistics , 143–146, New York: Wiley.
  • Descartes, R., 1637/1954, The Geometry of René Descartes , trans. D.E. Smith and M.L. Latham, New York: Dover.
  • Douven, I. and T. Williamson, 2006, “Generalizing the Lottery Paradox,” British Journal for the Philosophy of Science , 57: 755–779.
  • Eliasmith, C. and P. Thagard, 2001, “Integrating structure and meaning: a distributed model of analogical mapping,” Cognitive Science 25: 245–286.
  • Evans, T.G., 1968, “A Program for the Solution of Geometric-Analogy Intelligence-Test Questions,” in M.L. Minsky (ed.), 271–353, Semantic Information Processing , Cambridge: MIT Press.
  • Falkenhainer, B., K. Forbus, and D. Gentner, 1989/90, “The Structure-Mapping Engine: Algorithm and Examples,” Artificial Intelligence 41: 2–63.
  • Forbus, K, 2001, “Exploring Analogy in the Large,” in D. Gentner, K. Holyoak, and B. Kokinov (eds.) 2001, 23–58.
  • Forbus, K., R. Ferguson, and D. Gentner, 1994, “Incremental Structure-mapping,” in Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society , A. Ram and K. Eiselt (eds.), 313–18, Hillsdale, NJ: Lawrence Erlbaum.
  • Forbus, K., C. Riesbeck, L. Birnbaum, K. Livingston, A. Sharma, and L. Ureel, 2007, “A prototype system that learns by reading simplified texts,” in AAAI Spring Symposium on Machine Reading , Stanford University, California.
  • Forbus, K., J. Usher, A. Lovett, K. Lockwood, and J. Wetzel, 2008, “Cogsketch: Open domain sketch understanding for cognitive science research and for education,” in Proceedings of the Fifth Eurographics Workshop on Sketch-Based Interfaces and Modeling , Annecy, France.
  • Forbus, K., R. Ferguson, A. Lovett, and D. Gentner, 2017, “Extending SME to Handle Large-Scale Cognitive Modeling,” Cognitive Science , 41(5): 1152–1201.
  • Franklin, B., 1941, Benjamin Franklin’s Experiments , I.B. Cohen (ed.), Cambridge: Harvard University Press.
  • Fraser, D., forthcoming, “The development of renormalization group methods for particle physics: Formal analogies between classical statistical mechanics and quantum field theory,” Synthese , first online 29 June 2018. doi:10.1007/s11229-018-1862-0
  • Galilei, G., 1610 [1983], The Starry Messenger , S. Drake (trans.) in Telescopes, Tides and Tactics , Chicago: University of Chicago Press.
  • Gentner, D., 1983, “Structure-Mapping: A Theoretical Framework for Analogy,” Cognitive Science 7: 155–70.
  • Gentner, D., K. Holyoak, and B. Kokinov (eds.), 2001, The Analogical Mind: Perspectives from Cognitive Science , Cambridge: MIT Press.
  • Gildenhuys, P., 2004, “Darwin, Herschel, and the role of analogy in Darwin’s Origin,” Studies in the History and Philosophy of Biological and Biomedical Sciences , 35: 593–611.
  • Gould, R.A. and P.J. Watson, 1982, “A Dialogue on the Meaning and Use of Analogy in Ethnoarchaeological Reasoning,” Journal of Anthropological Archaeology 1: 355–381.
  • Govier, T., 1999, The Philosophy of Argument , Newport News, VA: Vale Press.
  • Guarini, M., 2004, “A Defence of Non-deductive Reconstructions of Analogical Arguments,” Informal Logic , 24(2): 153–168.
  • Hadamard, J., 1949, An Essay on the Psychology of Invention in the Mathematical Field , Princeton: Princeton University Press.
  • Hájek, A., 2018, “Creating heuristics for philosophical creativity,” in Creativity and Philosophy , B. Gaut and M. Kieran (eds.), New York: Routledge, 292–312.
  • Halpern, J. Y., 2003, Reasoning About Uncertainty , Cambridge, MA: MIT Press.
  • Harrod, R.F., 1956, Foundations of Inductive Logic , London: Macmillan.
  • Hawthorne, J., 2012, “Inductive Logic”, The Stanford Encyclopedia of Philosophy (Winter 2012 edition), Edward N. Zalta (ed.), URL= < https://plato.stanford.edu/archives/win2012/entries/logic-inductive/ >.
  • Helman, D.H. (ed.), 1988, Analogical Reasoning: perspectives of artificial intelligence, cognitive science, and philosophy , Dordrecht: Kluwer Academic Publishers.
  • Hempel, C.G., 1965, “Aspects of Scientific Explanation,” in Aspects of Scientific Explanation and Other Essays in the Philosophy of Science , 331–496, New York: Free Press.
  • Hesse, M.B., 1964, “Analogy and Confirmation Theory,” Philosophy of Science , 31: 319–327.
  • –––, 1966, Models and Analogies in Science , Notre Dame: University of Notre Dame Press.
  • –––, 1973, “Logic of discovery in Maxwell’s electromagnetic theory,” in Foundations of scientific method: the nineteenth century , R. Giere and R. Westfall (eds.), 86–114, Bloomington: University of Indiana Press.
  • –––, 1974, The Structure of Scientific Inference , Berkeley: University of California Press.
  • –––, 1988, “Theories, Family Resemblances and Analogy,” in D.H. Helman (ed.) 1988, 317–40.
  • Hofstadter, D., 1995, Fluid Concepts and Creative Analogies , New York: BasicBooks (Harper Collins).
  • –––, 2001, “Epilogue: Analogy as the Core of Cognition,” in Gentner, Holyoak, and Kokinov (eds.) 2001, 499–538.
  • Hofstadter, D., and E. Sander, 2013, Surfaces and Essences: Analogy as the Fuel and Fire of Thinking , New York: Basic Books.
  • Holyoak, K. and P. Thagard, 1989, “Analogical Mapping by Constraint Satisfaction,” Cognitive Science , 13: 295–355.
  • –––, 1995, Mental Leaps: Analogy in Creative Thought , Cambridge: MIT Press.
  • Huber, F., 2009, “Belief and Degrees of Belief,” in F. Huber and C. Schmidt-Petri (eds.) 2009, 1–33.
  • Huber, F. and C. Schmidt-Petri (eds.), 2009, Degrees of Belief, Dordrecht: Springer.
  • Hume, D. 1779/1947, Dialogues Concerning Natural Religion , Indianapolis: Bobbs-Merrill.
  • Hummel, J. and K. Holyoak, 1997, “Distributed Representations of Structure: A Theory of Analogical Access and Mapping,” Psychological Review 104(3): 427–466.
  • –––, 2003, “A symbolic-connectionist theory of relational inference and generalization,” Psychological Review 110: 220–264.
  • Hunter, D. and P. Whitten (eds.), 1976, Encyclopedia of Anthropology , New York: Harper & Row.
  • Huygens, C., 1690/1962, Treatise on Light , trans. S. Thompson, New York: Dover.
  • Indurkhya, B., 1992, Metaphor and Cognition , Dordrecht: Kluwer Academic Publishers.
  • Jeffreys, H., 1973, Scientific Inference, 3rd ed. , Cambridge: Cambridge University Press.
  • Keynes, J.M., 1921, A Treatise on Probability , London: Macmillan.
  • Knuuttila, T., and A. Loettgers, 2014, “Varieties of noise: Analogical reasoning in synthetic biology,” Studies in History and Philosophy of Science , 48: 76–88.
  • Kokinov, B., K. Holyoak, and D. Gentner (eds.), 2009, New Frontiers in Analogy Research : Proceedings of the Second International Conference on Analogy ANALOGY-2009 , Sofia: New Bulgarian University Press.
  • Kraus, M., 2015, “Arguments by Analogy (and What We Can Learn about Them from Aristotle),” in Reflections on Theoretical Issues in Argumentation Theory , F.H. van Eemeren and B. Garssen (eds.), Cham: Springer, 171–182. doi: 10.1007/978-3-319-21103-9_13
  • Kroes, P., 1989, “Structural analogies between physical systems,” British Journal for the Philosophy of Science , 40: 145–54.
  • Kuhn, T.S., 1996, The Structure of Scientific Revolutions, 3rd edition, Chicago: University of Chicago Press.
  • Kuipers, T., 1988, “Inductive Analogy by Similarity and Proximity,” in D.H. Helman (ed.) 1988, 299–313.
  • Lakoff, G. and M. Johnson, 1980, Metaphors We Live By , Chicago: University of Chicago Press.
  • Leatherdale, W.H., 1974, The Role of Analogy, Model, and Metaphor in Science , Amsterdam: North-Holland Publishing.
  • Lee, H.S. and Holyoak, K.J., 2008, “Absence Makes the Thought Grow Stronger: Reducing Structural Overlap Can Increase Inductive Strength,” in Proceedings of the Thirtieth Annual Conference of the Cognitive Science Society , V. Sloutsky, B. Love, and K. McRae (eds.), 297–302, Austin: Cognitive Science Society.
  • Lembeck, F., 1989, Scientific Alternatives to Animal Experiments , Chichester: Ellis Horwood.
  • Levi, E., 1949, An Introduction to Legal Reasoning , Chicago: University of Chicago Press.
  • Linnemann, N., and M. Visser, 2018, “Hints towards the emergent nature of gravity,” Studies in History and Philosophy of Modern Physics , 30: 1–13.
  • Liston, M., 2000, “Critical Discussion of Mark Steiner’s The Applicability of Mathematics as a Philosophical Problem,” Philosophia Mathematica , 3(8): 190–207.
  • Llewellyn, K., 1960, The Bramble Bush: On Our Law and its Study , New York: Oceana.
  • Lloyd, G.E.R., 1966, Polarity and Analogy , Cambridge: Cambridge University Press.
  • Macagno, F., D. Walton and C. Tindale, 2017, “Analogical Arguments: Inferential Structures and Defeasibility Conditions,” Argumentation , 31: 221–243.
  • Maher, P., 2000, “Probabilities for Two Properties,” Erkenntnis , 52: 63–91.
  • Maier, C.L., 1981, The Role of Spectroscopy in the Acceptance of the Internally Structured Atom 1860–1920 , New York: Arno Press.
  • Maxwell, J.C., 1890, Scientific Papers of James Clerk Maxwell, Vol. I , W.D. Niven (ed.), Cambridge: Cambridge University Press.
  • McKay, C.P., 1993, “Did Mars once have Martians?” Astronomy , 21(9): 26–33.
  • McMullin, Ernan, 1993, “Rationality and Paradigm Change in Science,” in World Changes: Thomas Kuhn and the Nature of Science , P. Horwich (ed.), 55–78, Cambridge: MIT Press.
  • Mill, J.S., 1843/1930, A System of Logic , London: Longmans-Green.
  • Mitchell, M., 1993, Analogy-Making as Perception , Cambridge: Bradford Books/MIT Press.
  • Moore, B. N. and R. Parker, 1998, Critical Thinking, 5th ed. , Mountain View, CA: Mayfield.
  • Nersessian, N., 2002, “Maxwell and ‘the Method of Physical Analogy’: Model-Based Reasoning, Generic Abstraction, and Conceptual Change,” in Reading Natural Philosophy , D. Malament (ed.), Chicago: Open Court.
  • –––, 2009, “Conceptual Change: Creativity, Cognition, and Culture,” in Models of Discovery and Creativity , J. Meheus and T. Nickles (eds.), Dordrecht: Springer 127–166.
  • Niiniluoto, I., 1988, “Analogy and Similarity in Scientific Reasoning,” in D.H. Helman (ed.) 1988, 271–98.
  • Norton, J., 2010, “There Are No Universal Rules for Induction,” Philosophy of Science , 77: 765–777.
  • Ortony, A. (ed.), 1979, Metaphor and Thought , Cambridge: Cambridge University Press.
  • Oppenheimer, R., 1955, “Analogy in Science,” American Psychologist 11(3): 127–135.
  • Pietarinen, J., 1972, Lawlikeness, Analogy and Inductive Logic , Amsterdam: North-Holland.
  • Poincaré, H., 1952a, Science and Hypothesis , trans. W.J. Greenstreet, New York: Dover.
  • –––, 1952b, Science and Method , trans. F. Maitland, New York: Dover.
  • Polya, G., 1954, Mathematics and Plausible Reasoning, 2nd ed. 1968, two vols., Princeton: Princeton University Press.
  • Prieditis, A. (ed.), 1988, Analogica , London: Pitman.
  • Priestley, J., 1769, 1775/1966, The History and Present State of Electricity, Vols. I and II , New York: Johnson. Reprint.
  • Quine, W.V., 1969, “Natural Kinds,” in Ontological Relativity and Other Essays , 114–138, New York: Columbia University Press.
  • Quine, W.V. and J.S. Ullian, 1970, The Web of Belief , New York: Random House.
  • Radin, M., 1933, “Case Law and Stare Decisis ,” Columbia Law Review 33 (February), 199.
  • Reid, T., 1785/1895, Essays on the Intellectual Powers of Man. The Works of Thomas Reid, vol. 3, 8th ed., Sir William Hamilton (ed.), Edinburgh: James Thin.
  • Reiss, J., 2015, “A Pragmatist Theory of Evidence,” Philosophy of Science , 82: 341–62.
  • Reynolds, A.K. and L.O. Randall, 1975, Morphine and Related Drugs , Toronto: University of Toronto Press.
  • Richards, R.A., 1997, “Darwin and the inefficacy of artificial selection,” Studies in History and Philosophy of Science , 28(1): 75–97.
  • Robinson, D.S., 1930, The Principles of Reasoning, 2nd ed ., New York: D. Appleton.
  • Romeijn, J.W., 2006, “Analogical Predictions for Explicit Similarity,” Erkenntnis , 64(2): 253–80.
  • Russell, S., 1986, Analogical and Inductive Reasoning , Ph.D. thesis, Department of Computer Science, Stanford University, Stanford, CA.
  • –––, 1988, “Analogy by Similarity,” in D.H. Helman (ed.) 1988, 251–269.
  • Salmon, W., 1967, The Foundations of Scientific Inference , Pittsburgh: University of Pittsburgh Press.
  • –––, 1990, “Rationality and Objectivity in Science, or Tom Kuhn Meets Tom Bayes,” in Scientific Theories (Minnesota Studies in the Philosophy of Science: Volume 14), C. Wade Savage (ed.), Minneapolis: University of Minnesota Press, 175–204.
  • Sanders, K., 1991, “Representing and Reasoning about Open-Textured Predicates,” in Proceedings of the Third International Conference on Artificial Intelligence and Law , New York: Association of Computing Machinery, 137–144.
  • Schlimm, D., 2008, “Two Ways of Analogy: Extending the Study of Analogies to Mathematical Domains,” Philosophy of Science , 75: 178–200.
  • Shelley, C., 1999, “Multiple Analogies in Archaeology,” Philosophy of Science , 66: 579–605.
  • –––, 2003, Multiple Analogies in Science and Philosophy , Amsterdam: John Benjamins.
  • Shimony, A., 1970, “Scientific Inference,” in The Nature and Function of Scientific Theories , R. Colodny (ed.), Pittsburgh: University of Pittsburgh Press, 79–172.
  • Snyder, L., 2006, Reforming Philosophy: A Victorian Debate on Science and Society , Chicago: University of Chicago Press.
  • Spohn, W., 2009, “A Survey of Ranking Theory,” in F. Huber and C. Schmidt-Petri (eds.) 2009, 185–228.
  • –––, 2012, The Laws of Belief: Ranking Theory and its Philosophical Applications , Oxford: Oxford University Press.
  • Stebbing, L.S., 1933, A Modern Introduction to Logic, 2nd edition , London: Methuen.
  • Steiner, M., 1989, “The Application of Mathematics to Natural Science,” Journal of Philosophy , 86: 449–480.
  • –––, 1998, The Applicability of Mathematics as a Philosophical Problem , Cambridge, MA: Harvard University Press.
  • Stepan, N., 1996, “Race and Gender: The Role of Analogy in Science,” in Feminism and Science , E.G. Keller and H. Longino (eds.), Oxford: Oxford University Press, 121–136.
  • Sterrett, S., 2006, “Models of Machines and Models of Phenomena,” International Studies in the Philosophy of Science , 20(March): 69–80.
  • Sunstein, C., 1993, “On Analogical Reasoning,” Harvard Law Review , 106: 741–791.
  • Thagard, P., 1989, “Explanatory Coherence,” Behavioral and Brain Science , 12: 435–502.
  • Timoshenko, S. and J. Goodier, 1970, Theory of Elasticity , 3rd edition, New York: McGraw-Hill.
  • Toulmin, S., 1958, The Uses of Argument , Cambridge: Cambridge University Press.
  • Turney, P., 2008, “The Latent Relation Mapping Engine: Algorithm and Experiments,” Journal of Artificial Intelligence Research , 33: 615–55.
  • Unruh, W., 1981, “Experimental Black-Hole Evaporation?,” Physical Review Letters , 46: 1351–3.
  • –––, 2008, “Dumb Holes: Analogues for Black Holes,” Philosophical Transactions of the Royal Society A , 366: 2905–13.
  • Van Fraassen, Bas, 1980, The Scientific Image , Oxford: Clarendon Press.
  • –––, 1984, “Belief and the Will,” Journal of Philosophy , 81: 235–256.
  • –––, 1989, Laws and Symmetry , Oxford: Clarendon Press.
  • –––, 1995, “Belief and the Problem of Ulysses and the Sirens,” Philosophical Studies , 77: 7–37.
  • Waller, B., 2001, “Classifying and analyzing analogies,” Informal Logic , 21(3): 199–218.
  • Walton, D. and C. Hyra, 2018, “Analogical Arguments in Persuasive and Deliberative Contexts,” Informal Logic , 38(2): 213–261.
  • Weitzenfeld, J.S., 1984, “Valid Reasoning by Analogy,” Philosophy of Science , 51: 137–49.
  • Woods, J., A. Irvine, and D. Walton, 2004, Argument: Critical Thinking, Logic and the Fallacies, 2nd edition, Toronto: Prentice-Hall.
  • Wylie, A., 1982, “An Analogy by Any Other Name Is Just as Analogical,” Journal of Anthropological Archaeology , 1: 382–401.
  • –––, 1985, “The Reaction Against Analogy,” Advances in Archaeological Method and Theory , 8: 63–111.
  • Wylie, A., and R. Chapman, 2016, Evidential Reasoning in Archaeology , Bloomsbury Academic.

Other Internet Resources

  • Crowther, K., N. Linnemann, and C. Wüthrich, 2018, “What we cannot learn from analogue experiments,” online at arXiv.org.
  • Dardashti, R., S. Hartmann, K. Thébault, and E. Winsberg, 2018, “Hawking Radiation and Analogue Experiments: A Bayesian Analysis,” online at PhilSci Archive.
  • Norton, J., 2018, “Analogy”, unpublished draft, University of Pittsburgh.
  • Resources for Research on Analogy: a Multi-Disciplinary Guide (University of Windsor)
  • UCLA Reasoning Lab (UCLA)
  • Dedre Gentner’s publications (Northwestern University)
  • The Center for Research on Concepts and Cognition (Indiana University)

abduction | analogy: medieval theories of | argument and argumentation | Bayes’ Theorem | confirmation | epistemology: Bayesian | evidence | legal reasoning: precedent and analogy in | logic: inductive | metaphor | models in science | probability, interpretations of | scientific discovery

Copyright © 2019 by Paul Bartha <paul.bartha@ubc.ca>



11 - Analogical Transfer in Problem Solving

Published online by Cambridge University Press:  05 June 2012

When people encounter a novel problem, they might be reminded of a problem they solved previously, retrieve its solution, and use it, possibly with some adaptation, to solve the novel problem. This sequence of events, or “problem-solving transfer,” has important cognitive benefits: It saves the effort needed to derive new solutions and may allow people to solve problems they wouldn't otherwise know how to solve. Of course, the cognitive benefits of problem-solving transfer are limited to the case in which people retrieve and apply a solution to an analogous problem that can, indeed, help them solve the novel problem (positive transfer). But people might also be reminded of, and attempt to transfer, a solution to a nonanalogous problem (negative transfer) and thereby waste their cognitive resources or arrive at an erroneous solution. The challenge facing researchers and educators is to identify the conditions that promote positive and deter negative problem-solving transfer. In this chapter, I describe how researchers who study analogical transfer address this challenge. Specifically, I describe work that examined how problem similarity affects transfer performance and how people determine whether the learned and the novel problems are similar. Throughout the chapter I highlight the relevance of research on analogical transfer to instructional contexts and illustrate the main concepts and findings with examples of mathematical word problems.

SIMILARITY IN SURFACE AND STRUCTURE

As in every other case of learning generalization, the main variable that mediates problem-solving transfer is the degree of similarity between the learned ( base ) and novel ( target ) problems.
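The retrieval step of transfer can be sketched as choosing the stored base problem most similar to the target. This is a deliberately minimal illustration, not a model from the chapter: problems are represented as flat feature sets and compared with Jaccard overlap, and all problem names and features are invented.

```python
# Minimal sketch of similarity-driven retrieval in problem-solving
# transfer. Problems are toy feature sets; real studies distinguish
# surface features from structural relations, which this sketch does not.

def jaccard(a, b):
    """Overlap between two feature sets, in [0, 1]."""
    return len(a & b) / len(a | b)

def retrieve_base(target, base_problems):
    """Return the name of the stored base problem most similar to the target."""
    return max(base_problems, key=lambda name: jaccard(base_problems[name], target))

bases = {
    "rate_problem":    {"trains", "speed", "distance", "ratio"},
    "mixture_problem": {"solutions", "concentration", "ratio", "volume"},
}
target = {"cars", "speed", "distance", "ratio"}
print(retrieve_base(target, bases))  # rate_problem
```

Because the measure here is purely feature overlap, a base problem that shares surface features but not structure would be retrieved just as readily — which is exactly the route to negative transfer described above.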


  • Analogical Transfer in Problem Solving
  • By Miriam Bassok , University of Washington
  • Edited by Janet E. Davidson , Lewis and Clark College, Portland , Robert J. Sternberg , Yale University, Connecticut
  • Book: The Psychology of Problem Solving
  • Online publication: 05 June 2012
  • Chapter DOI: https://doi.org/10.1017/CBO9780511615771.012



Understanding the What and When of Analogical Reasoning Across Analogy Formats: An Eye‐Tracking and Machine Learning Approach

Jean-Pierre Thibaut

Yannick Glady

Robert M. French

University of Bourgogne, Dijon, France

Starting with the hypothesis that analogical reasoning consists of a search of semantic space, we used eye‐tracking to study the time course of information integration in adults in various formats of analogies. The two main questions we asked were whether adults would follow the same search strategies for different types of analogical problems and levels of complexity and how they would adapt their search to the difficulty of the task. We compared these results to predictions from the literature. Machine learning techniques, in particular support vector machines (SVMs), processed the data to find out which sets of transitions best predicted the output of a trial (error or correct) or the type of analogy (simple or complex). Results revealed common search patterns, but with local adaptations to the specifics of each type of problem, both in terms of looking‐time durations and the number and types of saccades. In general, participants organized their search around source‐domain relations that they generalized to the target domain. However, somewhat surprisingly, over the course of the entire trial, their search included not only semantically related distractors but also unrelated distractors, depending on the difficulty of the trial. An SVM analysis revealed which types of transitions are able to discriminate between analogy tasks. We discuss these results in light of existing models of analogical reasoning.
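The classification idea in the abstract — predicting a trial's outcome from counts of gaze transitions between areas of interest — can be sketched in a few lines. A simple hand-rolled perceptron stands in here for the SVMs used in the study, and the feature names and data are invented for illustration only.

```python
# Sketch: predict trial outcome (1 = correct, 0 = error) from counts of
# gaze transitions between areas of interest. A perceptron stands in for
# the study's SVM; the transition features and data are invented.

# Each vector counts transitions: [A-B, C-D, source-to-distractor]
trials = [
    ([5, 4, 0], 1),   # many relational transitions -> correct
    ([4, 5, 1], 1),
    ([1, 1, 6], 0),   # mostly distractor-bound -> error
    ([0, 2, 5], 0),
]

def train_perceptron(data, epochs=20, lr=0.1):
    """Online perceptron training over labelled transition-count vectors."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

w, b = train_perceptron(trials)
print([predict(w, b, x) for x, _ in trials])  # [1, 1, 0, 0]
```

The point of the sketch is only the shape of the analysis: which transition types carry weight after training is what an SVM analysis like the authors' interrogates on real eye-tracking data.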

1. Introduction

Analogical reasoning is typically conceived of as a process in which a base domain and a target domain are compared in order to find relational correspondences between them (e.g., Gentner, Holyoak, & Kokinov, 2001; Holyoak, 2012). Analogies play a central role in many activities and, as such, have been the focus of numerous studies over the years (e.g., Hofstadter & Sander, 2013; Holyoak, 2012; Krawczyk, 2017).

Understanding an analogy is a multifaceted task requiring systematic comparisons between the items in both domains of the analogy problem. Most conceptions of analogical reasoning include the following processes: (1) encoding the items making up the problem; (2) search and retrieval of a relation in memory that connects the two terms, A and B, in the base domain (e.g., “lives in” for bird and nest); (3) mapping of the hypothesized relation holding in the base domain to the target domain, between a C and a D (e.g., “dog” with “doghouse”); and (4) evaluation of the soundness of the mapping (e.g., can both pairs be unified by the “lives in” relation?) (e.g., Chen, Honomichl, Kennedy, & Tan, 2016; French, 2002).
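The four processes just listed can be sketched for an A:B::C:? analogy. This is a toy illustration, not any of the cited models: the relation store and items are invented, and encoding (step 1) is trivialized by treating items as given strings.

```python
# Toy sketch of the four analogy processes for an A:B::C:? problem.
# The relation store is invented; items arrive pre-encoded as strings (1).

RELATIONS = {
    ("bird", "nest"): "lives_in",
    ("dog", "doghouse"): "lives_in",
    ("dog", "bone"): "chews",
}

def retrieve_relation(a, b):
    """(2) Search memory for a relation connecting A and B."""
    return RELATIONS.get((a, b))

def solve_analogy(a, b, c, candidates):
    relation = retrieve_relation(a, b)          # (2) retrieval
    for d in candidates:                        # (3) map relation to target
        if RELATIONS.get((c, d)) == relation:   # (4) evaluate the mapping
            return d
    return None

print(solve_analogy("bird", "nest", "dog", ["bone", "doghouse"]))  # doghouse
```

Real models differ precisely in how steps (2)–(4) interact — whether the base relation is generalized first or the alignment of roles drives the search, the contrast drawn in the next paragraph.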

At the heart of most analogy models is the mapping process between the base and the target domains. Mapping is the process involved in finding a set of systematic correspondences between the source and the target domain. This means establishing that the relations holding between a subset of objects, events, and characters in the source domain also hold between a subset of objects, events, and characters in a target domain (Holyoak, 2012). Importantly, depending on the model of analogical reasoning, the emphasis will either be on the alignment between entities playing the same role in both domains (i.e., A with C, and B with D) or will involve a generalization of the relation discovered in the base domain and subsequently applied to the target domain. In the latter case, hypotheses must be made regarding which relation(s) in the base domain (i.e., A and B) can be applied to the other domain (i.e., C and D) (for reviews, see Gentner & Forbus, 2011; Gentner et al., 2001; Holyoak, 2012).

The present paper aims to study the temporal dynamics of base‐to‐target domain mapping by means of eye‐tracking, with analogies of various types and varying levels of difficulty. The central issue that we will explore is whether the temporal organization of mapping varies depending on the structure of the analogy task (i.e., scene analogies and proportional analogies), and/or according to characteristics of the domains being compared (e.g., analogy difficulty as measured by the semantic distance between domains or by the type and number of distractors).

1.1. Definition of the search space

The search space of an analogy can vary from well‐defined spaces in which the set of potential dimensions is limited, to much more open search spaces in the case of very different source and target domains and/or semantically weakly associated entities in both domains. The matrix completion task (Chen et al., 2016 ; Sternberg, 1977 ) is an example of the first. Participants view a matrix of items that share a particular relationship and then select a solution that completes the matrix in a way that is consistent with the relation between the objects in the matrix. Problem complexity is usually defined by a small number of dimensions that differ in saliency and by the number of transformations that have to be kept active in working memory (see Bethell‐Fox, Lohman, & Snow, 1984 ; Stevenson, Heiser, & Resing, 2013 ). Its difficulty involves integrating all of the dimensions into a single representation and distinguishing the correct solution from similar alternatives. By contrast, with semantic analogies, the difficulty is a matter of the conceptual distance between the base and target domains and of the association strength between the items involved in both domains. In addition, the targeted relations can be obvious or less obvious, sometimes necessitating semantic rerepresentation when the salience of the relationally best solution is low (e.g., Green, 2016 ). Finally, the presence of irrelevant dimensions, whether salient or not, also contributes to task difficulty (e.g., Thibaut, French, & Vezneva, 2010a ).

1.2. Computational analysis of the dynamics of analogy‐making

In solving an analogy problem, what information should be processed, and when? Most research in the field has dealt with interpretations of analogies, their soundness, and factors influencing their comprehension, with or without reaction-time (RT) data. Only a few studies have dealt directly with the temporal organization (i.e., the dynamics) of the search through semantic space required to solve an analogy problem (see below). However, computer models have proposed various ways in which the dynamics of solving analogy problems might occur. These computational models more or less explicitly posit a temporal organization of the search for a solution. We will compare four distinct proposals derived from computational models (see French, 2002; Gentner & Forbus, 2011 for reviews)––namely, the “alignment-first,” “projection-first,” “parallel terraced scan,” and “relational-priming” models.

In a predicate-argument context, representations consist of predicates and their arguments. The predicate instantiates a relational structure in the base domain, such as revolves_around (earth, sun), and one attempts to align the arguments in the base domain with arguments in the target domain. Gentner and Forbus (2011) call this an “alignment-first” approach (e.g., SME or ACME). It is derived from the structural alignment hypothesis (Falkenhainer, Forbus, & Gentner, 1989; Markman & Gentner, 1993), according to which the items that compose the base and the target domains are aligned first and inferences are then projected from the base pair to the target pair. In the A:B::C:? T(arget) paradigm, this means that one would first align A with C, and would then look for a solution, T (or Ts), that is conceptually aligned with B. This predicts early attention to the A-C and B-T(arget) pairs.

By contrast, “Projection‐first” models (e.g., LISA, Hummel & Holyoak, 1997 ; DORA, Doumas, Hummel, & Sandhofer, 2008 ) begin by identifying a set of relations that might be relevant to unify the stimuli in the base pair. Once identified, they project this relation from the base pair (i.e., here, the A‐B pair) to the target (i.e., the C‐T(arget) pair). This predicts early attention to the A‐B pair, as participants study the pair for a relation between the two items. This would imply more early AB saccades (i.e., saccades between A and B) than saccades within the target domain or from the base to the target, followed by more attention to C and Target and C‐Target saccades. This contrasts distinctly with the alignment‐first case predicting more between‐domain AC and BT saccades.
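The contrasting early-saccade predictions of the two views could be operationalized as a toy classifier over the first few transitions of a trial. The AOI labels, window size, and counting scheme below are illustrative choices of ours, not an analysis from the literature.

```python
# Hypothetical sketch: classify an early-trial saccade profile as more
# consistent with projection-first (many A-B saccades) or alignment-first
# (many A-C / B-T saccades).
from collections import Counter

def early_strategy(fixations, n_early=6):
    """fixations: ordered AOI labels ('A', 'B', 'C', 'T') for one trial.
    Counts unordered transitions among the first n_early fixations."""
    early = fixations[:n_early]
    transitions = Counter(
        frozenset(pair) for pair in zip(early, early[1:]) if pair[0] != pair[1]
    )
    projection = transitions[frozenset({"A", "B"})]
    alignment = transitions[frozenset({"A", "C"})] + transitions[frozenset({"B", "T"})]
    if projection > alignment:
        return "projection-first"
    if alignment > projection:
        return "alignment-first"
    return "undecided"

print(early_strategy(["A", "B", "A", "B", "C", "T"]))  # projection-first
```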

The stochastic parallel-terraced-scan models (e.g., French, 1995; Mitchell, 1993) constitute the third class of models. These models make no a priori prediction about the order in which items will be aligned but, rather, dynamically discover relations between objects based on the evolving activation levels of the objects and potential relations between them in a semantic network. In this way, the model gradually converges on a coherent structure of the problem upon which an answer is based. Activations in the semantic network change dynamically according to what the program happens to have (stochastically) perceived up to that point. In the dog:doghouse::bird:nest example, within- and cross-domain relations are discovered, whether correct (sleeps-in) or incorrect (builds, for bird and nest). When nothing matches between the two domains, temperature rises and the search is extended beyond its normal bounds. Relations with no corresponding relation in the base domain lose activation, while the activation of the relational match sleeps-in and its associated arguments increases, with dynamics that can differ from those of the “alignment-first” or “projection-first” approaches.
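The convergence dynamics described here—unsupported relations losing activation while a cross-domain match gains it—can be caricatured in a few lines. This is not Copycat or Tabletop themselves; the decay and boost parameters are arbitrary illustrative values.

```python
# Toy dynamics sketch of the convergence idea behind parallel-terraced-scan
# models: relations without a counterpart in the base domain decay, while
# a relation matched in both domains gains activation.

def converge(target_relations, base_relations, steps=10, decay=0.8, boost=0.2):
    """target_relations: initial activation per candidate relation in the
    target domain; base_relations: relations present in the base domain."""
    act = dict(target_relations)
    for _ in range(steps):
        for rel in act:
            if rel in base_relations:
                act[rel] = min(1.0, act[rel] + boost)  # supported: grows
            else:
                act[rel] *= decay                      # unsupported: decays
    return act

final = converge({"sleeps_in": 0.5, "builds": 0.5}, base_relations={"sleeps_in"})
print(final)  # sleeps_in saturates; builds decays toward zero
```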

Importantly, most conceptions of analogical reasoning, in particular the three mentioned above, involve in one way or another the idea of a selection or discovery of relations among a set of possible relations, allowing us to progressively converge on a solution, analogically correct or not. This is one of the central tenets of most models of analogy‐making. They converge on the idea of a lessening of activation for unrelated, local, matches in favor of an activation/construction of a relational structure, generally written in a predicate‐argument format.

A fourth view, the relational-priming model, proposed by Leech, Mareschal, and Cooper (2008), gives no central role to mapping in the development of analogies. According to this model, children first study the A-B pair. Then, the relation found between A and B (the authors appeal to the retrieval of a relevant transformation between A and B; e.g., cuts for knife and bread) automatically primes the retrieval of a relationship between C and the target item into which it is transformed. This model predicts that participants study the A-B pair first and then turn to C and the solution set, much like the projection-first model. However, in contrast to the projection-first model, the relational-priming model's solution to the problem is found through priming, which involves no systematic comparisons between pairs of items in the target domain and mostly ignores distractors. The solution has been primed by the original relation found in the base domain and is directly applied to C. In other words, there is no notion of mapping, or even of an active search, in this model.

Finally, most models include an evaluation of the solution at the end of the process, determining to what extent the inferences discovered are relevant to the context at hand. The precise temporal organization of this evaluation process remains an open question. In principle, an evaluation would be associated with “checking” transitions by going between the target and base domains, in other words, checking the compatibility of their solution in the target domain with the relation they found in the base domain. The evaluation would generally also involve comparisons between the target and the semantic distractors. An open empirical question is whether these processes take place during the entire trial or at the end of the trial.

2. Eye‐tracking in analogical reasoning

2.1. The temporal dynamics of analogical reasoning

Thibaut, French, and Vezneva ( 2010b ) characterized analogical reasoning as a search in a space that is dynamically constructed as comparisons proceed. By definition, analogical reasoning involves multiple sources of information and various comparisons within and between the domains making up the problem and their integration into a consistent relational structure. This means that perceptual or semantic similarities or local relations that, initially, seemed important might be discarded during the construction of a relational system that best unifies the two domains (see the systematicity constraint, Gentner, 1983 ). Eye‐tracking technology can allow us to identify differences between types of analogies, between levels of difficulty, whether a child or an adult is solving the problem (e.g., Thibaut & French, 2016 ), and even whether or not a correct answer to the problem will be given. This is largely because looking positions and times are highly correlated with the independently assessed informativeness of regions within a scene (e.g., Rayner, 2012 ).

The use of eye‐tracking techniques that we have developed (French, Glady, & Thibaut, 2017 ; Thibaut & French, 2016 ) and that are used to analyze the results from the two experiments reported in this paper will allow us to show that time‐course predictions based on the above computational models do not, in general, present a fully accurate picture of the dynamics of how analogy problems are actually solved.

2.1.1. Eye‐tracking contributions to analogical reasoning

Previous eye‐tracking studies can be compared along a number of dimensions: format of the analogies (semantic analogies or matrices), types of distractors used (semantically or perceptually related), global analyses on entire trials or slices of trials, age of participants (i.e., children or adults), level of problem difficulty, and a comparison of projection‐first, alignment‐first, and relational‐priming views of analogical reasoning. Bethell‐Fox et al. ( 1984 ) were among the first to investigate participants’ strategies based on eye movements. They studied Raven geometrical matrices and manipulated problem difficulty by modifying the number of transformations, the similarity to item C in the matrix, and the number of alternatives. They hypothesized that these factors would elicit alternative strategies. In this respect, their paper was able to distinguish between a constructive‐matching strategy, which is analogous to the projection‐first strategy (a first analysis of the matrix is followed by an exploration of the solutions), and a response‐elimination strategy that consisted of successive back and forth explorations between the matrix and each of the alternatives. They showed that the manipulated factors influenced participants’ strategies (see also Mulholland, Pellegrino, & Glaser, 1980 , in a true ‐ false judgment task). However, they did not address the temporal dynamics of solving the problems.

Using a similar matrix task in a developmental context, Chen et al. (2016) compared the strategies of 5- and 8-year-old children. Differences in performance between age groups resulted from differences in the use of optimal processing strategies: children who employed optimal processing strategies early on were more likely to correctly solve subsequent problems in later phases. One interesting feature of the study is the authors' equating types of transitions with specific processing strategies, such as item encoding, rule integration, and so on. However, these authors provide only global measures and nothing about a moment-to-moment analysis of the strategies (i.e., when encoding, or rule integration, takes place during solving). With adults, Hayes, Petrov, and Sederberg (2011) applied a novel scanpath analysis in order to capture statistical regularities in eye-movement sequences in the Raven's Advanced Progressive Matrices. They identified two principal components predicting individual scores, the first being a row-by-row scan and the second toggling toward the response area. The authors interpret the row-by-row strategy as a clue to constructive matching and toggles as a clue to response elimination. The authors did not analyze when these toggles took place but note that this can be done by contrasting the beginning and the end of the trial, an approach we follow. This is important because toggles might also appear in a constructive-matching strategy, when participants compare the options with the regularities they found in the base.

Gordon and Moser ( 2007 ) conducted an eye‐tracking study of analogical reasoning in adults using scenes from Richland, Morrison, and Holyoak ( 2006 ). Participants initially focused on the “actor‐patient” pair in the source scene (a dog chasing a cat, see Fig.  4a , below, top panel). They then looked for the solution in the target image (a second actor‐patient pair: a girl chasing a boy, Fig.  4b , below, lower panel). Significantly, the authors also studied saccades involving the distractors (i.e., saccades between the actor and the perceptually similar distractor in the target pair) and showed that these saccades also occurred after saccades toward relational matches. This suggests that participants did not systematically process object matches before relational matches, which, according to the authors, contradicts Rattermann and Gentner's ( 1998 ) claim “that object matches are generally computed before relational matches” (p. 471). However, the study did not provide a systematic analysis of the saccades between the source and the target scenes. It is thus difficult to test the projection‐first versus the alignment‐first hypotheses. Finally, eye‐movement analyses focused on data collected during a 10 s study period before the arrow pointing to a source object was introduced in the upper scene. Without the arrow, participants might have engaged in a less targeted search, identifying characters and stimuli, potential relations, but without the attentional weights that might be elicited by the targeted (arrow‐based) role.


(a) Simple scene analogy with a single salient relation in the base domain, in this case “a cat chasing a mouse.” (b) Complex scene analogy with two salient relations in the base domain: “a cat chasing a mouse” and “a dog chasing a cat,” where B is the targeted stimulus and T is the participants' choice.

Thibaut and French ( 2016 ) used eye‐tracking to study the development of analogical reasoning, from 5 years of age to adulthood (5‐, 8‐, 13‐year‐olds and adults) with classical proportional analogies (A:B::C:? paradigm). In order to study the “temporal dynamics” of solving problems, they split each trial into three identical time slices and analyzed the distribution of gazes toward areas of interest (AOIs) and transitions between AOIs. One major result was the significant differences between age groups in the temporal distribution of gaze profiles. Crucially, adults seemed to follow mostly a projection‐first strategy, whereas children started by studying the C item and organized their search around it, which can be characterized as an “undirected” search strategy. At the onset of the trials, children had significantly fewer AB transitions than adults and more of other types of transitions. Adults also seemed to be more engaged in solution monitoring, since, by the end of the trial, they had more saccades from the solution set to the AB pair. Children's results were interpreted in terms of difficulty in inhibiting the main goal of the task, that is, “what goes with C,” so that they could focus on the AB pair (see Glady, French, & Thibaut, 2017 ). Overall, the results showed that adults first analyzed the AB pair and then applied that relation to the target domain. Thereafter, they checked their solution by continuing to look at the distractors and returning to the source domain. In sharp contrast, children tended to focus early on the C item, ignoring, or at least attaching less importance than adults to the AB pair. Both for adults and children, our data revealed little, or no, evidence of AC and/or BT alignments, the key to alignment‐first models. This lack of AC and BT saccades has been confirmed by Vendetti, Starr, Johnson, Modavi, and Bunge ( 2017 ) in their work on similar proportional analogies (see also Starr, Vendetti, & Bunge, 2018 ). 
The latter authors contrasted three strategies, including projection-first and alignment-first, each implemented by a different algorithm essentially based on a different subset of early gazes and transitions. The authors followed a “winner‐take all approach in which a trial was classified as a particular strategy” (Vendetti et al., 2017, p. 4) depending on its score. In both papers, Starr and colleagues' results favored the projection-first strategy. Hence, by construction, only early gazes were considered, not the temporal dynamics of the entire trial.
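The three-slice analysis used by Thibaut and French (2016) can be sketched roughly as follows, assuming each trial is available as an ordered sequence of AOI labels. The data format and transition coding are our assumptions.

```python
# Minimal sketch of a time-slice analysis: each trial's transition
# sequence is split into three slices and the distribution of transition
# types is tallied per slice.
from collections import Counter

def slice_transitions(aoi_sequence, n_slices=3):
    """aoi_sequence: ordered AOI labels for one trial.
    Returns one Counter of transition types (e.g., 'AB') per slice."""
    transitions = [
        "".join(sorted(pair))
        for pair in zip(aoi_sequence, aoi_sequence[1:])
        if pair[0] != pair[1]
    ]
    size = max(1, len(transitions) // n_slices)
    slices = [transitions[i * size:(i + 1) * size] for i in range(n_slices - 1)]
    slices.append(transitions[(n_slices - 1) * size:])  # last slice takes remainder
    return [Counter(s) for s in slices]

trial = ["A", "B", "A", "B", "C", "T", "C", "T", "A"]
for i, counts in enumerate(slice_transitions(trial), start=1):
    print(f"slice {i}: {dict(counts)}")
```

On this toy trial, AB transitions dominate the first slice and CT transitions the last, the kind of profile the projection-first account predicts.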

In short, eye‐tracking data show that adult participants generally favor a mostly projection‐first approach. However, with the exception of Gordon and Moser ( 2007 ) and Thibaut and French ( 2016 ), most studies do not consider the temporal organization of the trial, which we will try to capture with analyses of the beginning, the middle, and the end of a trial (see results, below).

3. Goals and predictions

Our two experiments tackle one main question––namely, how participants organize their search as a function of analogy characteristics. We compare search‐strategy adaptations as a function of analogy type (i.e., scene analogies and proportional analogies) and problem difficulty (i.e., a greater number of potential relations between items, weaker semantic relations between items, etc.).

One way to study the effects of the difficulty of semantic analogies in a well‐controlled setting is to refer to semantic distance. The semantic distance between domains has been of particular interest in analogical reasoning research. For example, Vendetti et al. ( 2012 ) assessed participants’ validity judgments of A:B::C:D analogies with near‐domain (A and C, and B and D, belonging to the same domain) and far‐domain analogies (the source and target domains involving different conceptual domains). They described a general decrease in correct responses in far‐domain analogies. Semantic distance has also been shown to affect the activation of brain areas and the dynamics of brain processes during analogical judgment (Green, Kraemer, Fugelsang, Gray, & Dunbar, 2010 ; Kmiecik, Brisson, & Morrison, 2019 ).

3.1. Exploring the temporal dynamics of analogy‐making with semantic analogies

Previous eye-tracking studies of analogy were limited in that they considered only one type of analogy, most of them matrices based on a set of well-defined dimensions. By construction, matrices display logical progressions across stimuli and rely heavily on the identification of clear, differentially salient dimensions. Semantic analogies, by contrast, are defined over semantic spaces that cannot a priori be described with a finite set of dimensions: an infinite number of descriptors, from surface semantic features to highly sophisticated interpretations, can be applied to any situation (Hofstadter & Sander, 2013; Murphy & Medin, 1985). Previous studies have not examined clearly defined moments of a trial with semantic analogies in which difficulty was a key variable in the task (French et al., 2017; Gordon & Moser, 2007; Thibaut & French, 2016).

Most eye-tracking data are compatible with a projection-first hypothesis, showing that the base domain is studied first and the results of this exploration are then applied to the target domain. Our paper considers various analogical mapping formats, specifically standard proportional analogies and scene analogies. We analyze the time course of the comparisons taking place until a decision is made, and how distractors are processed and rejected as a function of analogy difficulty.

We will manipulate difficulty in several ways and use techniques we have developed (French et al., 2017) to analyze the predictive and classification power of sets of transitions between AOIs. The idea is to use machine learning classification algorithms to study the predictive dimensions of participants' scanpaths, aiming to identify “transition profiles” (i.e., subsets of saccade types) that might distinguish complex trials from simple ones, distinguish correct trials from errors, or predict the outcome of a trial (correct or error) from its first third.

3.2. Task specificity

Do participants adapt their search strategy to the specifics of the analogy task? Given the preeminence of the projection-first strategy in existing studies, our main hypothesis was that the projection-first strategy is driven, at least in part, by the particular analogy format, with the layout of proportional analogies implicitly encouraging participants to first seek a unifying relation that would drive the search for the compatible target item. By comparison, scene analogies provide participants with structured scenes with oriented relations, in which the identification of equivalent items becomes the central issue. In terms of a predicate-argument structure (e.g., chase (cat, mouse)), the participants' task is to find the first argument of the predicate chase in the target image that plays the same role as the first argument of chase in the source image. Complex (difficult) analogy problems, in comparison with simpler ones, require more evaluation, often thought of as an alignment of predicates playing the same role.

Our hypotheses are as follows:

  • Hypothesis 1 is divided into three options––namely:
  • ‐ Hypothesis 1a, based on prior studies, predicts a general pattern of exploration of the base domain (i.e., A, B) and then the target domain (i.e., C, T). Empirical data should confirm this pattern across analogy formats and complexity levels.
  • ‐ Hypothesis 1b predicts a higher number of alignment transitions (i.e., AC and BT) with more complex analogies or with the scene format (Experiment 2). Indeed, scenes might involve more searching for equivalences between items (see above).
  • ‐ Hypothesis 1c predicts a mixture of projection and alignment. Under this hypothesis, participants would analyze the A‐B, C‐T, and C‐D pairs in a back‐and‐forth manner. The A‐B relation would then be applied to the items in the target domain. Once this is done, they might compare A and C, or B and D, as a means of verifying their answer.
  • Hypothesis 2 . Most models describe analogical reasoning in terms of a progressive convergence toward the relational, analogical solution (see French, 2002 , Gentner & Forbus, 2011 ) and assume in one way or another, that local, irrelevant, matches are disregarded by the end of the process (see Gordon & Moser, 2007 ; Rattermann & Gentner, 1998 , for discussions).
  • ‐ Hypothesis 2a predicts that participants will focus less on distractors at the end of the trial than at the beginning.
  • ‐ Hypothesis 2b predicts that distractors will be compared to the other stimuli less often in easy (or simple) trials than in complex trials because the solution comes more easily to mind and, as a result, unrelated distractors can be ignored more quickly than other stimuli. The general notion of convergence toward a correct solution predicts that by the end of the trial, when a correct solution is chosen, most distractors should be deactivated.

3.2.1. Predicting analogy type, complexity, and performance: A machine learning approach

The second main goal of the paper was to use machine learning classification algorithms to identify subsets of item-to-item transitions (saccades) that might distinguish complex trials from simpler ones and correct trials from erroneous ones (see French et al., 2017 for a description of these techniques). By looking at various “transition profiles” (sets of saccades), we are able to predict with a high degree of accuracy whether the problem being solved is complex or simple. We used a simple and powerful classification technique, a support vector machine (SVM; Vapnik, 1999), combined with leave-one-out cross-validation (LOOCV; see below). Using this technique, in French et al. (2017), we were able to predict whether an adult or a child was solving a particular analogy problem, and this in the first third of the trial, based only on the distribution of the transitions (saccades). Using an identical approach in the present paper, we considered how well various transition profiles predicted the difficulty of the problem under scrutiny (see the Results section for more details).
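The LOOCV scheme can be sketched as below. For self-containment, a toy nearest-centroid classifier stands in for the SVM (in practice one would use an SVM implementation such as scikit-learn's SVC), and the transition-profile vectors are fabricated examples, not the study's data.

```python
# Illustrative sketch of leave-one-out cross-validation (LOOCV) over
# per-trial transition profiles (counts of saccade types).

def nearest_centroid(train, labels, x):
    """Assign x to the class whose mean training vector is closest."""
    centroids = {}
    for lab in set(labels):
        rows = [v for v, l in zip(train, labels) if l == lab]
        centroids[lab] = [sum(col) / len(rows) for col in zip(*rows)]
    dist = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
    return min(centroids, key=lambda lab: dist(centroids[lab], x))

def loocv_accuracy(profiles, labels):
    """Leave each trial out once, train on the rest, test on it."""
    correct = 0
    for i in range(len(profiles)):
        train = profiles[:i] + profiles[i + 1:]
        train_labels = labels[:i] + labels[i + 1:]
        correct += nearest_centroid(train, train_labels, profiles[i]) == labels[i]
    return correct / len(profiles)

# Fabricated transition profiles: [AB, CT, CSemDis] counts per trial.
profiles = [[6, 2, 0], [5, 3, 1], [7, 2, 1], [2, 5, 4], [1, 6, 5], [2, 4, 5]]
labels = ["simple", "simple", "simple", "complex", "complex", "complex"]
print(loocv_accuracy(profiles, labels))  # 1.0 on this separable toy data
```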

  • Experiment 1 consisted of proportional analogies with words. We divided the analogies into two classes (“simple” and “complex”) according to their difficulty.
  • Experiment 2 was similar to the first experiment, with the difference being that we used scene analogies with two levels of difficulty (“easy‐simple,” one relation, and “complex,” two relations).

4. Experiment 1: Word analogies

The goal of this experiment was to test whether the level of complexity and the type of distractors (semantically related or unrelated) would influence the time course of analogical reasoning. Previous experiments have manipulated the difficulty of analogy problems by changing the number of distractors, the number of potential solution options, the saliency of feature dimensions (the typical case being matrices of the Raven type, Bethell-Fox et al., 1984), semantic distance (Green et al., 2010), the presence of cross-mappings (Gentner & Toupin, 1986), and so on. For example, with semantic analogies, Green et al. (2010) (see also Kmiecik et al., 2019) studied the neural activation dynamics of brain processes during analogical-judgment tasks, finding more errors for semantically distant analogies (see also Bugaiska & Thibaut, 2015) or for analogies constructed around weakly rather than strongly semantically associated pairs (see Vendetti et al., 2012; Vendetti, Wu, & Holyoak, 2014). Green et al. (2010, 2012; see also Bendetowicz, Urbanski, Aichelburg, Levy, & Volle, 2017; Hobeika, Diard-Detoeuf, Garcin, Levy, & Volle, 2016) have shown that frontopolar activation increases when the semantic distance between the A-B and C-D pairs in a “true/false” verbal analogy problem is increased. In general, analogical transfer is more difficult when the conceptual domains involved are remote rather than close (Gick & Holyoak, 1980; Keane, 1987).

However, how analogy difficulty correlates with search strategies remains an open question. As summarized above, Thibaut and French ( 2016 ) followed a developmental approach. However, their adult participants saw only problems that could be solved by 6‐year‐olds, which did not allow the authors to directly test the effect of task difficulty. In the present experiment, our general hypothesis predicts that semantically more complex trials would produce different search strategies than simpler problems. Indeed, they may evoke several highly associated words (in the case of word analogies) that are not necessarily relationally consistent solutions and need to be inhibited in order to rerepresent the pair in terms of novel relations (see Collins & Loftus, 1975 ; Murphy, 2002 ; Steyvers & Tenenbaum, 2005 , for discussions of the notions of semantic networks).

In this experiment, analogy difficulty was defined in terms of semantic relatedness: in Simple trials the unifying semantic relation was relatively obvious, whereas in Complex trials it was not. For example, in the Simple trial cow:milk::hen:?, the relation produces readily yields the solution “egg,” whereas the Complex trial violence:activity::gloom:? rests on the less obvious relation is-a-type-of (see Materials). Several hypotheses can be made regarding the time course of trials with these differing levels of difficulty.

The general prediction above is that we should observe the same projection-first profile in both complexity conditions. However, Complex items might elicit more alignments than Simple problems because of the difficulty of establishing which stimuli are semantically equivalent while playing the same role in the two domains. Thus, a key purpose of the present experiment is to examine how these two strategies might combine.

Second, it has been claimed that object matches (i.e., surface similarity) are processed before relational matches (Goldstone, 1994a, 1994b; Rattermann & Gentner, 1998). In the present study, strongly semantically associated distractors will induce surface-similarity matching. Hypothesis 2, above, predicts that in Complex trials participants will produce more transitions involving distractors (e.g., semantic-distractor transitions [CSemDis]) at the beginning of the trial than in Simple trials, both because in Complex trials participants begin by considering semantically related options and because the semantic space is more open. Later on, following most models, we predict that distractors should receive much less attention in both conditions. We also predicted that the early imbalance in favor of AB transitions observed in Simple trials should decrease in the Complex case. Indeed, less obvious relations should elicit a more systematic search between C and the solution set when a solution is not immediately forthcoming.

Hypothesis 2 also predicts few gazes toward semantically unrelated distractors at any moment in a trial, except at the beginning when participants start to explore the available options. One exploratory question here is whether participants might check less obvious or less plausible solutions, especially in complex trials.

5.1. Participants

Participants were 20 students at the University of Burgundy ( M = 23.8 years; SD = 4.2; from 17 to 35 years). They participated voluntarily, gave their informed consent, and were unaware of the goals of the experiment.

5.2. Materials

The task consisted of 22 trials of a verbal A:B::C:D task: two practice trials followed by 20 test trials (10 Complex and 10 Simple, presented in random order).

Each trial was composed of eight words written in black ink on a white background, corresponding to the A, B, and C terms of the analogical problem, and five potential solutions. The solution set was composed of the target (T), two distractors semantically related to C (SemDis), and two unrelated distractors (UnDis). Each word was presented in a black frame (220×220 pixels). The A, B, and C terms were presented in a row at the top of the screen along with an empty black frame (for the answer), and the five words composing the solution set were displayed in a row at the bottom of the screen (Fig. 1).

Fig. 1. Example of the display used in Experiment 1.

Twelve university students assessed trial complexity prior to the start of the experiment. They were asked to solve the different problems and to evaluate the difficulty/complexity of each problem on a 1–7 scale. Complex trials were rated as significantly more difficult (M = 3.9; range 3.5–4.6) than Simple trials (M = 1.2, range 1.1–1.3; related-samples t-test: t(22) = 23.2, p < .001, η²p = 0.961).

The task was presented on a Tobii T120 eye‐tracker (17″ TFT monitor, resolution: 1024×768) with an E‐Prime (version 2.8.0.22) experiment embedded in a Tobii Studio (version 2.1.12) procedure to record participants’ eye movements.

5.3. Procedure

Test sessions took place in a dedicated soundproof room. Each participant was tested individually. The distance between each participant's face and the screen was approximately 70 cm. The task started with eye‐tracking calibration. The participants were then tested in the analogical reasoning task. The eight words constituting an analogy were displayed simultaneously (Fig.  1 ) and participants were given the following instructions during the first training trial: “Here are two words [pointing to A and B]. They go together well. Can you see why these two [A and B] go together?” Once the participant had given a relation, the experimenter confirmed it or corrected it and continued: “OK! Do you see this one [pointing to C]? What you have to do is to find among these five words [pointing to the solution set] the one that goes with this one [C] in the same way as this one [B] goes with this one [A]. So, if these two [A and B] go together because [giving the relation between A and B], which one goes with this one [C] in the same way?” When participants had given an answer, the experimenter asked them to justify their answer and gave corrective feedback when necessary. The test trials followed. Participants received no further instructions or feedback. Eye‐tracking data were recorded when the presentation of the problem started and stopped when an answer was given.

6. Results

Before analyzing the time course of saccades (transitions) between objects, we first checked whether Complex trials were indeed more difficult than Simple trials. The mean rate of correct answers was significantly lower for Complex problems than for Simple problems, t(19) = 4.9, p < .001, η²p = 0.558, with M = 100% and 79.5% correct for Simple and Complex trials, respectively. All errors were choices of a semantic distractor. Complex trials were also solved significantly more slowly than Simple trials, t(19) = 9.92, p < .001, η²p = 0.838, with M = 4794 and 12,748 ms for Simple and Complex trials, respectively.

6.1. Eye‐movement analysis

We rejected trials in which more than 50% of the gaze time was not recorded. Preliminary analyses conducted at the start of this project revealed that, beyond a given percentage, data loss did not affect the overall results: for example, cutoffs of 30% and 80% gave virtually identical (perfectly correlated) results. With this criterion, only two problems were discarded from the entire data set. We focused on two complementary measures of interest: gaze duration and the number of saccades. Gaze duration (or looking time) for AOIs (i.e., gazes toward the items themselves) tells us which items are attended to, and for how long, while solving the problem, and reflects the depth of processing of each item. Saccades between items (switches, or transitions; e.g., when a participant's gaze goes from item A to item C) tell us which items are compared, and can be interpreted as attempts to find a relation between the two items (see Duchowski, 2007). These two measures capture different aspects of the search, insofar as a participant can study an item for a long time without comparing it with other items; gaze duration, unlike transitions, tells us nothing about which items are compared during a trial.

In order to compare Simple and Complex trials, we first focused on AOIs (gazes) and analyzed their distribution throughout the trial, a distribution that is expected to differ in the two types of trials. We then moved to saccades (i.e., "transitions"). In both analyses, we divided each trial into three equal slices (each slice being 1/3 of the total length of the trial) in order to capture differences in the temporal dynamics of Simple and Complex trials; indeed, most studies do not analyze the temporal dynamics of trials. In preliminary studies that later led to Thibaut and French (2016) and French et al. (2017), we started with a finer five-slice analysis, which gave overly complex results (interactions) essentially similar to the ones reported here. The three-slice approach used here allows us to separate early explorations of the semantic space from late explorations, which can be interpreted as decisional. Compared with diachronic analyses, such as scanpath analyses, which aim to find paths participants might systematically follow (see French et al., 2017; Hayes et al., 2011; Le Meur & Baccino, 2013, for reviews and discussion), our analyses remain, to some extent, synchronic even though they focus on three different moments.
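As an illustration, the three-slice division can be sketched as follows. The fixation format used here (an AOI label with start and end timestamps, in ms) is our assumption for the sketch, not the authors' actual data structure:

```python
# Sketch of the three-slice division of a trial, assuming each fixation
# is recorded as (aoi_label, start_ms, end_ms).

def split_into_slices(fixations, n_slices=3):
    """Assign each fixation's duration to time slices of equal length.

    A fixation spanning a slice boundary contributes the overlapping
    portion of its duration to each slice it touches.
    """
    if not fixations:
        return [{} for _ in range(n_slices)]
    trial_start = fixations[0][1]
    trial_end = fixations[-1][2]
    slice_len = (trial_end - trial_start) / n_slices
    slices = [{} for _ in range(n_slices)]
    for aoi, start, end in fixations:
        for i in range(n_slices):
            lo = trial_start + i * slice_len
            hi = lo + slice_len
            overlap = max(0.0, min(end, hi) - max(start, lo))
            if overlap > 0:
                slices[i][aoi] = slices[i].get(aoi, 0.0) + overlap
    return slices

fixs = [("A", 0, 900), ("B", 900, 1500), ("C", 1500, 2400), ("T", 2400, 3000)]
print(split_into_slices(fixs))
```

The same per-slice durations can then feed either the gaze-duration or the transition analyses described in the following sections.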

6.1.1. Gaze duration analysis (AOIs)

In this analysis, we focused on the proportion of time spent on each AOI depending on the complexity of the problem being solved. We analyzed AOI gaze duration and the time course of gazes, splitting trials into three time slices. AOIs were of six types: A, B, C, T, SemDis, and UnDis (see above). We averaged the two unrelated distractors (UnDis), which played the same role in the design; the same was done for the two semantically related distractors (SemDis). We then regrouped these six AOI types into four stimulus classes. A and B, the base domain, were averaged as one data point (A&B). The same holds for C and T(arget) (as C&T), the analogous stimuli in the target domain. The third class consisted of the semantically related distractors (SemDis). The fourth class comprised the unrelated distractors (UnDis), which the models predict should receive little attention, particularly toward the end of the trial. The resulting values were then transformed so that the four classes in a slice would sum to 33.33% (and the three time slices to 100%).
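The normalization step above can be sketched as follows, assuming raw per-slice gaze durations for the four stimulus classes (the numbers are invented):

```python
def normalize_slices(raw_slices):
    """Rescale per-slice gaze durations so that each slice's four
    stimulus classes sum to 100/3 %, and the three slices to 100%.

    raw_slices: list of three dicts mapping class name -> duration (ms).
    """
    target = 100.0 / 3.0
    out = []
    for slice_durations in raw_slices:
        total = sum(slice_durations.values())
        out.append({k: v / total * target for k, v in slice_durations.items()})
    return out

# Invented raw durations for one trial (ms), one dict per time slice.
raw = [{"A&B": 600, "C&T": 200, "SemDis": 150, "UnDis": 50},
       {"A&B": 100, "C&T": 500, "SemDis": 300, "UnDis": 100},
       {"A&B": 50, "C&T": 600, "SemDis": 300, "UnDis": 50}]
norm = normalize_slices(raw)
print(round(sum(norm[0].values()), 2))  # → 33.33
```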

A three-way repeated-measures ANOVA, with Stimulus class (A&B, C&T, SemDis, and UnDis), Difficulty (Simple and Complex), and Slice (first, middle, and last) as within-subject factors, was performed on the proportions of time spent on the four categories of AOIs to assess the temporal dynamics of fixations. The analysis revealed an expected main effect of Stimulus class, F(3, 57) = 22.56, p < .0001, η²p = 0.54, an interaction between Difficulty and Stimulus class, F(3, 57) = 7.1, p < .001, η²p = 0.27, and an interaction between Slice and Stimulus class, F(6, 114) = 100.37, p < .00001, η²p = 0.84.

The most important result was the significant interaction between the three factors, F(6, 114) = 22.43, p < .0001, η²p = 0.54. Tukey HSD tests on individual slices revealed the following pattern (p < .01). In Slice 1, the simple condition had a significantly higher proportion of A&B fixation time, and a lower proportion for both types of distractors (SemDis and UnDis), than the complex condition. In Slice 2, A&B was longer in the complex than in the simple condition. In Slice 3, there was no difference between the simple and the complex condition.

The within-condition analyses showed that in the simple condition, in Slice 1, A&B was looked at significantly longer than C&T and both types of distractors, and C&T longer than both types of distractors. In the complex condition, A&B was longer than the others, but there was no significant difference between C&T and the two distractor types, revealing a flatter distribution of AOIs in this condition. In Slice 2, in the simple condition, C&T and both types of distractors had longer looking times than A&B, showing that participants were done with A&B; SemDis was looked at longer than C&T, suggesting thorough exploration of the semantically related distractors at this stage. In contrast, there was no significant difference between the four types of AOIs in the complex condition (except C&T > SemDis), suggesting a more balanced exploration of the four stimulus types. In Slice 3, the proportions of C&T and SemDis gazes were higher than the proportion of A&B gazes. In addition, C&T was longer than both types of distractors in the simple condition. In the complex condition, C&T was longer than A&B and UnDis, but not than SemDis, suggesting that participants were still struggling with the semantic distractors. This profile is confirmed by the analysis of the transitions (see Appendix A for the list of confidence intervals).

This is a fascinating pattern of interaction since, as Fig. 2 shows, participants have an overall "flatter" search pattern in the complex condition, looking significantly more at both types of distractors at the beginning of the trial than in the simple condition. In the simple condition, the second and third slices show that participants rapidly discarded A&B (i.e., understood the relation between them) and devoted a significantly greater amount of time to C&T and the distractors. This pattern is consistent with the hypothesis that they continue to check alternative solutions (distractors) until the end of the trial, when they finally converge on the analogical solution. In the complex condition, the flatter distribution across the three slices suggests that participants tested and checked all possible solutions against A&B during the entire trial. We speculate that participants explored the entire space of solutions, including irrelevant ones, from the beginning of the trial to its end, more thoroughly than in the simple condition. This is consistent with the idea that participants tested other interpretations of the AOIs when the first analysis of A and B did not lead to an obvious solution.

Fig. 2. Mean percentage of fixation time for each type of stimulus (AOI) in the first, middle, and last slices of Simple and Complex trials (error bars represent SEM). A and B, and C and T, were each collapsed into one score.

6.2. Transitions

We focused on a subset of five sets of transitions that have theoretical meaning (French et al., 2017; Thibaut & French, 2016). When a set included several transitions, the resulting "transition" was the average of its component transitions.

Transitions are written in the following format: AB denotes transitions between A and B, CT between C and T, and so on; each is the average of the saccades in both directions (e.g., AB averages saccades from A to B and from B to A). We created the five following transition types.

‐{AB}, transitions between A and B and vice versa, within the source domain

‐{AC&BT}, an average of AC and BT switches, representing alignments of equivalent stimuli in the two domains, that is, A with C and B with T,

‐{CT}, transitions between C and T, in the target domain

‐{CSemDis, TSemDis} (hereafter C&T_SemDis), the average of CSemDis and TSemDis: transitions from C or T(arget) toward the semantic distractors (SemDis), which Thibaut and French (2016) showed to be important because they indicate how participants reach the solution through comparisons between C, the semantically related distractors, and the target,

‐{CUnDis, TUnDis, SemDisUnDis} (hereafter C&T&SemDis_Undis), the average of the transitions from C, T(arget), and SemDis toward the unrelated distractors (UnDis). This measure aggregates the transitions between the stimuli associated with C and the semantically unrelated distractors. It tests a prediction common to most models, namely that unrelated distractors should be quickly discarded from participants' search space, compared to semantically related stimuli (like C&T_SemDis).

The first three sets of transitions (AB, AC&BT, and CT) are crucial in determining whether participants follow projection‐first, constructive strategies (AB then CT(arget)), or alignment‐first strategies (a large number of AC and BT) or a combination of both, depending on the moment of the trial. The last two transition types, C&T_SemDis and C&T&SemDis_Undis, will tell us when participants are focusing on T and the semantic or unrelated distractors. The four models presented earlier predict that transitions to the unrelated distractors, that is, Undis, should remain rare, especially at the end of the search, because the system converges on the correct solution.
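A minimal sketch of how the five composite transition counts defined above can be derived from a sequence of fixated AOIs (the input format and example sequence are our assumptions for illustration):

```python
from collections import Counter

# The five composite transition types defined in the text; each is a set
# of undirected item-to-item switches whose counts are averaged together.
COMPOSITES = {
    "AB": [frozenset({"A", "B"})],
    "CT": [frozenset({"C", "T"})],
    "AC&BT": [frozenset({"A", "C"}), frozenset({"B", "T"})],
    "C&T_SemDis": [frozenset({"C", "SemDis"}), frozenset({"T", "SemDis"})],
    "C&T&SemDis_Undis": [frozenset({"C", "UnDis"}), frozenset({"T", "UnDis"}),
                         frozenset({"SemDis", "UnDis"})],
}

def composite_transition_counts(aoi_sequence):
    """Count undirected switches between consecutive distinct AOIs,
    then average the component switches of each composite type."""
    pair_counts = Counter(
        frozenset({a, b})
        for a, b in zip(aoi_sequence, aoi_sequence[1:])
        if a != b
    )
    return {name: sum(pair_counts[p] for p in pairs) / len(pairs)
            for name, pairs in COMPOSITES.items()}

# Invented fixation sequence for one trial.
seq = ["A", "B", "A", "B", "C", "T", "SemDis", "C", "UnDis"]
print(composite_transition_counts(seq))
```

Here the AB count is 3.0 (three A-B switches, one component pair), while C&T&SemDis_Undis is 1/3 (one C-UnDis switch averaged over three component pairs).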

We ran a three‐way repeated‐measure ANOVA on the log transformation of the number of transitions, which were not normally distributed, with Transitions (AB, CT, AC&BT, C&T_SemDis, and C&T&SemDis_Undis), time Slice (first, middle, and last), and Condition (Simple and Complex) as within‐subject factors. Results revealed a main effect of complexity, F (1, 19) = 45.01, p < .0001, η 2 p = 0.70; a main effect of slice, F (2, 38) = 27.68, p < .00001, η 2 p = 0.59; a main effect of transitions, F (4, 76) = 211.28, p < .00001, η 2 p = 0.92. It also revealed an interaction between complexity and transitions, F (4, 76) = 12.50, p < .0001, η 2 p = 0.397; slice and transitions, F (8, 152) = 66.44, p < .0001, η 2 p = 0.78. The most important result was the significant interaction between these three factors, Type of Transition, Condition and Slice, F (8, 152) = 12.89, p < .0001, η 2 p = 0.40 (Fig.  3 ).

Fig. 3. Log of the mean number of each type of saccade in each slice in Simple and Complex trials (error bars are SEM).

In the post-hoc comparisons (Tukey HSD), we retained only differences significant at p < .005 (see Appendix A for the confidence intervals per condition).

In Slice 1, the comparison between Simple and Complex analogies revealed higher rates of CT and C&T&SemDis_Undis transitions in the Complex than in the Simple problems (p < .0001). Within-condition comparisons revealed higher rates of AB transitions than of all other transition types for both Simple and Complex analogies (p < .0001). In Complex analogies, there were significantly more CT and, surprisingly, more C&T&SemDis_Undis than AC&BT transitions, and more C&T&SemDis_Undis than C&T_SemDis (p < .0001). The last two results show that transitions involving the unrelated distractors were more numerous than transitions involving stimuli that were key to solving the analogy in Complex trials, again suggesting that participants systematically compared unrelated distractors to C and to related items in these trials.

In Slice 2, there was a significantly higher rate of AB transitions in Complex than in Simple problems. Within-condition analyses in the Simple trials revealed higher rates of AB than of AC&BT and C&T_SemDis (p < .001), surprisingly more C&T&SemDis_Undis transitions than all other types (p < .0001), and more CT than AC&BT; thus, again, AC&BT transitions were lower than the others (p < .0001). In the Complex trials, there were significantly higher rates of AB transitions than of the other transition types, fewer AC&BT than all the other types, and more C&T&SemDis_Undis than C&T_SemDis and CT transitions. Again, C&T&SemDis_Undis remained at a high level in both conditions.

In Slice 3, comparing Complex and Simple trials revealed significantly higher rates of AB and C&T_SemDis transitions (p < .0001) in the Complex trials, which means that, because finding a solution is difficult, participants continued to explore, check, and/or rerepresent the relation between A and B even at the end of the trial. Within-condition comparisons showed that, in Simple trials, there were more C&T&SemDis_Undis than AC&BT, CT, and C&T_SemDis transitions, that is, transitions involving the solution set (p < .0001), and more AB than AC&BT transitions. In Complex trials, there were higher rates of AB transitions than of AC&BT and CT (p < .0001); the latter is compatible with the idea that participants needed to come back to A&B more often in order to assess the consistency of their answers. There were also more C&T&SemDis_Undis than AC&BT and, importantly, CT transitions (p < .0001), and fewer AC&BT transitions than all the other types (p < .0001).

The Tukey HSD post-hoc analysis also showed that, comparing the first and third slices for Simple versus Complex problems, there were significantly more C&T&SemDis_Undis transitions in the third slice than in the first slice in both complexity conditions (p < .0005). This suggests that participants first concentrated on AB and applied the corresponding relation to CT, then progressively checked other potential solutions, even irrelevant ones, which is not predicted by the standard convergence views of analogical reasoning we reviewed in the Introduction. The post-hoc analyses also revealed significantly more C&T_SemDis transitions in the third slice than in the first slice in both complexity conditions (p < .005), suggesting a late check of the semantic distractors in both cases.

7. Intermediate summary

AB transitions initially dominated in both simple and complex trials. Progressively, however, participants studied the solution set together with C. This suggests that more comparisons of the relations were necessary in the complex case. Beyond that, importantly, there were virtually no AC or BT transitions in the three slices, which argues against a strict interpretation of alignment‐first models.

A second result was that comparisons involving unrelated distractors remained frequent at the end of the trial in both difficulty conditions, and more frequent than the relevant C&T_SemDis transitions; interestingly, they were also more frequent at the beginning of the complex trials, suggesting that participants explored these options during the entire complex trial. This is predicted neither by the "parallel-terraced scan" view of analogical reasoning nor by any other type of model we are aware of. The analysis of gazes confirmed these results, with a flatter profile across AOIs in the complex condition, suggesting a more balanced approach to the AOIs across time slices than in the simple condition.

7.1. Discriminating between Simple and Complex conditions: SVM+LOOCV

As mentioned in the Introduction, our purpose was to identify small sets of transitions that define a particular search-space exploration strategy. By this, we mean that the numbers of each transition type in these small sets defined a particular search strategy, and our hope was that we could predict, from the observed strategy, whether the problem being solved was Complex or Simple, or whether it would be solved correctly or incorrectly (French et al., 2017, used a similar methodology to predict the type of problem adults and children were doing, or whether the problem would be answered correctly or erroneously). SVM allows us to find small subsets of transitions that predict, with high probability, whether the problem being solved was Complex or Simple, or would be solved correctly or incorrectly.

We hypothesized that small subsets of the transitions used during a problem were indicative of a search strategy and could be used to predict whether a participant was doing a Complex or a Simple problem. We selected small subsets from the following set of 13 transitions: {AB, AC, BT, BC, CT, AT, CSemDis, TSemDis, A&B_UnDis, A&B_SemDis, CUnDis, TUnDis, SemDisUnDis}.

The only grouped transitions were A&B_SemDis and A&B_UnDis, that is, ASemDis averaged with BSemDis, and AUnDis averaged with BUnDis, because A and B have the same status in each pair. In contrast with our behavioral analysis, we kept CSemDis, TSemDis, and the transitions with UnDis separate, because the predictive power of each transition type might differ. SVM algorithms also allow us to examine a relatively large number of transitions, and we took advantage of this.

We used the normalized number (by trial) of transitions as input to our SVM‐LOOCV classifier. Note that relatively less frequent transitions may, in principle, contribute to distinguishing conditions if they appear in one condition and not in another condition and, on the other hand, frequent transitions in both conditions may not contribute to differentiate them.

We reasoned that small subsets of these transitions (three or fewer) provide more search-strategy information than large subsets. If we took a large enough set of transitions, the SVM algorithm would almost certainly be able to discriminate Complex from Simple problems, but too large a number of transition types tells us little about the search strategy a person used to solve the problem. Hence, we considered the small subsets of three or fewer transitions that had the highest discriminative power.

We predicted that participants would have differing numbers of these transitions or subsets of transitions depending on the problem complexity and that these differences would predict whether the problem under consideration was Simple or Complex, or in a second analysis, whether it was an error or a correct trial.

In order to do this, we coupled the SVM with an LOOCV (Geisser, 1975 ; Miller, 1974 ; Stone, 1974 ; for a review, Arlot & Celisse, 2010 ), which, in the case of our analysis of transition profiles, worked as follows. We selected one of the N problems solved by the participants and set that problem aside. Then, for each of the remaining N– 1 problems, which we designated as the SVM training set, we considered various sets of transition profiles such as {BT, TSemDis, CSemDis}. We then counted the number of each of these transitions made while solving the problem, averaged over all participants. We trained the SVM using these vectors for each problem in the SVM training set until it learned to correctly classify each of the N– 1 problems in the training set as “Simple” or “Complex” (or as “error” or “correct”) in our second analysis. We then gave the SVM the problem that it had not seen and saw if it was capable of classifying it correctly as either “Simple” or “Complex,” based on how the other N –1 problems were classified. We applied this leave‐one‐out training‐and‐testing procedure to all of the problems. We applied this reasoning to the first time slice and the third time slice, corresponding to the beginning and the end of the search. We arbitrarily set at 0.75 the level of “good predictability” (classifying three trials out of four in the correct category, either Complex or Simple). When considering pairs or triplets of transitions, we kept only those transitions that increased the level of predictability of smaller subsets of transitions. For example, if AB alone had a predictive power of 0.75, we would consider only pairs of transitions involving AB (e.g., AB CT) if their predictive power was higher than AB alone (e.g., 0.80).
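The leave-one-out subset search can be sketched as follows. To keep the sketch self-contained, a nearest-centroid rule stands in for the SVM classifier, and the per-problem transition counts are synthetic; only the overall logic (train on N−1 problems, test on the held-out one, keep subsets of at most three transitions reaching the 0.75 criterion) mirrors the procedure described above:

```python
from itertools import combinations
import random

# Invented transition names and synthetic data for illustration only.
TRANSITIONS = ["AB", "CT", "CSemDis", "SemDisUnDis"]

random.seed(0)
# 20 synthetic problems x 4 transition counts; "Complex" problems
# (label 1) get systematically more AB and SemDisUnDis transitions.
X, y = [], []
for i in range(20):
    label = 0 if i < 10 else 1          # 0 = Simple, 1 = Complex
    row = [random.gauss(0, 1) for _ in TRANSITIONS]
    if label == 1:
        row[0] += 3.0
        row[3] += 3.0
    X.append(row)
    y.append(label)

def loocv_accuracy(feat_idx):
    """Leave one problem out, train on the rest, test on the held-out one."""
    correct = 0
    for held in range(len(X)):
        centroids = {}
        for label in (0, 1):
            rows = [X[i] for i in range(len(X)) if i != held and y[i] == label]
            centroids[label] = [sum(r[j] for r in rows) / len(rows)
                                for j in feat_idx]
        test = [X[held][j] for j in feat_idx]
        pred = min(centroids,
                   key=lambda lab: sum((a - b) ** 2
                                       for a, b in zip(test, centroids[lab])))
        correct += (pred == y[held])
    return correct / len(X)

# Keep only subsets of up to three transitions reaching the 0.75 criterion.
good = {}
for k in (1, 2, 3):
    for idx in combinations(range(len(TRANSITIONS)), k):
        acc = loocv_accuracy(list(idx))
        if acc >= 0.75:
            good[tuple(TRANSITIONS[j] for j in idx)] = round(acc, 2)

print(sorted(good.items()))
```

In the actual analyses, an SVM would replace the centroid rule, and the feature vectors would be the normalized per-problem transition counts described above.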

For the Simple-Complex discrimination during Slice 1, the SVM-LOOCV analysis gave the prediction accuracies shown in Table 1. No single transition was above 0.75. AB and A&B_UnDis were at 0.66 and 0.63, respectively, confirming that transitions involving A and B were (relatively) important at the beginning of the trial (AC and BT were at 0 and 0.13, respectively).

Transition sets and individual transition frequencies for the first slice of Experiment 1

Slice 1, Complex vs. Simple discrimination

  One transition        Two transitions           Three transitions
  Rate  Set             Rate  Set                 Rate  Set
  0.66  AB              0.90  AD/BD, AN/BN        0.81  AB, TD, DN
  0.63  AN/BN           0.77  AN/BN, DN           0.81  AB, CN, DN
                        0.75  AB, AD/BD           0.80  AT, AD/BD, TN
                                                  0.78  AB, CT, DN
                                                  0.78  AB, TD, TN
                                                  0.78  AB, AT, AD/BD
                                                  0.78  CT, BC, TN
                                                  0.77  BT, CT, CD
                                                  0.77  AT, AN/BN, TN
                                                  0.75  AB, AC, TD
                                                  0.75  AB, BT, DN
                                                  0.75  AB, CD, TD
                                                  0.75  AB, CD, DN
                                                  0.75  AB, CN, TN

Individual transition frequencies, two-transition sets: AB 1; AD/BD 2; AN/BN 2; DN 1.

Individual transition frequencies, three-transition sets: AB 10; DN 5; TN 5; TD 4; CT 3; AT 3; CD 3; CN 2; BT 2; AD/BD 2; AN/BN 2; BC 1; AC 0.

Note: D = SemDis; N = UnDis. AN/BN and AD/BD designate the average number of AN and BN transitions and of AD and BD transitions, respectively.

Prediction accuracy increased for sets of two transitions and reached our criterion for three pairs: {AB, A&B_SemDis} (0.75), {A&B_UnDis, SemDis_UnDis} (0.77), and {A&B_SemDis, A&B_UnDis} (0.90). This suggests that transitions involving A&B and the distractors, both semantically related and unrelated, significantly improve prediction accuracy. Once again, this is compatible with the idea that the search space was broader from the start in the case of complex analogies.

When we added a third transition, we found 14 triplets that produced correct Complex-Simple classification at 0.75 or above. AB was involved in 10 of them. Interestingly, next on the list were five triplets containing SemDis_UnDis (DN in Table 1) and five containing T_UnDis (TN in Table 1), confirming that, from the start, the Complex condition was characterized by a broader search space, that is, one involving more items, including semantically unrelated ones. This strongly suggests that the difference between Simple and Complex trials is a matter of finding the AB relation and realizing that neither type of distractor (SemDis and UnDis; D and N in Table 1) is the answer (T). At this stage, AC&BT had no discriminative power.

For Slice 3, the most discriminative single transitions were AB and CSemDis, both at 0.67. Among the 11 pairs of transitions above 0.75, AB appeared in 4 and CSemDis in 8, meaning that CSemDis played a central role in reaching a solution at the end of the trial. Adding a third transition gave a total of 22 triplets with discrimination accuracy above 0.75. Once again, AB was involved in 13 of the 22 triplets. The other important transitions were CSemDis (14) and CUnDis (9). This result suggests that, by the end of the trial, Complex trials are characterized by transitions involving C on the one hand and SemDis and UnDis on the other. This confirms and extends previous results (French et al., 2017; Thibaut, French, Missault, Gérard, & Glady, 2011), namely, that complex problems involve more comparisons between C and both types of distractors, and a continued focus on AB transitions. Together with the seven A&B_UnDis transitions (shown as AN/BN in Table 2), this confirms that participants saccaded more to unrelated distractors in the complex problems, suggesting that right up to the end of the trial they found it difficult to decide whether the unrelated distractors were a solution. The presence of AB, and of transitions between A&B and other AOIs, at the end of the trial is compatible with the idea that participants return to the AB pair more frequently in the Complex case, thereafter comparing C with the Target, SemDis, and UnDis. As far as we know, this is not predicted by any current model of analogy-making: current models predict that, as participants get closer to answer selection, the number of saccades to semantically unrelated distractors should fall essentially to zero, since they have, presumably, made their decision and no longer need to saccade to semantically unrelated items. This is clearly not the case.

Transition sets and individual transition frequencies for the third slice of Experiment 1

  One transition        Two transitions           Three transitions
  Rate  Set             Rate  Set                 Rate  Set
  0.67  AB              0.88  BT, CD              0.94  BT, CD, CN
  0.67  CD              0.81  CD, DN              0.88  AB, CT, CD
                        0.80  AB, CD              0.87  AB, CD, CN
                        0.77  AB, BC              0.84  AB, CT, AN/BN
                        0.77  CT, CD              0.84  CD, CN, DN
                        0.77  CD, BC              0.84  AB, AC, CD
                        0.77  CD, AT              0.83  AB, CD, AN/BN
                        0.77  CD, AD/BD           0.81  AB, BT, CT
                        0.75  AB, BT              0.81  AB, CT, CN
                        0.75  AB, CT              0.81  CD, CN, TN
                        0.75  CD, TN              0.81  CD, BC, AD/BD
                                                  0.78  AB, AC, BT
                                                  0.78  AB, AC, CT
                                                  0.78  AB, BT, TD
                                                  0.78  AB, BT, AN/BN
                                                  0.78  AB, BT, CN
                                                  0.78  CD, TD, AN/BN
                                                  0.77  CD, AD/BD, CN
                                                  0.77  AC, CD, AN/BN
                                                  0.77  CD, AN/BN, CN
                                                  0.75  AC, CD, TD
                                                  0.75  CD, TD, CN

Individual transition frequencies, two-transition sets: AB 4; CD 8; BT 2; CT 2; BC 2; AT 1; AD/BD 1; DN 1; TN 1; AC 0.

Individual transition frequencies, three-transition sets: CD 14; AB 13; CN 9; AN/BN 7; BT 6; AC 5; TD 4; CT 4; AD/BD 2; DN 1; TN 1; BC 1; AT 0.

This confirms the previous (behavioral) analysis and suggests that, for complex problems, which require the space of possible solutions to be explored more thoroughly, participants look at all potential solutions, including the unrelated distractors, throughout the course of the problem.

Even though we do not report the behavioral analyses, we ran an identical SVM+LOOCV analysis to see which strategies led to correct or incorrect answers, in order to parallel the analyses provided by Thibaut and French (2016) and French et al. (2017). Finding that these two types of trials had different profiles in adults would be interesting, as French et al. (2017) focused on children only. The idea is to find out whether correct answers have their own signature, compared to errors, and whether an error profile can be detected from the start.

In Slice 1, CUnDis was the best predictor (0.63), followed by CSemDis (0.62), CT (0.60), and A&B_SemDis (0.60), showing that transitions involving both types of distractors contributed to distinguishing Error trials from Correct trials (all Complex, since there were no errors in the Simple condition). No pair reached our criterion of 75%, and only one triplet reached it: {AT, BT, A&B_UnDis} (0.80), that is, transitions involving A and B with the target and the distractors. Thus, relating A and B to the solution and comparing them with the unrelated distractors was a crucial factor distinguishing the two types of trials early on.

In Slice 3, CSemDis was a good single predictor (78% accuracy), followed by A&B_SemDis (0.65) and TUnDis (0.62). For pairs of transitions, CSemDis appeared in nine of the 15 pairs, confirming that controlling SemDis is a major feature of correct answers; all other transitions were evenly distributed and less frequent (three or fewer). Among the 31 triplets above 0.75, 29 included AB transitions (AB, A&B_SemDis, or A&B_UnDis), and 38 transitions contained SemDis. This again argues for the importance of a correct encoding of AB and a careful checking of SemDis against other options. Importantly, 13 transitions containing UnDis were also present, confirming the role of UnDis in reaching a decision. Thus, by the end of the trial, including AB and SemDis in the decisional process seems to be a major feature distinguishing correct from error trials.

This result parallels what we observed when predicting whether a problem was Simple or Complex: distinguishing between correct and error trials relies heavily on AB and on the exploration of transitions to semantically related, but also to unrelated, distractors until the end of the problem. These results also confirm what Thibaut and French (2016) found with children: error and correct trials differ in their signatures.

8. Discussion

Our study extends previous eye-tracking studies by using analogies involving words rather than images. Bethell-Fox et al. (1984) used analogies defined around perceptual dimensions, whereas Gordon and Moser (2007), Thibaut and French (2016), and Vendetti, Starr, Johnson, Modavi, and Bunge (2017; Starr et al., 2018) used pictures of objects or scenes.

Our data confirmed the projection-first strategy: there were significantly higher rates of AB and CT saccades than of AC and BT saccades in participants' visual search patterns, in both Complex and Simple conditions. Thus, participants mostly infer the relation in the A:B pair and apply it to the C-solution set (CT saccades). Neither Simple nor Complex problems elicited a significant number of AC or BT saccades, that is, alignments, nor any sign of relational priming as postulated by Leech et al. (2008).

The second purpose was to assess the impact of trial difficulty. In the second and third slices, participants tried multiple hypotheses in order to make sense of the analogies, or tried to rerepresent the relation between A and B after testing their initial hypotheses more often in the Complex condition than in the Simple one. This is consistent with the idea (Bethell‐Fox et al., 1984 ) that at this stage no response has yet been eliminated, not even the nonsemantically related distractors (UnDis). The lower proportion of CT transitions in complex trials makes sense since participants are checking all of the items to make sure that, indeed, T is the correct solution. By contrast, for simple trials, response uncertainty is low and it is, therefore, not necessary to check the Unrelated Distractors.

However, the most unexpected result of the present work was the significant number of transitions toward unrelated distractors, together with semantic distractors, which, as shown by the SVM analyses, also had high discriminating power when comparing Simple and Complex problems. In the Complex condition, the number of these transitions was greater than the mean number of CT transitions alone, the mean number of AC&BT transitions, and even the mean number of transitions from C or T to the semantically related distractors. This is predicted by none of the models of analogical reasoning that we are aware of. It makes sense, however, if one considers that deciding that something is unrelated is especially hard in a Complex case, or when confidence in the correct target relation is lacking. These checks of unrelated distractors were nonetheless expected at the beginning of the trial rather than at its end; in short, unrelated information should have been discarded earlier on. The analysis of the AOIs revealed a flatter gaze profile in the Complex condition across time slices, showing that participants distributed their looking times more evenly across stimulus types when trials were more difficult.

Our main conclusion is that the task difficulty influenced the time course of the trial. Even though Complex and Simple trials resemble one another (e.g., same AB transitions at the beginning), they also differed in specific ways. In general, the time course of our verbal analogies was similar to previous results (e.g., French et al., 2017 ; Thibaut & French, 2016 ). Complex trials generated more exploration of the distractors and of the A‐B pair, particularly at the end of the trial, which was unexpected.

8.1. Experiment 2: Scene analogies. Comparison between two‐relation and single‐relation problems

The present experiment is an extension of the Scene analogy used by Richland et al. ( 2006 ). We compared two types of problems, one with two relations (Complex) and the other with a single relation (Simple). As in Experiment 1, the main focus is on problem complexity, which can induce novel strategies or at least can lead to search‐strategy adaptations in different contexts. This implies focusing on certain questions, such as when AC and BT alignments take place, whether distracting or irrelevant information is looked at and rapidly discarded or whether it is processed throughout the trial, and so on. It can be argued that for more complex problems, it is harder to establish the items that play analogous roles in order to arrive at a solution. Our hypotheses are similar to the ones in Experiment 1.

8.2. Participants

Participants were 25 students at the University of Burgundy (M = 21.6 years; SD = 2.2; range = 19–26 years). They participated voluntarily or for course credit and were unaware of the experimental rationale. All participants had normal or corrected‐to‐normal vision; in the latter case, we checked that glasses did not interfere with data collection.

8.3. Materials

The task consisted of 14 trials (two training trials and 12 test trials: six Simple problems and six Complex problems). The scenes were the same as those used in Richland et al. (2006), with their list of stimuli slightly adapted for the present experiment. We used only Richland et al.'s “distractor” condition, which means that we did not have a “no‐distractor” condition. All trials were composed of two scenes, a base scene in the upper panel of the figure and a target scene in the lower panel (Fig. 4). A distractor was chosen from the base scene and, in a slightly modified form, was added to the target scene. This distractor in the target scene was both visually and semantically related to one item of the relation in the base picture. For example, there was a cat in the base scene (i.e., a cat is chasing a mouse) and there was also a cat in the foreground of the target scene, which depicted “a boy chasing a girl.” There were two levels of complexity: the first, called “Simple,” consisted of a single relation (Fig. 4a), and the second, called “Complex,” consisted of two relations (Fig. 4b). In the six Simple problems, both scenes depicted a single interaction between two entities (e.g., a cat and a mouse), whereas, in the six Complex problems, both scenes depicted an interaction between three entities (defining two relations, e.g., a dog chasing a cat that was chasing a mouse; see Fig. 4). Participants had to determine the item in the lower drawing that best corresponded to the item indicated with an arrow in the upper drawing. The order of presentation of the test trials was random.

Each trial consisted of two scenes (501 × 376 pixels each), each containing either five black‐and‐white (BW) line drawings framed by a black rectangle in the single‐relation problems, or six BW line drawings in the two‐relation problems. The scenes were displayed on a Tobii T120 eye‐tracker (120 Hz) with a 1024 × 768 screen resolution, using E‐Prime software (version 2.8.0.22) embedded in a Tobii Studio (version 2.1.12) procedure to record participants' gazes.

These stimuli were labeled (for the purposes of data analysis, not in the experiment itself) A, B, C, T(arget), and Dis (the Distractor, which was perceptually and semantically similar to the object designated with an arrow in the upper scene). In the single‐relation condition, the mouse and the cat were A and B, respectively, as shown in Fig. 4a, and the dog played no role in the targeted “chasing” relation. In the lower scene, the woman stood still on the left, playing no role in the relation between the boy and the girl. In the two‐relation scene, two stimuli played the role of A in Fig. 4b––namely, the mouse and the dog––and two stimuli played the role of C (the woman and the girl). The cat in the foreground of both Figs. 4a and 4b was the distractor (Dis).

Fig. 4a depicts a one‐relation (Simple) problem and Fig. 4b depicts a two‐relation (Complex) problem, with the base scene in the upper panel and the target scene in the lower panel. The arrow in the upper scene points to a stimulus (the cat) and the participants must find the relational equivalent of the cat in the lower scene. By convention (see text), the nontargeted objects in the upper scene were called A (the dog and the mouse) and the designated object was called B. In the lower panel, C designated the woman and the girl, and the target T was the boy. The cat in the lower panel is the distractor (Dis). Note the differences with the Simple condition (Fig. 4a), in which the stimuli are the same, except that neither the dog nor the woman participates in the action (the dog is in the doghouse and the woman is standing still to the left of the scene).

8.4. Procedure

Test sessions took place in a quiet experimental room in our laboratory. Each participant was tested individually. The distance between the participants’ face and the screen was approximately 70 cm. After the eye‐tracker was calibrated, participants were tested in the Scene analogical reasoning task. Participants were first shown a practice trial. When they had given an answer, the experimenter asked them to justify their answer and provided feedback. In the event of an incorrect justification, the trial was explained in terms of the relations linking A and B on one side, and C and T on the other side. For the test trials, participants received no further feedback or information about whether they had replied correctly or not. Eye‐tracking data were recorded when the presentation of the problem started and stopped when an answer was given.

9. Results

Before analyzing the time course of eye movements, we checked whether Complex trials were indeed more difficult than Simple trials. The mean number of correct answers was significantly lower for Complex than for Simple scenes: t(24) = 2.7, p = .015, η²p = 0.22, with M = 83.3% and 73.3% correct for Simple and Complex scenes, respectively. RT analyses showed that responses to Complex scenes were significantly slower than to Simple scenes: t(24) = 7.87, p < .01, η²p = 0.25, with M = 4775 and 5925 ms for Simple and Complex scenes, respectively. Thus, the two conditions do indeed differ in difficulty, which, as intended, raises the question of differences in search strategies.
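
The accuracy comparison above is a paired (within‐subject) t‐test across participants. A minimal sketch of the underlying computation, with hypothetical per‐participant accuracy scores (the real analysis used all 25 participants):

```python
from math import sqrt

def paired_t(x, y):
    """Paired t-statistic for two within-subject conditions."""
    d = [a - b for a, b in zip(x, y)]          # per-participant differences
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((di - mean_d) ** 2 for di in d) / (n - 1)  # unbiased variance
    return mean_d / sqrt(var_d / n)            # t with n - 1 degrees of freedom

# Hypothetical proportion-correct scores for four participants
simple = [0.9, 0.8, 1.0, 0.9]
complex_ = [0.7, 0.6, 0.9, 0.6]
t = paired_t(simple, complex_)
# |t| is then compared against the critical value for df = n - 1
```

The resulting t is evaluated against a Student t distribution with n − 1 degrees of freedom (here df = 24 in the actual study).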

9.1. Eye‐movement analysis

No trials were rejected because of insufficient gaze time (i.e., more than 50% of the gaze time not recorded). As in the first experiment, the dependent variables were the percentage of total looking time and the number of saccades. As in Experiment 1, we first analyzed fixation durations (gazes) on each stimulus type (AOI) and, second, in order to compare the single‐relation and two‐relation trials, we analyzed the number of transitions and focused on the distribution of key saccades throughout the trial. As before, we divided all trials into three equal time slices in order to capture differences between the two conditions' temporal dynamics.
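
The division of each trial into three equal time slices can be sketched as follows (a pure‐Python illustration; the fixation record format is a hypothetical simplification of the eye‐tracker output):

```python
def slice_fixations(fixations, n_slices=3):
    """Assign each (time_ms, aoi) fixation to one of n equal time slices.

    `fixations` is a list of (time_ms, aoi) tuples for one trial; the trial
    runs from the first to the last timestamp.
    """
    t0 = fixations[0][0]
    t1 = fixations[-1][0]
    width = (t1 - t0) / n_slices
    slices = [[] for _ in range(n_slices)]
    for t, aoi in fixations:
        idx = min(int((t - t0) / width), n_slices - 1)  # clamp the trial endpoint
        slices[idx].append(aoi)
    return slices

# Hypothetical trial: timestamps in ms, AOI labels as in the text
trial = [(0, "A"), (400, "B"), (1500, "C"), (2400, "T"), (3000, "T")]
first, middle, last = slice_fixations(trial)
```

Slicing by elapsed time within each trial, rather than by fixed durations, keeps Simple and Complex trials comparable even though Complex trials last longer.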

9.1.1. Gaze (AOI) analysis

As in Experiment 1, we performed an analysis of fixation durations on five AOIs (A, B, C, T, and Dis, see below). We defined the AOIs as A for the nontargeted item of the relation (e.g., in Fig. 4a and b, A is the mouse), and B for the targeted item (the stimulus pointed to by the arrow). In the two‐relation (Complex) case, we computed the mean for the two nontargeted objects under A (e.g., in the example above, the mouse and the dog). In the lower scene, C was the nontargeted item involved in the relation (e.g., the girl in Fig. 4a). In the two‐relation case, C was the aggregation of the two nontargeted stimuli in the relation (e.g., the woman and the girl in Fig. 4b). T(arget) was the correct relational answer. The distractor (Dis) was the same stimulus as stimulus B from the upper scene (i.e., the cat). This distractor is an important stimulus to check, since it was meant to be perceptually and semantically related to the designated stimulus in the base (i.e., upper) scene and, thus, to attract participants' attention in the target (lower) scene. Note that it is a distractor precisely because it is perceptually and semantically related to the equivalent stimulus in the base scene. For each participant, the time spent on a given AOI (e.g., A) was defined as the mean proportion of looking time spent on this AOI across the six trials defining a condition.
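
The AOI measure described above (proportion of total looking time per stimulus type within a trial) can be sketched as follows; the fixation durations here are hypothetical:

```python
from collections import defaultdict

def looking_time_proportions(fixations):
    """Proportion of total fixation time spent on each AOI.

    `fixations` is a list of (aoi, duration_ms) tuples for one trial.
    """
    totals = defaultdict(float)
    for aoi, dur in fixations:
        totals[aoi] += dur
    grand = sum(totals.values())
    return {aoi: t / grand for aoi, t in totals.items()}

# Hypothetical trial with the five AOI labels used in the text
trial = [("A", 200), ("B", 600), ("C", 200), ("T", 800), ("Dis", 200)]
props = looking_time_proportions(trial)
```

Per‐participant condition scores are then the mean of these proportions across the six trials of that condition, as stated above.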

9.1.1.1. Fixations

We focused on the time course of fixations toward A and B compared to C and T, and on the difference between Complex and Simple trials in each time slice. A three‐way repeated‐measures ANOVA, with Type of Stimulus (A, B, C, T, and Dis), Complexity (one‐relation and two‐relation), and Slice (first, middle, and last) as within‐subject factors, was performed on the mean percentage of gazes for the five AOIs in order to assess the temporal dynamics of rates of fixations. The analysis revealed a main effect of AOI, F(4, 92) = 122.3, p < .0001, η²p = 0.84; a main effect of Complexity, F(1, 23) = 3918, p < .0001, η²p = 0.99; an interaction between AOI and Slice, F(8, 184) = 30.63, p < .0001, η²p = 0.57; and an interaction between AOI and Complexity, F(4, 92) = 3.70, p < .01, η²p = 0.14, which was the most interesting result. The triple interaction was not significant (p > .1). As Fig. 5 shows for the AOI × Slice interaction, participants mostly gazed at B at the beginning of the trial, with fewer gazes at the Target, followed by the reverse pattern later on. Note, interestingly, that A and C received fewer gazes. This was confirmed by Tukey HSD comparisons, showing that there were more gazes toward B in slice 1 than in slices 2 and 3, and more gazes toward T in slice 3 than in slices 1 and 2, and in slice 2 than in slice 1. The intra‐slice pattern was similar to the pattern observed in the following interaction, so we will not repeat it.
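
The partial eta squared values reported with these ANOVAs can be recovered directly from each F statistic and its degrees of freedom via η²p = (F · df_effect) / (F · df_effect + df_error); for instance, the AOI main effect F(4, 92) = 122.3 yields η²p ≈ 0.84, matching the reported value:

```python
def partial_eta_squared(f, df_effect, df_error):
    """Partial eta squared recovered from an F statistic and its dfs."""
    return (f * df_effect) / (f * df_effect + df_error)

# Main effect of AOI reported above: F(4, 92) = 122.3
eta_aoi = partial_eta_squared(122.3, 4, 92)
```

This is a standard algebraic identity (η²p = SS_effect / (SS_effect + SS_error) rewritten in terms of F), useful for checking reported effect sizes.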

Fig. 5. Time slice × AOI interaction. Participants gazed mostly at B in the first slice, with this pattern reversing in favor of the Target by the end of the trial. A, C, and the distractors were focused on less frequently.

As for the Complexity × AOI interaction, Fig. 6 shows that both conditions had a similar gaze profile: B and T received the majority of gazes, and all the other stimuli far fewer. This was confirmed by an a posteriori Tukey HSD, which showed significantly fewer gazes toward A, C, and Dis than toward B and T in both the Simple and the Complex conditions, and significantly more looks toward the Target than toward all the other stimuli in both conditions. In the Complex condition, there were significantly fewer gazes toward the distractor than toward the other stimuli. The comparison between the Complex and the Simple conditions proved nonsignificant for all five stimulus types. In sum, gazes revealed that B and T overwhelmingly dominated attention, with a smooth transition from B to T from the first to the third slice, and with C receiving intermediate attention (recall that the scenes came from a corpus initially targeted at children, and were thus of moderate difficulty).

Fig. 6. AOI × Complexity interaction.

9.1.2. Transitions

In this analysis, we were primarily interested in sets of transitions that revealed strategy differences (if any) in solving Simple versus Complex scene problems. Six transition types were considered. Four of them––AB, CT, AC&BT, and C&T_Dis––were also used in Experiment 1, with AC&BT defined as the average number of AC and BT transitions. These two transitions are predicted by an alignment‐first model because they involve alignments between equivalent stimuli in the two scenes. C&T_Dis is the average number of transitions from C and from T toward the semantic‐perceptual distractor Dis (the cat in the foreground), with T being the relational Target. We also introduced two new types of transition. The first, AT&BC, was the average of AT and BC transitions. It plays the role of a control transition because, according to existing models, it should have no central role in solving the analogy and, hence, should be less frequent than AC&BT transitions, which are key transitions in some of the models. The second new type of transition was BDis, that is, transitions between B and Dis. Richland et al. (2006) have shown that these are important because they link B, the stimulus pointed to in the base (upper) scene (e.g., a cat), to the equivalent stimulus in the target scene (e.g., another, similar cat); choosing Dis was the most common mistake children made in their experiment. A three‐way repeated‐measures ANOVA, with Transitions (AB, CT, AC&BT, AT&BC, BDis, and C&T_Dis), Complexity (one‐relation and two‐relation), and Slice (first, middle, and last) as within‐subject factors, was performed on the log of the number of transitions (which were not normally distributed). The analysis revealed a main effect of Complexity, F(1, 23) = 13.72, p < .001, η²p = 0.37, and of Transition type, F(5, 115) = 39.16, p < .0001, η²p = 0.63.
There were interactions between Transitions and Complexity, F(5, 115) = 8.96, p < .0001, η²p = 0.28, and between Transitions and Slice, F(10, 230) = 3.02, p < .005, η²p = 0.121, the former interaction being the more interesting.
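
Counting transition types from a trial's fixation sequence can be sketched as follows; here a transition is a saccade between two distinct AOIs, direction is ignored (so AB counts both A→B and B→A), and the example sequence is hypothetical:

```python
from collections import Counter

def count_transitions(aoi_sequence):
    """Count undirected transitions between consecutive distinct AOIs."""
    counts = Counter()
    for prev, curr in zip(aoi_sequence, aoi_sequence[1:]):
        if prev != curr:  # ignore refixations on the same AOI
            counts["".join(sorted((prev, curr)))] += 1
    return counts

# Hypothetical AOI sequence for one trial
seq = ["A", "B", "A", "B", "C", "T", "C", "T", "Dis", "T"]
counts = count_transitions(seq)
```

Per‐condition counts of this kind were then log‐transformed before the ANOVA, as described above, because the raw counts were not normally distributed.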

The Complexity × Transition‐type interaction is interesting (Fig. 7), as it shows that there are more transitions in Complex scenes for AB, AT (i.e., A‐Target), BDis, and CT. Once again, we found no evidence of AC or BT alignments. This is particularly interesting because there is considerable comparison between A and the Target, yet participants did not compare B with T, which is unexpected. BDis transitions make sense because B is the cat in the upper scene and Dis the semantically similar “cat” in the lower scene; these are presumably perceived as the “same” stimulus.

Fig. 7. Interaction between Complexity and Transitions. See text for transition names. (Error bars are SEM.)

The typical transitions, again, seem to be AB and then CT. The surprising feature is that the passage from the upper scene to the lower scene occurs via a transition from A (in the upper scene) to Target (in the lower scene). This was observed in both Complex and Simple conditions, even though there were more AT transitions in the two‐relation Complex condition. Appendix A provides the confidence intervals corresponding to this interaction.

This was confirmed by a Tukey HSD post‐hoc. As in Experiment 1, we only retained p values under .005. Results showed that there were significantly more AB and AT&BC in the Complex condition compared to the Simple condition ( p < .0005).

Comparisons within conditions showed that, in the Simple condition, there were significantly more AB, CT, and BDis transitions than AC&BT, AT&BC, and C&T_Dis transitions and, surprisingly, more AT&BC than AC&BT (p < .005). In the Complex condition, there were significantly more AB and CT transitions than AC&BT and C&T_Dis, and more AT&BC and BDis than AC&BT and C&T_Dis (p < .005). Overall, Simple scenes required fewer comparisons to establish the relation between A and B as the key to solving the problem. Surprisingly, the two scenes were aligned along nonanalogous stimuli––namely, A with T and B with C. Both conditions were organized around AB, CT, and BDis transitions overall. As in the first experiment, the very small number of AC and BT transitions does not fit the predictions of alignment‐first models, and the presence of AT&BC transitions is not predicted by any model we are aware of.

The Slice × Transition Interaction (Fig.  8 ) shows that the relevant transitions are produced during all three slices, except that the order slightly differs from one transition to another. A Tukey HSD post‐hoc analysis showed, for slice 1, there were significantly more AB and CT transitions than AC&BT, C&T_Dis, and fewer AB than BDis ( p < .005). Importantly, as mentioned above, there were few AC and BT transitions, which is difficult for an alignment‐first model to explain, and few CDis and TDis transitions, which could be interpreted as meaning that participants saccaded only rarely to the Distractor item within the target scene. In contrast, BDis transitions show that the targeted item in the base and its perceptual/semantic counterpart were extensively compared.

Fig. 8. Interaction between Transitions and Slice. (Error bars are SEM.)

Fig. 8 shows that the second and third time slices had similar patterns of transitions, with significantly fewer AC&BT and C&T_Dis than all the others, and fewer CT than BDis. For slice 3, there were also significantly fewer AT&BC than BDis and CT (p < .005). Overall, participants compared A with B and C with T, and these transitions were dominant across slices together with BDis, the latter being somewhat more prominent at the beginning of the trial (i.e., significantly different from CT in the first slice, but not in the second and third). This is surprising since, given the dominant theories, the distractor should be discarded soon after the beginning of the trial, whereas the data suggest that participants continue to saccade to it across the two scenes until the end of the trial. For all time slices, essentially no AC or BT transitions were observed.

9.2. Classification prediction based on subsets of transitions

We used the same SVM+LOOCV methodology as in Experiment 1. We started with a broad set of transitions in order to identify single transitions, pairs, or triplets of transitions that would differentiate Simple from Complex scenes. Again, in contrast to the behavioral analysis, which characterizes participants' focus with a subset of main transitions as a function of the time slice, the SVM analyses return the combinations of transitions with predictive power, which are not necessarily the most frequent ones. The set of nine transitions was AB, AC, BC, AT, BT, CT, BDis, CDis, and TDis. One purpose was to look at transitions involving Dis (the semantic‐perceptual distractor in the lower scene). In slice 1, BDis was the most predictive single transition (0.58), though not at a high level. Three pairs were beyond 0.75, all involving Dis (CDis and TDis), and two involving CT. As for the two best triplets, they confirm the presence of Dis, through CDis (twice) and TDis. Thus, early control of Dis (together with C and T) in the solution (target scene) space seems to be important. In slice 3, no single transition approached our fixed threshold. A pair composed of BDis and CT, and two triplets each including BDis and CT, confirmed the predictive power of transitions involving BDis up to the end of the trial. The suggested pattern is that, by the end of the trial, participants compared C and T in order to decide which of the two was the correct solution. However, a parallel, last check that Dis is not the solution confirms that participants keep checking the distractors until they reach a decision. AC and BT had very low discriminative power alone (<0.30) and were not found in highly discriminative pairs or triplets.
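
The leave‐one‐out protocol underlying these analyses can be sketched as follows. For self‐containment, this sketch substitutes a simple nearest‐centroid classifier for the SVM, but the leave‐one‐out loop is the same; the per‐trial feature vectors (transition counts) and labels are hypothetical:

```python
def nearest_centroid_predict(train_X, train_y, x):
    """Predict the label whose class centroid is closest to x (stand-in for the SVM)."""
    centroids = {}
    for label in set(train_y):
        rows = [xi for xi, yi in zip(train_X, train_y) if yi == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda lab: dist2(centroids[lab], x))

def loocv_accuracy(X, y):
    """Leave-one-out cross-validation: hold out each trial once, train on the rest."""
    hits = 0
    for i in range(len(X)):
        train_X = X[:i] + X[i + 1:]
        train_y = y[:i] + y[i + 1:]
        hits += nearest_centroid_predict(train_X, train_y, X[i]) == y[i]
    return hits / len(X)

# Hypothetical (BDis, CT) transition counts per trial
X = [[5, 1], [6, 2], [5, 2], [1, 5], [2, 6], [1, 6]]
y = ["Complex", "Complex", "Complex", "Simple", "Simple", "Simple"]
acc = loocv_accuracy(X, y)
```

The reported predictive scores for single transitions, pairs, and triplets correspond to running this kind of held‐out classification on each candidate feature subset and retaining the subsets whose accuracy exceeds a fixed threshold.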

10. Discussion

Experiment 2 sought to establish search profiles for two levels of relational complexity in Scene analogies, and to ascertain whether participants would adapt their search strategies to the difficulty of the problems. Unsurprisingly, there was a main effect of Complexity on the number of transitions, with more transitions for Complex than for Simple problems. We observed relatively more AB transitions decreasing over time while CT transitions increased and, again, we found no evidence of AC and BT alignment, even in the Complex condition. The transitions between B (in the base scene) and the Distractor (Dis, in the target scene) make sense, since the two stimuli are the “same” (i.e., cats) in both scenes; they indicate that participants check the status of these two occurrences of the same stimulus. The unexpected number of AT and BC transitions suggests that participants also check the relation of B with respect to C and of A with respect to T, presumably to be sure that these relations are not part of the solution. Complexity played a role, with an increased number of AB and AT&BC transitions for Complex problems. This suggests that establishing the role of B required more saccades than in Simple problems and, in addition, required checking irrelevant pairings, such as AT, BC, and BDis. To summarize, complexity did not produce more alignments (in particular, AC and BT), but did require more comparisons for participants to establish the role of the designated stimulus (B), and to check irrelevant associations, such as AT and BC, while discarding Dis as an option over the course of the entire trial. This confirms the results of Experiment 1, in which distractors were focused on more when Complex problems were being solved. The SVM analyses confirm that transitions involving Dis played an important role in differentiating the two conditions.

10.1. General discussion

The overarching goal of the present paper was to better understand the impact of different analogy formats and levels of complexity on analogy problem‐solving strategies, and to find the gaze signatures characterizing each condition. Using techniques from machine learning, we showed that certain combinations of transitions, which define the search strategies employed by participants, were highly predictive of the type and difficulty of the analogies being solved.

Most of the available research using eye‐tracking to study analogy‐making has dealt with well‐identified dimensions of analogy problems, such as those specified in matrices (Bethell‐Fox et al., 1984; Chen et al., 2016; Hayes et al., 2011), and was restricted to a single test format, such as scene analogies (Gordon & Moser, 2007) or proportional analogies (French et al., 2017; Starr et al., 2018; Thibaut & French, 2016; Thibaut et al., 2011). These previous studies did not manipulate difficulty level, or did not systematically study the temporal dynamics of reaching a solution. Some contrasted global strategies, such as constructive matching (Bethell‐Fox et al., 1984), in which participants first generate relations found in the base domain and, subsequently, apply them to the target domain. For example, Starr et al. (2018) translated each of two hypothetical strategies into an algorithm combining early gazes and transitions, which was then applied to each trial and led to its categorization as supporting one strategy type or the other. As these algorithms aggregated gaze lengths, positions, and transitions into a single measure, they were not meant to capture the temporal dynamics we were looking for here. Hayes et al. (2011) and Hayes and Petrov (2016) elaborated metrics to capture strategy regularities in Raven's Progressive Matrices (Raven & Court, 1998). Hayes et al. (2011) measured participants' within‐matrix transitions and matrix‐to‐options transitions (see Bethell‐Fox et al., 1984) and the relative weight of these two strategies. However, they did not look at the temporal dynamics of their integration. Hayes and Petrov (2016) introduced a method combining verbal protocols and pupillary responses in an attempt to disentangle what they called exploration (new hypotheses) and exploitation (pattern descriptions) during solving.
Their main point was to study “explore” and “exploit” search behaviors through their pupil‐diameter correlates and their distribution as a function of problem difficulty and time within a trial. Their approach, however, targeted these two broad categories (explore and exploit) and was not meant to study how participants distributed their gazes over stimuli as a function of their status (e.g., distractor).

Compared to these prior contributions, our paper raised five main issues––namely:

  • the generality of search patterns across analogy formats and across levels of difficulty;
  • the role of alignment across formats and difficulty levels;
  • the unanticipated focus on distractors over the course of a trial and as a function of difficulty level;
  • the time course of the rejection of the distractors; and finally,
  • the predictability of subsets of transitions and AOIs.

Our most general result was that all analogy formats and difficulty levels elicited similar global search patterns, largely characterized by a projection‐first approach. Increasing difficulty led to more gazes within the source domain (A and B) and to more AB transitions. Another general result was that AC and BT transitions were essentially absent in all conditions, or played no role, in conjunction, in differentiating the conditions. We initially hypothesized that Scene analogies and more complex conditions might elicit these alignments, but this turned out not to be the case. Another significant result was the discovery of a systematic examination of the distractors throughout trials, both those related and those unrelated to the A item (Experiment 1). As far as we know, this trial‐long examination of distractors is not predicted by any current theory of analogy‐making. Finally, explorations involving the distractors (AOIs and transitions with SemDis and UnDis, or BDis in the second experiment) could actually increase at the end of the trials. This result is also not predicted by the models of analogical reasoning we are aware of, which see analogy solving as a progressive convergence toward the correct solution, implying a decrease in the examination of distractor items at the end of the trial.

On the other hand, participants did adapt their search strategies to different analogy formats, semantic distances between items, and the number of relations involved. Overall, Scene analogies gave rise to more gazes toward B than to A. Scene and Proportional analogies differed in that the target was examined earlier in Scene analogies, whereas Proportional analogy problems had more CT transitions than Scene analogy problems. The increased level of exploration of distractors (UnDis in Experiment 1 and Dis in Experiment 2) at the end of the trials was more pronounced in Complex conditions.

The examination of distractors, whether semantically related to C or not, throughout the trial is one of the main findings of the present research. Most models predict a progressive convergence toward an analogical match as the trial proceeds, with local, perceptual, irrelevant matches progressively discarded, and with items not semantically related to C being the first to go. Experiment 1 revealed that, by the end of Complex problems, participants were increasingly testing both semantically related and unrelated distractors, suggesting that they continued to test these solutions in parallel with the correct solution. Sometimes, the number of transitions to distractor items in Complex problems even exceeded the number of C‐to‐Target (CT) transitions.

How should this result be interpreted? Our search‐space hypothesis predicts that, when the relational solution is less salient, participants will tend to test unrelated distractors more systematically, whether or not they are connected to C by the low‐saliency relation that holds between A and B. This takes more time, since an infinite number of descriptors is potentially available for each item (Goldstone, 1994b; Goodman, 1972; Murphy & Medin, 1985).

In addition, paradoxically, it can be easier to establish that a semantically related distractor is not the relational (i.e., analogical) solution because, once a relation between the distractor and C is identified, it is easy to check whether this relation holds in the base domain. Similarly, Experiment 2 showed that participants continued to compare B in the upper scene (the cat chasing the mouse) with Dis in the lower scene (the cat in the foreground, the semantic distractor) throughout the entire trial, rather than only in the early stages of solving the analogy (see also Gordon & Moser, 2007 , for results and a discussion of the precedence of perceptual matches over relational matches). This suggests that there is no deactivation of the distractor over the course of the trial, which means that participants continue to transition to it or gaze at it. This casts doubt on the standard view of convergence to a solution in which the items that are irrelevant to a solution are gradually discarded. At a more methodological level, our data suggest that segmenting trials into a number of time slices is an appropriate approach to study the evolution over time of solving an analogy problem.

Our machine learning approach, developed and described in French et al. (2017), provides a new tool aimed at better characterizing the underlying dynamics of analogy‐making. We looked for the smallest subset of transitions that could accurately predict, as early in the trial as possible, the type of problem being solved (Complex vs. Simple, Correct vs. Error). Perhaps the most important contribution of the SVM+LOOCV technique is that it shows the extent to which, very early in a trial, one can predict well above chance the difficulty of a problem from the number and type of transitions observed, these corresponding to the search strategy adopted by the participant.

10.2. Modeling analogical reasoning

As already argued by Thibaut and French ( 2016 ), we believe that our data impose certain important constraints on models of analogy‐making, not only in terms of what participants actually examine when solving a problem, but also when they examine various items and relations. Our experiments made no attempt to evaluate formal modeling approaches to analogical reasoning but, rather, examined some of the behavioral predictions derived from these models. The strengths and weaknesses of these models have been the focus of other papers (e.g., French, 2002 ; Gentner & Forbus, 2011 , for reviews of computational modeling efforts).

10.3. What our data show

10.3.1. Relational-priming models

These models predict essentially none of the back-and-forth dynamics that we observed empirically as participants solved standard types of analogy problems (Leech et al., 2008). Their underlying assumption is that once the relation between A and B is perceived, no further exploration is required and a solution to the problem is quasi-immediate. Our data clearly do not support this view of analogical reasoning, in which the AB relation, once discovered, would prime the C-Target relation. Instead, we observed extensive evidence of systematic comparisons between C, the Target, and the distractors in both experiments, including ongoing comparisons involving the unrelated distractors in Experiment 1. In the latter case, transitions between C and the unrelated distractors were among the most numerous and, moreover, occurred throughout the trial, something that is certainly not predicted by an automatic-priming view, which predicts a rapid selection of the relational solution and no systematic gazes at, and comparisons with, distractors at the end of the trial.

10.3.2. Projection-first models

Our results are largely consistent with the idea of an exploration of the source domain for relations, which are then generalized to the target domain. This is consistent with a model such as LISA (Hummel & Holyoak, 1997), in which mappings are conceived as guided pattern-matching. For example, in the cat-mouse pair, pointing to the cat will activate the proposition "chase(mouse, cat)." This should be followed by CT saccades corresponding to "chase(girl, boy)," since the relation chase exists in both domains. This approach predicts that the higher the activation of a relation, the lower the activation of other relations, which, over time, predicts less activation of items that are irrelevant to the solution of the analogy problem. It is, however, not clear how this model would account for the high level of transitions involving semantic distractors and, especially, transitions involving unrelated distractors throughout the trial, or for the T-SemDis transitions at the end of the trials in Experiment 1.
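The projection-first step described above can be sketched as a toy lookup: a relation is discovered in the source pair and then checked against each candidate in the target scene. The scene contents and the relation table are invented for illustration; this is a schematic sketch, not the LISA architecture itself.

```python
# Toy projection-first sketch: find the relation linking A and B in the
# source, then project it onto the target domain to pick the answer.
source = ("cat", "mouse")
target_c = "girl"
options = ["boy", "ball", "tree"]          # relational Target + distractors

# Known relational facts, as (relation, agent, patient) triples (invented).
facts = {("chase", "cat", "mouse"), ("chase", "girl", "boy"),
         ("hold", "girl", "ball")}

def relations_between(a, b):
    return {r for (r, x, y) in facts if (x, y) == (a, b)}

# 1. Explore the source: what relation links A and B?
ab_relations = relations_between(*source)

# 2. Project: which option stands in the same relation to C?
answer = next(o for o in options
              if ab_relations & relations_between(target_c, o))
print(answer)
```

Note that nothing in this scheme ever samples the distractors again once the projected relation is verified, which is exactly where the observed gaze data diverge from the model's prediction.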

10.3.3. Alignment-first models

These models are based on the alignment of entities that play equivalent roles. The basic idea is that participants start by looking for mappings of various sorts, including perceptual mappings, which, at least initially, consist of many local mappings. The purpose is to discover mappings that transfer global relational meaning between the source and target domains. One of the main claims of this approach is that there will be between-domain alignments (notably, AC and BT). However, for the two types of analogy formats presented in this paper, for each level of problem difficulty, and for each time slice within problems, we found no evidence of these cross-domain alignments. Further, there were numerous alignments not predicted by this model, in particular, AT and BC transitions.

10.3.4. Parallel‐terraced scan models

These models make no claim as to alignment-first or projection-first strategies, being rather a fluid combination of both approaches, based on activations in an associated semantic network. However, the lack of AC and BT transitions that we have shown empirically is problematic for this model, as is children's initial overfocusing on relations centered around C (French et al., 2017; Thibaut & French, 2016). These models also do not predict transitions to the distractors, especially the fact that such transitions tend to increase at the end of Complex trials.

10.4. Adapting models to the current results

Our paper offers a combination of three related results: the generality of strategies across analogy-problem formats, the importance of search-strategy changes that depend on the intrinsic difficulty of the task, and the continued transitioning to distractors, including unrelated ones, throughout the trial.

Most approaches posit a type of cognitive-resource sharing, whereby the higher the activation of a particular relation, the lower the activation of other relations; over time, this predicts less activation of items that are irrelevant to the solution of the analogy problem. Thus, one challenge for these models is to account for the high level of transitions involving semantic distractors in both experiments and, to an even larger extent, unrelated distractors throughout the trial. These results suggest that the analogical choice is paralleled by a continued search for other, potentially less obvious solutions when the correct answer has a low level of activation. Continued saccading to unrelated distractors that have no clear semantic relation with C is problematic for most models.
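The resource-sharing assumption can be caricatured with a toy normalisation scheme (the names and update rule here are invented; no specific published model is reproduced): repeatedly boosting the relational target and renormalising over a fixed activation budget drives every competitor toward zero.

```python
# Toy caricature of "cognitive-resource sharing": evidence for the
# relational solution raises its activation, and renormalisation over a
# fixed total suppresses every competitor.
def step(acts, winner, boost=0.3):
    acts = dict(acts)
    acts[winner] += boost
    total = sum(acts.values())
    return {k: v / total for k, v in acts.items()}  # shared, fixed resource

acts = {"Target": 0.25, "SemDis": 0.25, "UnDis1": 0.25, "UnDis2": 0.25}
for _ in range(10):
    acts = step(acts, "Target")

# Distractor activation decays toward zero, so such a scheme predicts
# that gazes to distractors should vanish late in the trial, the
# opposite of the sustained distractor transitions reported here.
print({k: round(v, 3) for k, v in acts.items()})
```

Any model built on this kind of winner-take-most dynamics will struggle to generate the late-trial distractor sampling observed in both experiments.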

Further, models of analogy need to move away from a clear separation between the representation of data and the processing of those representations, in the sense that processing (i.e., comparisons of pairs of stimuli) potentially leads to rerepresentation of the original data (i.e., of the possible relations between items in a pair). In the Complex problems presented here, we have seen how participants adapt their representations as they are solving the problem. A rerepresentation can be conceived either as dynamically changing built-in representations or as constructing novel representations on the fly. It is difficult to imagine that all the relations necessary to solve any particular analogy problem could be built into the system a priori; these relations must be discovered on the fly and cannot reasonably be anticipated in advance (e.g., French, 1995; Hofstadter & Sander, 2013; Kokinov, Bliznashki, Kosev, & Hirstova, 2007; Schyns, Goldstone, & Thibaut, 1998). The issue we raise here is certainly not new: the question of preencoded versus dynamically encoded representations is a longstanding and difficult one, originating in early work on semantic memory (e.g., Smith, 1978).

10.5. Limitations of the present contribution

One limitation of the present work is that it is not exhaustive. We have concentrated on types of analogy problems that are widely used in the research literature on analogy-making and for which the solution is to be found among a small set of options. Other types of problems are, of course, available. For example, one might argue that an analogy verification task, in which valid or invalid analogies have to be verified (e.g., "Is dog:bone::bird:seeds valid?"), would be a better case for alignments.

In conclusion, our hope is that this paper will lead to appropriate adjustments to current models of analogy-making, which we believe have difficulty accounting for the results we have presented.

Open Research Badges

This article has earned Open Data and Open Materials badges. Data and materials are available at https://osf.io/kp5j9/.

Acknowledgments

This research was supported in part by French ANR grants 10‐BLAN‐1908‐01, ANAFONEX, and ANR‐18‐CE28‐0019‐01 COMPARE to Jean‐Pierre Thibaut, a joint ANR‐ESRC grant ORA 10–056 GETPIMA to Robert French, FABER grants from the Conseil Regional de Bourgogne to Jean‐Pierre Thibaut, and a BFC regional council PARI doctoral grant to Yannick Glady. We would also like to thank Patrick Bard for preparing one experiment and extracting the eye‐tracking data from the raw data, and Marion Mallot for stimulus construction and data collection.

Appendix A. Experiment 1. AOIs: confidence intervals

                      Simple                       Complex
Slice  AOI            Mean   −95%    +95%         Mean   −95%    +95%
1      A&B           22.20   20.25   24.14        15.16   13.18   17.15
1      C&T            6.53    5.81    7.26         6.41    5.77    7.04
1      SemDis         2.52    1.52    3.51         6.37    5.25    7.48
1      UnDis          2.09    1.19    2.99         5.40    4.28    6.51
2      A&B            3.85    2.81    4.89         7.61    6.41    8.82
2      C&T            8.28    7.38    9.19         7.14    6.48    7.80
2      SemDis        10.88    9.71   12.05        10.09    9.22   10.96
2      UnDis         10.51    9.54   11.48         8.49    7.54    9.43
3      A&B            4.76    3.11    6.42         6.63    5.45    7.82
3      C&T           13.82   12.58   15.06        10.97    9.96   11.99
3      SemDis         7.59    6.22    8.95         8.57    7.34    9.81
3      UnDis          7.16    5.82    8.50         7.16    5.69    8.62

Appendix A. Experiment 1. Transitions: confidence intervals

                                Simple                      Complex
Slice  Transition               Mean   −95%   +95%         Mean   −95%   +95%
1      AB                       1.14   1.09   1.18         1.24   1.14   1.34
1      AC&BT                    0.04  −0.01   0.09         0.07   0.02   0.11
1      CT                       0.17   0.10   0.25         0.46   0.36   0.56
1      C&T_SemDis               0.14   0.07   0.21         0.31   0.22   0.39
1      C&T&SemDis_Undis         0.20   0.12   0.28         0.59   0.47   0.72
2      AB                       0.48   0.35   0.61         0.97   0.83   1.11
2      AC&BT                    0.09   0.03   0.14         0.14   0.09   0.20
2      CT                       0.47   0.41   0.52         0.61   0.55   0.68
2      C&T_SemDis               0.28   0.22   0.34         0.48   0.38   0.57
2      C&T&SemDis_Undis         0.80   0.75   0.85         0.79   0.71   0.88
3      AB                       0.51   0.36   0.65         0.84   0.71   0.97
3      AC&BT                    0.22   0.16   0.28         0.30   0.23   0.37
3      CT                       0.43   0.37   0.50         0.59   0.51   0.66
3      C&T_SemDis               0.38   0.26   0.50         0.66   0.54   0.78
3      C&T&SemDis_Undis         0.69   0.63   0.74         0.80   0.69   0.92

Appendix A. Experiment 2. AOIs: confidence intervals

Significant interaction involving complexity but no difference between simple and complex conditions.

Appendix A. Experiment 2. Transitions: confidence intervals

                     Simple                      Complex
Transition           Mean   −95%   +95%         Mean   −95%   +95%
AB                   0.22   0.11   0.33         0.36   0.21   0.50
CT                   0.27   0.20   0.34         0.30   0.20   0.39
AC-BT                0.05   0.01   0.09         0.04   0.01   0.08
AT-BC                0.16   0.07   0.24         0.30   0.18   0.41
Bdis                 0.25   0.14   0.35         0.32   0.22   0.42
CDis-TDis            0.04   0.01   0.06         0.00   0.00   0.00
  • Arlot, S., & Celisse, A. (2010). A survey of cross-validation procedures for model selection. Statistics Surveys, 4, 40–79.
  • Bendetowicz, D., Urbanski, M., Aichelburg, C., Levy, R., & Volle, E. (2017). Brain morphometry predicts individual creative potential and the ability to combine remote ideas. Cortex, 86, 216–229.
  • Bethell-Fox, C. E., Lohman, D. F., & Snow, R. E. (1984). Adaptive reasoning: Componential and eye movement analysis of geometric analogy performance. Intelligence, 8(3), 205–238.
  • Bugaiska, A., & Thibaut, J.-P. (2015). Analogical reasoning and aging: The processing speed and inhibition hypothesis. Aging, Neuropsychology, and Cognition, 22(3), 340–356.
  • Chen, Z., Honomichl, R., Kennedy, D., & Tan, E. (2016). Aiming to complete the matrix: Eye-movement analysis of processing strategies in children's relational thinking. Developmental Psychology, 52(6), 867–878.
  • Collins, A. M., & Loftus, E. F. (1975). A spreading-activation theory of semantic processing. Psychological Review, 82(6), 407–428.
  • Doumas, L. A., Hummel, J. E., & Sandhofer, C. M. (2008). A theory of the discovery and predication of relational concepts. Psychological Review, 115(1), 1–43.
  • Duchowski, A. (2007). Eye tracking methodology: Theory and practice (2nd ed.). Springer.
  • Falkenhainer, B., Forbus, K. D., & Gentner, D. (1989). The structure-mapping engine: Algorithm and examples. Artificial Intelligence, 41(1), 1–63.
  • French, R. M. (1995). The subtlety of sameness: A theory and computer model of analogy-making. MIT Press.
  • French, R. M. (2002). The computational modeling of analogy-making. Trends in Cognitive Sciences, 6(5), 200–205.
  • French, R. M., Glady, Y., & Thibaut, J. P. (2017). An evaluation of scanpath-comparison and machine-learning classification algorithms used to study the dynamics of analogy making. Behavior Research Methods, 49(4), 1291–1302.
  • Geisser, S. (1975). The predictive sample reuse method with applications. Journal of the American Statistical Association, 70(350), 320–328.
  • Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155–170.
  • Gentner, D., & Forbus, K. D. (2011). Computational models of analogy. Wiley Interdisciplinary Reviews: Cognitive Science, 2(3), 266–276.
  • Gentner, D., Holyoak, K. J., & Kokinov, B. (2001). The analogical mind: Perspectives from cognitive science. MIT Press.
  • Gentner, D., & Toupin, C. (1986). Systematicity and surface similarity in the development of analogy. Cognitive Science, 10(3), 277–300.
  • Gick, M. L., & Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12(3), 306–355.
  • Glady, Y., French, R. M., & Thibaut, J. P. (2017). Children's failure in analogical reasoning tasks: A problem of focus of attention and information integration? Frontiers in Psychology, 8, 707.
  • Goldstone, R. L. (1994a). Similarity, interactive activation, and mapping. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20(1), 3–28.
  • Goldstone, R. L. (1994b). The role of similarity in categorization: Providing a groundwork. Cognition, 52(2), 125–157.
  • Goodman, N. (1972). Seven strictures on similarity. In Problems and projects. Bobbs-Merrill.
  • Gordon, P. C., & Moser, S. (2007). Insight into analogies: Evidence from eye movements. Visual Cognition, 15(1), 20–35.
  • Green, A. E. (2016). Creativity, within reason: Semantic distance and dynamic state creativity in relational thinking and reasoning. Current Directions in Psychological Science, 25(1), 28–35.
  • Green, A. E., Kraemer, D. J. M., Fugelsang, J. A., Gray, J. R., & Dunbar, K. (2010). Connecting long distance: Semantic distance in analogical reasoning modulates frontopolar cortex activity. Cerebral Cortex, 20(1), 70–76.
  • Green, A. E., Kraemer, D. J., Fugelsang, J. A., Gray, J. R., & Dunbar, K. (2012). Neural correlates of creativity in analogical reasoning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38(2), 264–272.
  • Hayes, T., Petrov, A., & Sederberg, P. B. (2011). A novel method for analyzing sequential eye movements reveals strategic influence on Raven's Advanced Progressive Matrices. Journal of Vision, 11(10), 1–11.
  • Hayes, T. R., & Petrov, A. A. (2016). Pupil diameter tracks the exploration–exploitation trade-off during analogical reasoning and explains individual differences in fluid intelligence. Journal of Cognitive Neuroscience, 28(2), 308–318.
  • Hobeika, L., Diard-Detoeuf, C., Garcin, B., Levy, R., & Volle, E. (2016). General and specialized brain correlates for analogical reasoning: A meta-analysis of functional imaging studies. Human Brain Mapping, 37(5), 1953–1969.
  • Hofstadter, D. R., & Sander, E. (2013). Surfaces and essences: Analogy as the fuel and fire of thinking. Basic Books.
  • Holyoak, K. J. (2012). Analogy and relational reasoning. Oxford University Press.
  • Hummel, J. E., & Holyoak, K. J. (1997). Distributed representations of structure: A theory of analogical access and mapping. Psychological Review, 104(3), 427–466.
  • Keane, M. (1987). On retrieving analogues when solving problems. Quarterly Journal of Experimental Psychology, 39(1), 29–41.
  • Kmiecik, M. J., Brisson, R. J., & Morrison, R. G. (2019). The time course of semantic and relational processing during verbal analogical reasoning. Brain and Cognition, 129, 25–34.
  • Kokinov, B., Bliznashki, S., Kosev, S., & Hirstova, P. (2007). Analogical mapping and perception: Can mapping cause a re-representation of the target stimulus? Proceedings of the Annual Meeting of the Cognitive Science Society.
  • Krawczyk, D. (2017). Reasoning: The neuroscience of how we think. Academic Press.
  • Leech, R., Mareschal, D., & Cooper, R. P. (2008). Analogy as relational priming: A developmental and computational perspective on the origins of a complex cognitive skill. Behavioral and Brain Sciences, 31(4), 357–378.
  • Le Meur, O., & Baccino, T. (2013). Methods for comparing scanpaths and saliency maps: Strengths and weaknesses. Behavior Research Methods, 45(1), 251–266.
  • Markman, A. B., & Gentner, D. (1993). Structural alignment during similarity comparisons. Cognitive Psychology, 25(4), 431–467.
  • Miller, R. G. (1974). The jackknife: A review. Biometrika, 61(1), 1–15.
  • Mitchell, M. (1993). Analogy-making as perception: A computer model. MIT Press.
  • Mulholland, T. M., Pellegrino, J. W., & Glaser, R. (1980). Components of geometric analogy solution. Cognitive Psychology, 12(2), 252–284.
  • Murphy, G. L. (2002). The big book of concepts. MIT Press.
  • Murphy, G. L., & Medin, D. L. (1985). The role of theories in conceptual coherence. Psychological Review, 92(3), 289–316.
  • Rattermann, M. J., & Gentner, D. (1998). More evidence for a relational shift in the development of analogy: Children's performance on a causal-mapping task. Cognitive Development, 13(4), 453–478.
  • Raven, J. C., & Court, J. H. (1998). Raven's progressive matrices and vocabulary scales. Oxford Psychologists Press.
  • Rayner, K. (2012). Eye movements and visual cognition: Scene perception and reading. Springer Science & Business Media.
  • Richland, L. E., Morrison, R. G., & Holyoak, K. J. (2006). Children's development of analogical reasoning: Insights from scene analogy problems. Journal of Experimental Child Psychology, 94(3), 249–273.
  • Schyns, P. G., Goldstone, R. L., & Thibaut, J. P. (1998). The development of features in object concepts. Behavioral and Brain Sciences, 21(1), 1–17.
  • Smith, E. E. (1978). Theories of semantic memory. In Handbook of learning and cognitive processes.
  • Starr, A., Vendetti, M. S., & Bunge, S. A. (2018). Eye movements provide insight into individual differences in children's analogical reasoning strategies. Acta Psychologica, 186, 18–26.
  • Sternberg, R. J. (1977). Component processes in analogical reasoning. Psychological Review, 84(4), 353–378.
  • Stevenson, C. E., Heiser, W. J., & Resing, W. C. (2013). Working memory as a moderator of training and transfer of analogical reasoning in children. Contemporary Educational Psychology, 38(3), 159–169.
  • Steyvers, M., & Tenenbaum, J. B. (2005). The large-scale structure of semantic networks: Statistical analyses and a model of semantic growth. Cognitive Science, 29(1), 41–78.
  • Stone, M. (1974). Cross-validatory choice and assessment of statistical predictions. Journal of the Royal Statistical Society: Series B (Methodological), 36(2), 111–133.
  • Thibaut, J. P., French, R. M., Missault, A., Gérard, Y., & Glady, Y. (2011). In the eyes of the beholder: What eye-tracking reveals about analogy-making strategies in children and adults. Proceedings of the Thirty-Third Annual Meeting of the Cognitive Science Society, 33, 453–458.
  • Thibaut, J. P., French, R. M., & Vezneva, M. (2010a). The development of analogy making in children: Cognitive load and executive functions. Journal of Experimental Child Psychology, 106(1), 1–19.
  • Thibaut, J. P., French, R. M., & Vezneva, M. (2010b). Cognitive load and semantic analogies: Searching semantic space. Psychonomic Bulletin & Review, 17(4), 569–574.
  • Thibaut, J. P., & French, R. M. (2016). Analogical reasoning, control and executive functions: A developmental investigation with eye-tracking. Cognitive Development, 38, 10–26.
  • Vapnik, V. (1999). The nature of statistical learning theory. Springer Science & Business Media.
  • Vendetti, M., Knowlton, B. J., & Holyoak, K. J. (2012). The impact of semantic distance and induced stress on analogical reasoning: A neurocomputational account. Cognitive, Affective, & Behavioral Neuroscience, 12(4), 804–812.
  • Vendetti, M. S., Starr, A., Johnson, E. L., Modavi, K., & Bunge, S. A. (2017). Eye movements reveal optimal strategies for analogical reasoning. Frontiers in Psychology, 8, 932.
  • Vendetti, M. S., Wu, A., & Holyoak, K. J. (2014). Far-out thinking: Generating solutions to distant analogies promotes relational thinking. Psychological Science, 25(4), 928–933.

PRODIGY/ANALOGY: Analogical reasoning in general problem solving

Manuela M. Veloso

Invited conference paper, European Workshop on Case-Based Reasoning. Part of the book series: Lecture Notes in Computer Science (LNAI, volume 837). First online: 01 January 2005.

This paper describes the integration of analogical reasoning into general problem solving as a method of learning at the strategy level to solve problems more effectively. The method, based on derivational analogy, has been fully implemented in prodigy/analogy and proven empirically to be amenable to scaling up, both in terms of domain and problem complexity. prodigy/analogy addresses a set of challenging problems, namely: how to accumulate episodic problem solving experience (cases); how to define and decide when two problem solving situations are similar; how to organize a large library of planning cases so that it may be efficiently retrieved; and, finally, how to successfully transfer chains of problem solving decisions from past experience to new problem solving situations when only a partial match exists among corresponding problems. The paper discusses the generation and replay of problem solving cases, and we illustrate the algorithms with examples. We briefly present the library organization and the retrieval strategy. We relate this work to other strategy-learning methods, and also to plan reuse. prodigy/analogy casts the strategy-level learning process for the first time as the automation of the complete cycle of constructing, storing, retrieving, and flexibly reusing problem solving experience. We demonstrate the effectiveness of the analogical replay strategy by providing empirical results on the performance of prodigy/analogy, accumulating and reusing a large case library in a complex problem solving domain. The integrated learning system reduces the problem solving search effort incrementally as more episodic experience is compiled into the library of accumulated learned knowledge.
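The store/retrieve/replay cycle described in the abstract can be sketched as follows. The case structure, the overlap-based similarity measure, and the replay check below are invented for illustration and are greatly simplified relative to prodigy/analogy.

```python
# Illustrative sketch of case storage, partial-match retrieval, and
# selective replay of a recorded decision sequence. All domain names
# (goals, operators) are invented placeholders.
from dataclasses import dataclass

@dataclass
class Case:
    goal: frozenset          # features the stored problem achieved
    steps: list              # recorded decision sequence (operator names)

library = [
    Case(frozenset({"at-A", "holding-key"}), ["goto-A", "pickup-key"]),
    Case(frozenset({"at-B", "door-open"}), ["goto-B", "unlock", "open-door"]),
]

def retrieve(library, new_goal):
    # Pick the case whose goal footprint overlaps the new goal most:
    # a partial match suffices; an exact match is not required.
    return max(library, key=lambda c: len(c.goal & new_goal))

def replay(case, new_goal, applicable):
    # Reuse each past decision whose justification still holds in the
    # new situation; decisions that no longer apply are skipped (a full
    # system would fall back to fresh search at that point).
    return [s for s in case.steps if applicable(s, new_goal)]

new_goal = frozenset({"at-B", "door-open", "lights-on"})
case = retrieve(library, new_goal)
# Suppose "unlock" is no longer applicable (the door has no lock here).
plan = replay(case, new_goal, applicable=lambda step, goal: step != "unlock")
print(case.steps, "->", plan)
```

The sketch shows the key property the paper emphasizes: retrieval tolerates partial matches, and replay transfers only those past decisions whose justifications survive in the new problem.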

Special thanks to Jaime Carbonell for his guidance, suggestions, and discussions on this work. A reduced version of this paper was published in the Proceedings of the Twelfth National Conference on Artificial Intelligence, 1994. This research is sponsored by the Wright Laboratory, Aeronautical Systems Center, Air Force Materiel Command, USAF, and the Advanced Research Projects Agency (ARPA) under grant number F33615-93-1-1330. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Wright Laboratory or the U.S. Government.



Author information

Manuela M. Veloso, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213-3891


Copyright information

© 1994 Springer-Verlag Berlin Heidelberg

About this paper

Veloso, M.M. (1994). PRODIGY/ANALOGY: Analogical reasoning in general problem solving. In: Wess, S., Althoff, K.D., Richter, M.M. (eds) Topics in Case-Based Reasoning. EWCBR 1993. Lecture Notes in Computer Science, vol 837. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-58330-0_75

Published: 02 June 2005. Publisher: Springer, Berlin, Heidelberg. Print ISBN: 978-3-540-58330-1. Online ISBN: 978-3-540-48655-8.




COMMENTS

  1. Analogical Problem Solving

    Definition. Analogical problem solving is a cognitive process where individuals use their knowledge of a familiar problem or situation to help solve a new, unfamiliar problem. It involves drawing connections and transferring relevant information from a known domain to a novel one in order to find a solution.

  2. Analogical Thinking: A Method For Solving Problems

    Whether he realised it or not, de Mestral used what today we term "analogical thinking" or analogical reasoning; the process of finding a solution to a problem by finding a similar problem with a known solution and applying that solution to the current situation. An analogy is a comparison between two objects, or systems of objects, that ...

  3. What Is Analogical Reasoning?

    Analogical reasoning involves identifying similarities between different situations to make inferences or solve problems. It is a form of ampliative reasoning, which is defined by extrapolating insights from one context to another, drawing connections, and identifying similarities between disparate situations.

  4. Analogical reasoning

    Analogical reasoning is the cognitive process of drawing parallels between two different situations or concepts, allowing individuals to transfer knowledge and solve problems based on similarities. This type of reasoning is crucial for creativity, as it helps in generating new ideas by connecting existing knowledge in innovative ways. It also plays a role in forming mental models, where ...

  5. Analogical Reasoning

    Problem Solving: Deduction, Induction, and Analogical Reasoning. F. Klix, in International Encyclopedia of the Social & Behavioral Sciences, 2001. 6 Reasoning and Analogies. While deductive reasoning, in logic, refers to the necessary outcomes of a set of conditions, inductive reasoning is concerned with determining the likelihood of an outcome.

  6. PDF Analogical Reasoning, Psychology of

    the use of analogy in problem-solving and held that analogical mapping processes are oriented towards attaining goals (such as solutions to problems). According to pragmatic mapping theory, it is goal relevance that determines what is selected in analogy. Holyoak and Thagard (1989) later combined this pragmatic focus with structural factors in ...

  7. Analogical problem-solving

    Definition. Analogical problem-solving is a cognitive process that involves using the knowledge and solutions from a previous, similar situation to address a new problem. This method leverages the structural similarities between the two problems, allowing individuals to draw parallels and apply learned strategies from past experiences to ...

  8. 5

    The specific role of analogical thinking in creativity is the focus of Chapter 5. The chapter first reviews general aspects of analogical thinking, which leads to an examination of the mechanisms whereby analogical thinking plays a role in creativity. One such mechanism involves analogical transfer: the solution from a problem in memory is ...

  9. Analogical Reasoning

    Definition. Analogical Reasoning and Its Uses. ... Practically speaking, analogical thinking is the basis of much of problem solving in the sense that many of these problems are solved based on previous examples. This involves abstracting details from a particular set of problems, comparing and resolving structural similarities, and extracting ...

  10. The Development of Analogical Problem Solving

    More recently, researchers in three relatively insular disciplines have focused on analogical reasoning. First, cognitive scientists have proposed that analogy plays a principal role in the induction mechanisms of intelligent systems, both biological and electronic. Thus models and simulations have appeared with increasing frequency in that ...

  11. What is Analogical Reasoning?

    Analogical reasoning is using an analogy, a type of comparison between two things, to develop understanding and meaning. It's commonly used to make decisions, solve problems and communicate. As a tool of decision making and problem solving, analogy is used to simplify complex scenarios to something that can be more readily understood.

  12. Analogy and Analogical Reasoning

    An analogy is a comparison between two objects, or systems of objects, that highlights respects in which they are thought to be similar.Analogical reasoning is any type of thinking that relies upon an analogy. An analogical argument is an explicit representation of a form of analogical reasoning that cites accepted similarities between two systems to support the conclusion that some further ...

  13. Analogical Reasoning: What Develops? A Review of Research and ...

    This definition allows analogical reasoning to encompass problem solving (using the solution to a known problem to solve a structurally similar problem), relational mapping (e.g., recognizing the relational similarity between the solar system and an atom [Gentner, 1983] and using one to understand the other), school procedures such as the use

  14. 8

    One such skill is analogical problem solving - the use of a solution to a known source problem to develop a solution to a novel target problem. At the most general level, analogical problem solving involves three steps, each of which raises difficult theoretical problems (Holyoak, 1984, 1985). The first step involves accessing a plausibly ...

  15. Analogical thinking

    Analogical thinking. Analogical thinking is what we do when we use information from one domain (the source or analogy) to help solve a problem in another domain (the target). Experts often use analogies during the process of problem solving, and analogies have been involved in numerous scientific discoveries. However, studies of novice problem ...

  16. 11

    Summary. When people encounter a novel problem, they might be reminded of a problem they solved previously, retrieve its solution, and use it, possibly with some adaptation, to solve the novel problem. This sequence of events, or "problem-solving transfer," has important cognitive benefits: It saves the effort needed for derivation of new ...

  17. The Value of Analogies to Problem-Solving

    In short, analogical thinking is when we use information, solutions, ideas from one domain (the source) to solve a problem in a new domain (the target). Analogical reasoning has been shown to be an effective way to solve difficult scientific problems (e.g. Kepler, Scientific Labs), but many people have difficulty applying the lessons from one ...

  18. PDF Analogical Problem Solving: Insights from Verbal Reports

    Analogical Problem Solving. Analogies are based on shared relations between base and target problem (Gentner, 1983; Clement & Gentner, 1991). By highlighting shared relational structures, analogies connect domains and problems that may appear only marginally similar on the surface. This process involves

  19. Understanding the What and When of Analogical Reasoning Across Analogy

    In solving an analogy problem what information should be processed and when? Most research in the field has dealt with interpretations of analogies, their soundness, factors influencing their comprehension, with or without reaction‐time data (RTs). ... By definition, analogical reasoning involves multiple sources of information and various ...

  20. PDF Modeling Visual Problem-Solving as Analogical Reasoning Andrew Lovett

    Modeling Visual Problem-Solving as Analogical Reasoning Analogy is perhaps the cornerstone of human intelligence (Gentner, 2003, 2010; Penn, Holyoak, & Povinelli, 2008; Hofstadter & Sander, 2013). By comparing two domains and identifying commonalities in their structure, we can derive useful inferences and develop novel

  21. PRODIGY/ANALOGY: Analogical reasoning in general problem solving

    Abstract. This paper describes the integration of analogical reasoning into general problem solving as a method of learning at the strategy level to solve problems more effectively. The method, based on derivational analogy, has been fully implemented in PRODIGY/ANALOGY and proven empirically to be amenable to scaling up, both in terms of domain ...

  22. Analogical problem solving

    In essence, both an analogy and a schema consist of an organized system of relations. Consequently, the framework for analogical problem solving presented here will draw its conceptual vocabulary from various schema-based models, as well as from Sternberg's (1977a, 1977b) model of component processes involved in analogical reasoning.
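Several of the resources above describe the same three-step pattern: retrieve a previously solved problem that is structurally similar to the new one, map its solution across, and adapt it. The sketch below illustrates only the first step, similarity-based retrieval, using a tiny invented case base and Jaccard overlap between feature sets; the case base, the feature names, and the choice of similarity measure are illustrative assumptions, not drawn from any of the works listed.

```python
# Minimal sketch of similarity-based case retrieval, the first step of
# analogical problem solving: find the stored case most like the target.

def jaccard(a, b):
    """Overlap between two feature sets: |A & B| / |A | B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve(case_base, target_features):
    """Return the (features, solution) case most similar to the target."""
    return max(case_base, key=lambda case: jaccard(case[0], target_features))

# Hypothetical case base: each case pairs problem features with a known solution.
case_base = [
    ({"radiation", "tumor", "single-beam"},
     "divide the force into many weak converging beams"),
    ({"army", "fortress", "mined-roads"},
     "split the army into small converging groups"),
]

# A target resembling Duncker's fortress problem retrieves the fortress case,
# whose solution can then be mapped onto the new domain.
features, solution = retrieve(case_base, {"army", "fortress", "roads"})
```

A fuller system would follow retrieval with mapping and adaptation, which is where the hard theoretical problems noted in the excerpts (e.g. Holyoak, 1984, 1985) actually lie; surface-feature overlap is only a crude stand-in for the structural similarity those accounts require.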