Prediction vs Hypothesis: In-Depth Comparison

When discussing the difference between prediction and hypothesis, it’s important to understand the distinct meanings and implications of these two terms. Prediction and hypothesis are both used in scientific research and analysis, but they serve different purposes and have different characteristics.

In simple terms, a prediction is a statement or assertion about what will happen in the future based on existing knowledge or observations. It involves making an educated guess or forecast about an outcome or event that has not yet occurred. A prediction is often based on patterns, trends, or correlations identified in data or observations, and it aims to provide insight into what is likely to happen.

On the other hand, a hypothesis is a tentative explanation or proposition that is formulated to explain a specific phenomenon or observed data. It is a proposed explanation or theory that can be tested through scientific methods and experiments. A hypothesis is typically based on prior knowledge, observations, or existing theories, and it aims to provide a possible explanation for a particular phenomenon or set of observations.

While predictions focus on forecasting future events, hypotheses are concerned with explaining and understanding existing phenomena. Predictions can be validated or invalidated by subsequent events or data, whereas hypotheses can be supported or rejected through empirical testing and analysis. In the following sections, we will delve deeper into the characteristics, uses, and examples of predictions and hypotheses in various fields of study.

The Definitions

In the realm of scientific research and analysis, it is crucial to have a clear understanding of the terms “prediction” and “hypothesis.” These terms are often used interchangeably in everyday conversation, but they hold distinct meanings and play different roles in the scientific process. Let’s delve into the definitions of both prediction and hypothesis:

Define Prediction

A prediction, in the context of scientific inquiry, refers to a statement or assertion about a future event or outcome. It is a logical deduction or inference based on existing knowledge, observations, and patterns. Predictions are typically made with the intention of testing their accuracy and validity through empirical evidence.

Predictions are often formulated by researchers or scientists who aim to anticipate the results of an experiment, an observation, or a specific phenomenon. They are based on a comprehensive analysis of available data and previous research findings. Predictions can be either quantitative or qualitative, depending on the nature of the research question and the available information.

For instance, in the field of meteorology, a prediction might involve estimating the likelihood of rainfall in a particular region during a specific time frame. In this case, meteorologists use various atmospheric indicators, historical weather patterns, and mathematical models to make their predictions.

Predictions are essential in scientific research as they help guide experimental design, data collection, and analysis. They serve as a starting point for investigations and provide a framework for evaluating the accuracy of scientific theories and models.

Define Hypothesis

A hypothesis, on the other hand, is a tentative explanation or proposition that seeks to explain a phenomenon or answer a research question. It serves as a starting point for scientific investigations and provides a framework for designing experiments and gathering empirical evidence.

Hypotheses are formulated based on existing knowledge, observations, and theories. They are often derived from previous research findings, logical reasoning, or insights gained from preliminary studies. A hypothesis is a testable statement that can be either supported or refuted through empirical evidence.

Unlike predictions, which focus on specific future outcomes, hypotheses aim to explain the underlying mechanisms or causes behind observed phenomena. They are typically stated in an “if-then” format, suggesting a cause-and-effect relationship between variables.

For example, in the field of psychology, a hypothesis might propose that individuals who receive positive reinforcement for a certain behavior are more likely to repeat that behavior in the future. This hypothesis can then be tested through experiments or observational studies to determine its validity.

Hypotheses play a crucial role in the scientific method as they guide the collection and analysis of data. They provide a framework for researchers to make logical deductions, draw conclusions, and contribute to the existing body of knowledge in their respective fields.

How To Properly Use The Words In A Sentence

When it comes to scientific research and analysis, understanding the distinction between prediction and hypothesis is crucial. While these terms are often used interchangeably, they have distinct meanings and should be used appropriately in a sentence. In this section, we will explore how to effectively use both prediction and hypothesis in a sentence.

How To Use “Prediction” In A Sentence

When using the term “prediction” in a sentence, it is important to convey a sense of anticipation or forecasting. Predictions are statements that suggest what may happen in the future based on existing evidence or patterns. Here are a few examples of how to use “prediction” in a sentence:

  • Scientists predict that global temperatures will continue to rise due to increased greenhouse gas emissions.
  • Based on historical data, economists predict a recession in the next fiscal year.
  • The weather forecast predicts heavy rainfall in the region tomorrow.

As you can see, the word “prediction” is used to indicate an expected outcome or result based on logical reasoning, analysis, or observation. It is often employed in scientific, economic, or weather-related contexts.

How To Use “Hypothesis” In A Sentence

Unlike a prediction, a hypothesis is a proposed explanation or theory that is subject to testing and evaluation. It is an educated guess or assumption that serves as the foundation for scientific inquiry. Here are a few examples of how to use “hypothesis” in a sentence:

  • The researcher formulated a hypothesis to explain the observed phenomenon.
  • Before conducting the experiment, the scientists developed a hypothesis to guide their investigation.
  • The hypothesis suggested that increased exposure to sunlight would enhance plant growth.

As demonstrated in these examples, a hypothesis is typically used in scientific contexts to propose a tentative explanation for a phenomenon or to guide the process of experimentation and observation. It is an essential component of the scientific method and plays a crucial role in advancing knowledge and understanding.

More Examples Of Prediction & Hypothesis Used In Sentences

In this section, we will explore various examples of how the terms “prediction” and “hypothesis” can be used in sentences. These examples will help us grasp a better understanding of the context in which these terms are commonly employed.

Examples Of Using Prediction In A Sentence

  • Based on the current market trends, our prediction is that the stock prices will soar in the next quarter.
  • She made a prediction that the new marketing campaign would significantly boost sales.
  • His accurate prediction of the election outcome impressed the political analysts.
  • The weather forecast predicts heavy rainfall tomorrow, so be prepared.
  • Our prediction is that the demand for renewable energy will continue to rise in the coming years.
  • The economist’s prediction of an economic recession was met with skepticism by some experts.
  • Despite the odds, his prediction of winning the championship turned out to be correct.
  • In her prediction, she foresaw a decline in customer satisfaction due to poor product quality.
  • The scientist’s prediction that the experiment would yield groundbreaking results proved to be accurate.
  • Based on the data analysis, the prediction is that the company’s revenue will double by the end of the year.

Examples Of Using Hypothesis In A Sentence

  • The researcher formulated a hypothesis to test the effects of the new drug on cancer cells.
  • His hypothesis suggests that increased exposure to sunlight leads to higher vitamin D levels.
  • Before conducting the experiment, the scientists developed a hypothesis to guide their research.
  • According to the hypothesis, the higher the temperature, the faster the chemical reaction will occur.
  • She proposed a hypothesis that lack of sleep negatively impacts cognitive performance.
  • The hypothesis states that people who exercise regularly have lower risks of developing heart disease.
  • Through careful observation and analysis, the researcher confirmed his hypothesis about plant growth.
  • The hypothesis that increased stress levels lead to a weakened immune system has been widely studied.
  • Scientists are currently testing a hypothesis that suggests a link between certain foods and allergies.
  • Her hypothesis regarding the impact of social media on mental health sparked a lively debate among experts.

Common Mistakes To Avoid

When it comes to scientific research and analysis, it is crucial to understand the distinction between prediction and hypothesis. Unfortunately, many individuals mistakenly use these terms interchangeably, leading to confusion and potentially flawed conclusions. In order to prevent such errors and ensure accurate scientific discourse, it is important to be aware of the common mistakes made when using prediction and hypothesis incorrectly.

1. Failing To Recognize The Fundamental Difference

One of the most prevalent mistakes is the failure to recognize the fundamental difference between prediction and hypothesis. While both concepts are integral to scientific inquiry, they serve distinct purposes and involve different levels of certainty.

A prediction is a statement or claim about a future event or outcome based on existing knowledge or observations. It is typically derived from patterns or trends identified in data and aims to forecast what is likely to happen. Predictions are often expressed in probabilistic terms, acknowledging the inherent uncertainty associated with future events.

On the other hand, a hypothesis is a proposed explanation or tentative answer to a research question. It is formulated prior to conducting any experiments or gathering data and serves as a starting point for scientific investigation. Hypotheses are testable and falsifiable, allowing researchers to either support or reject them based on empirical evidence.

2. Using Prediction As A Substitute For Hypothesis

Another common mistake is using prediction as a substitute for hypothesis. This error arises when individuals make assumptions about the cause-and-effect relationship between variables without providing a clear rationale or theoretical framework.

For instance, stating that “increased consumption of vitamin C will lead to a decrease in the risk of developing a cold” is a prediction, not a hypothesis. In this case, the statement lacks the necessary explanation of the underlying mechanism or the specific factors that would support or refute the claim.

A hypothesis, in contrast, would involve formulating a more comprehensive statement such as “increased consumption of vitamin C enhances the immune system, leading to a decrease in the risk of developing a cold.” This hypothesis provides a theoretical basis for the expected relationship between vitamin C intake and cold prevention, allowing for further investigation and testing.

3. Overlooking The Role Of Experimentation

One crucial aspect that distinguishes a hypothesis from a prediction is the involvement of experimentation. A hypothesis is typically tested through systematic observation, data collection, and analysis, whereas a prediction focuses on making forecasts without necessarily requiring empirical evidence.

It is a common mistake to overlook the importance of experimentation and treat predictions as equivalent to hypotheses. While predictions can be valuable in guiding research and generating hypotheses, they should not be conflated with the rigorous scientific process of hypothesis testing.

4. Ignoring The Role Of Falsifiability

Falsifiability is a key criterion for a hypothesis, but it does not hold the same significance for predictions. A hypothesis must be formulated in a way that allows for the possibility of being proven false through empirical evidence. If a hypothesis cannot be disproven or tested, it lacks scientific validity.

However, predictions do not necessarily need to be falsifiable. They can be based on probabilities, trends, or observations without the requirement of being disproven. Predictions can be revised or refined based on new information, while hypotheses are subject to the possibility of being rejected or supported by empirical evidence.

5. Neglecting The Importance Of Context

Lastly, it is essential to consider the context in which predictions and hypotheses are used. The appropriate usage of these terms may vary depending on the field of study, research design, or specific scientific discipline.

For example, in the social sciences, predictions are often used to forecast human behavior or societal trends, whereas hypotheses are employed to test theoretical frameworks or explanatory models. Understanding the disciplinary nuances and context-specific conventions is crucial to avoid misinterpretation or confusion when using prediction and hypothesis interchangeably.

By being mindful of these common mistakes, researchers and individuals engaging in scientific discourse can ensure accurate and effective communication, fostering a more robust understanding of the scientific method and its applications.

Context Matters

When it comes to scientific inquiry and research, the choice between prediction and hypothesis is not always straightforward. The context in which these terms are used plays a crucial role in determining which one is more appropriate. Understanding this context is essential for researchers and scientists to effectively communicate their ideas and findings. Let’s delve into the intricacies of this choice and explore some examples of different contexts where the preference between prediction and hypothesis might shift.

1. Experimental Research

In experimental research, where scientists conduct controlled experiments to test their theories, the choice between prediction and hypothesis is often influenced by the nature of the study. In this context, a hypothesis is typically formulated as an educated guess or a tentative explanation for a phenomenon. It serves as the starting point for the research, guiding the experimental design and data analysis. For example, in a study investigating the effects of a new drug on blood pressure, a hypothesis could be formulated as follows:

Hypothesis: The administration of Drug X will lead to a significant decrease in blood pressure compared to a placebo.

On the other hand, predictions in experimental research are often specific statements about the expected outcomes of the experiment. They are derived from the hypothesis and are used to guide data collection and analysis. For instance, a prediction based on the above hypothesis could be:

Prediction: Participants who receive Drug X will show a mean decrease in systolic blood pressure of at least 10 mmHg compared to those who receive the placebo.

Therefore, in the context of experimental research, the choice between prediction and hypothesis depends on whether the researcher is formulating an initial explanation or making specific statements about the expected outcomes.
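To make this concrete, here is a minimal sketch in Python (not from the original article; the trial data are invented purely for illustration) of how the Drug X prediction above could be checked against observed results once the experiment has been run:

```python
import numpy as np

# Hypothetical trial results: decrease in systolic blood pressure (mmHg)
# for participants who received Drug X versus a placebo.
drug_x = np.array([14, 9, 12, 15, 11, 13, 10, 16, 12, 14])
placebo = np.array([2, 1, 3, 0, 4, 2, 1, 3, 2, 1])

# The prediction: the Drug X group shows a mean decrease at least
# 10 mmHg greater than the placebo group.
observed_difference = drug_x.mean() - placebo.mean()

print(f"Observed mean difference: {observed_difference:.1f} mmHg")
print("Prediction supported" if observed_difference >= 10 else "Prediction not supported")
```

Note that the prediction is stated before the data are collected; afterwards it is simply compared with the observed outcome, whereas the underlying hypothesis would still be evaluated with a formal statistical test.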

2. Observational Studies

In observational studies, where researchers observe and analyze existing data without intervening or manipulating variables, the choice between prediction and hypothesis may vary. In this context, hypotheses are often used to propose associations or relationships between variables. For example, in a study examining the relationship between physical activity and mental health, a hypothesis could be:

Hypothesis: There is a positive correlation between physical activity levels and mental well-being.

On the other hand, predictions in observational studies are often based on previous research or theoretical frameworks. They are specific statements about the expected outcomes or patterns in the data. For instance, a prediction based on the above hypothesis could be:

Prediction: Individuals who engage in regular physical activity will report higher levels of subjective well-being compared to those who lead a sedentary lifestyle.

In the context of observational studies, the choice between prediction and hypothesis depends on whether the researcher is proposing a general association between variables or making specific predictions based on existing knowledge.

3. Theoretical Research

In theoretical research, where scientists develop and refine theoretical frameworks or models, the choice between prediction and hypothesis may take a different form. In this context, hypotheses are often used to propose theoretical explanations or mechanisms. For example, in a study exploring the mechanisms of climate change, a hypothesis could be:

Hypothesis: Changes in greenhouse gas concentrations lead to alterations in the Earth’s temperature through the greenhouse effect.

On the other hand, predictions in theoretical research are often derived from the established theories or models. They are specific statements about the expected outcomes or patterns in the system being studied. For instance, a prediction based on the above hypothesis could be:

Prediction: The increase in greenhouse gas emissions will result in a rise in global average temperatures by at least 2 degrees Celsius over the next century.

In the context of theoretical research, the choice between prediction and hypothesis depends on whether the researcher is proposing a theoretical explanation or making specific predictions based on established theories or models.

Exceptions To The Rules

In most cases, the rules for using prediction and hypothesis provide a solid framework for scientific inquiry and logical reasoning. However, there are a few exceptional situations where these rules might not apply in the same way. Let’s explore some of these exceptions and delve into brief explanations and examples for each case.

1. Historical Analysis

In the realm of historical analysis, the application of prediction and hypothesis can be challenging due to the lack of controlled experiments and the inability to directly test hypotheses. Instead, historians often rely on interpretation and inference to understand past events.

For example, when studying ancient civilizations, historians may propose hypotheses to explain the rise and fall of empires based on available evidence. However, these hypotheses are often subject to interpretation and can be influenced by personal biases. While predictions can be made about potential outcomes, they cannot be tested in the same way as in experimental sciences.

2. Complex Systems

Complex systems, such as climate patterns, ecosystems, or the human brain, present another exception to the strict application of prediction and hypothesis. These systems involve numerous interconnected variables and intricate feedback loops, making it difficult to formulate precise predictions or testable hypotheses.

For instance, predicting the exact trajectory of a hurricane or the behavior of a specific species within an ecosystem is a complex task due to the multitude of factors at play. While scientists can develop models and make predictions based on existing data, the inherent complexity of these systems often leads to a margin of error and a level of uncertainty.

3. Unpredictable Events

Some events are inherently unpredictable, rendering the traditional use of prediction and hypothesis ineffective. These events are often characterized by their randomness or chaotic nature, making it impossible to accurately forecast outcomes or formulate hypotheses.

Consider, for instance, the stock market. Despite the use of sophisticated algorithms and mathematical models, predicting stock prices with absolute certainty remains elusive. The interplay of various factors, including global events, investor sentiment, and market psychology, makes it challenging to establish reliable predictions or testable hypotheses.

4. Creative And Artistic Endeavors

In creative and artistic endeavors, the rigid application of prediction and hypothesis may hinder the freedom of expression and innovation. Artists, writers, and musicians often rely on intuition, inspiration, and experimentation to create their works.

For example, a painter may not be able to predict the exact outcome of their artistic process or formulate a hypothesis about the emotional impact of their artwork. Instead, they explore different techniques, colors, and compositions, allowing their creativity to guide them. While some predictions or hypotheses may emerge during the creative process, they are often secondary to the expressive and subjective nature of the art form.

While prediction and hypothesis serve as valuable tools in scientific inquiry and logical reasoning, there are exceptions where their application may not be as straightforward. Historical analysis, complex systems, unpredictable events, and creative endeavors all present unique challenges that deviate from the traditional use of prediction and hypothesis. Recognizing these exceptions allows us to appreciate the diversity of knowledge and the multifaceted nature of human endeavors.

Understanding the distinction between prediction and hypothesis is crucial for any individual seeking to engage in scientific inquiry or critical thinking. While both concepts involve making educated guesses about the future or unknown, their underlying principles and applications differ significantly.

A prediction is a specific statement that anticipates a certain outcome based on existing knowledge or patterns. It is often derived from empirical evidence or logical reasoning and aims to forecast a particular event or phenomenon. Predictions are commonly used in fields such as meteorology, economics, and sports analytics to make informed decisions and plan for the future.

On the other hand, a hypothesis is a tentative explanation or proposition that seeks to explain a phenomenon or answer a research question. It is formulated based on preliminary observations, prior knowledge, and logical reasoning. Hypotheses serve as the foundation for scientific investigations, guiding the design of experiments and data analysis.

While predictions focus on the outcome of a specific event or situation, hypotheses aim to provide a broader understanding of the underlying mechanisms or causes. Predictions are often more straightforward and can be directly tested or validated through observation or experimentation. In contrast, hypotheses require rigorous testing and analysis to evaluate their validity and support or refute them.

Overall, predictions and hypotheses are both valuable tools in different contexts. Predictions help us make informed decisions and anticipate future outcomes, while hypotheses drive scientific exploration and contribute to the advancement of knowledge. By recognizing the distinction between these concepts, we can enhance our critical thinking skills and approach problem-solving with greater clarity and precision.

What’s the Real Difference Between Hypothesis and Prediction

Both hypothesis and prediction fall in the realm of guesswork, but with different assumptions. This write-up elaborates on the differences between hypothesis and prediction.

“There is no justifiable prediction about how the hypothesis will hold up in the future; its degree of corroboration simply is a historical statement describing how severely the hypothesis has been tested in the past.” ― Robert Nozick, American author, professor, and philosopher

A lot of people tend to think that a hypothesis is the same as prediction, but this is not true. They are entirely different terms, though they can be manifested within the same example. They are both entities that stem from statistics, and are used in a variety of applications like finance, mathematics, science (widely), sports, psychology, etc. A hypothesis may be a prediction, but the reverse may not be true.

Also, a prediction may or may not agree with the hypothesis. Confused? Don’t worry, read the hypothesis vs. prediction comparison, provided below with examples, to clear your doubts regarding both these entities.

  • A hypothesis is a kind of guess or proposition regarding a situation.
  • It can be called a kind of intelligent guess or prediction, and it needs to be proved using different methods.
  • Formulating a hypothesis is an important step in experimental design, for it helps to predict things that might take place in the course of research.
  • The strength of the statement is based on how effectively it is proved while conducting experiments.
  • It is usually written in the ‘If-then-because’ format.
  • For example, ‘If Susan’s mood depends on the weather, then she will be happy today, because it is bright and sunny outside.’ Here, Susan’s mood is the dependent variable, and the weather is the independent variable. Thus, a hypothesis helps establish a relationship.
  • A prediction is also a type of guess; in fact, it is guesswork in the true sense of the word.
  • It is not an educated guess like a hypothesis, which is based on established facts.
  • While making a prediction for various applications, you have to take into account all the current observations.
  • It can be testable, but just once. This goes to prove that the strength of the statement is based on whether the predicted event occurs or not.
  • It is harder to define, and it contains many variations, which is probably why it is confused with a fictional guess or forecast.
  • For example, ‘He is studying very hard; he might score an A.’ Here, we are predicting that since the student is working hard, he might score good marks. It is based on an observation and does not establish any relationship.

Factors of Differentiation

  • Structure: A hypothesis has a longer structure; a situation can be interpreted with different kinds of hypotheses (null, alternative, research hypothesis, etc.), and it may need different methods to prove. A prediction mostly has a shorter structure, since it can be a simple opinion based on what you think might happen.
  • Variables and relationships: A hypothesis contains independent and dependent variables and helps establish a relationship between them; it also helps analyze that relationship through different experimentation techniques. A prediction does not contain any variables or relationships, and its analysis is not elaborate; since it is a straightforward probability, it is tested once and done with.
  • Testing: A hypothesis can go through multiple testing stages, and its story does not end with the testing phase; tomorrow your hypothesis could be challenged by someone else, and a contrary proof might arise, so it has a longer time span. A prediction can be proven just once: you predict something; if it occurs, your statement is right, and if it does not, your statement is wrong.
  • Basis: A hypothesis is based on facts, and its results are recorded and used in science and other applications; it is a speculative, testable, educated guess, but it is certainly not fictional. A prediction, even though it is based on observations and existing facts, is linked with forecasting and fiction, because you are purely guessing the outcome; there may or may not be scientific backing, and the person making it may or may not have knowledge of the problem statement.

♦ Consider a statement, ‘If I add some chili powder, the pasta may become spicy’. This is a hypothesis, and a testable statement. You can carry on adding a pinch of chili powder, or a spoon, or two spoons, and so on. The dish may become spicier or pungent, or there may be no reaction at all. The sum and substance is that the amount of chili powder is the independent variable here, and the pasta dish is the dependent variable, which is expected to change with the addition of chili powder. This statement thus establishes and analyzes the relationship between both variables, and you will get a variety of results when the test is performed multiple times. Your hypothesis may even be opposed tomorrow.

♦ Consider the statement, ‘Robert has longer legs, he may run faster’. This is just a prediction. You may have read somewhere that people with long legs tend to run faster. It may or may not be true. What is important here is ‘Robert’. You are talking only of Robert’s legs, so you will test if he runs faster. If he does, your prediction is true, if he doesn’t, your prediction is false. No more testing.

♦ Consider a statement, ‘If you eat chocolates, you may get acne’. This is a simple hypothesis, based on facts, yet it needs to be proven. It can be tested on a number of people. It may be true, it may be false. The fact is, it defines a relationship between chocolates and acne. The relationship can be analyzed and the results can be recorded. Tomorrow, someone might come up with an alternative hypothesis that chocolate does not cause acne. This will need to be tested again, and so on. A hypothesis is thus something that you think happens for a reason.

♦ Consider a statement, ‘The sky is overcast, it may rain today’. A simple guess, based on the fact that it generally rains if the sky is overcast. It may not even be testable, i.e., the sky can be overcast now and clear the next minute. If it does rain, you have predicted correctly. If it does not, you are wrong. No further analysis or questions.

Both hypothesis and prediction need to be effectively structured so that further analysis of the problem statement is easier. Remember that the key difference between the two is the procedure for proving the statements. Also, you cannot state that one is better than the other; this depends entirely on the application at hand.

Difference Between Hypothesis and Prediction

Due to insufficient knowledge, many people mistake a hypothesis for a prediction, which is wrong, as the two are entirely different. A prediction is a forecast of future events, sometimes based on evidence and sometimes on a person’s instinct or gut feeling. The article below elaborates on the difference between hypothesis and prediction.

Comparison Chart

| Basis for Comparison | Hypothesis | Prediction |
| --- | --- | --- |
| Meaning | A proposed explanation for an observable event, made on the basis of established facts, as an introduction to further investigation. | A statement which tells or estimates something that will occur in the future. |
| What is it? | A tentative supposition that is capable of being tested through scientific methods. | A declaration made beforehand about what is expected to happen next in the sequence of events. |
| Guess | Educated guess | Pure guess |
| Based on | Facts and evidence. | May or may not be based on facts or evidence. |
| Explanation | Yes | No |
| Formulation | Takes a long time. | Takes comparatively little time. |
| Describes | A phenomenon, which might be a future or past event/occurrence. | A future occurrence/event. |
| Relationship | States a causal correlation between variables. | Does not state a correlation between variables. |

Definition of Hypothesis

In simple terms, a hypothesis is a sheer assumption which can be proved or disproved. For the purposes of research, the hypothesis is defined as a predictive statement which can be tested and verified using the scientific method. By testing the hypothesis, the researcher can make probability statements about the population parameter. The objective of the hypothesis is to find the solution to a given problem.

A hypothesis is a mere proposition which is put to the test to ascertain its validity. It states the relationship between an independent variable and a dependent variable. The characteristics of a hypothesis are described as follows:

  • It should be clear and precise.
  • It should be stated simply.
  • It must be specific.
  • It should correlate variables.
  • It should be consistent with most known facts.
  • It should be capable of being tested.
  • It must explain what it claims to explain.

Definition of Prediction

A prediction is described as a statement which forecasts a future event and which may or may not be based on knowledge and experience; that is, it can be a pure guess based on a person’s instinct. It is termed an informed guess when it comes from a person who has ample subject knowledge and uses accurate data and logical reasoning to make it.

Regression analysis is one of the statistical techniques used for making predictions.
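As a rough illustration of that point (a sketch added here, not part of the original text; the figures are hypothetical), a simple regression line fitted to past observations can be used to predict a future value:

```python
import numpy as np

# Hypothetical record: monthly sales (units) over the past six months.
months = np.array([1, 2, 3, 4, 5, 6])
sales = np.array([120, 135, 149, 162, 178, 190])

# Fit a straight-line trend by ordinary least squares.
slope, intercept = np.polyfit(months, sales, deg=1)

# Predict next month's sales from the fitted trend.
next_month = 7
predicted_sales = slope * next_month + intercept
print(f"Predicted sales for month {next_month}: {predicted_sales:.0f} units")
```

The prediction here is only as good as the assumption that the past trend continues, which is exactly why a prediction, unlike a hypothesis, stands or falls with the single future outcome it forecasts.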

In many multinational corporations, futurists (predictors) are paid well for making predictions about possible events, opportunities, threats, or risks. To do so, futurists study past and current events in order to forecast future occurrences. Prediction also has a great role to play in statistics, where it is used to draw inferences about a population parameter.

Key Differences Between Hypothesis and Prediction

The difference between hypothesis and prediction can be drawn clearly on the following grounds:

  • A propounded explanation for an observable occurrence, made on the basis of established facts as an introduction to further study, is known as a hypothesis. A statement which tells or estimates something that will occur in the future is known as a prediction.
  • The hypothesis is nothing but a tentative supposition which can be tested by scientific methods. On the contrary, the prediction is a sort of declaration made in advance about what is expected to happen next in the sequence of events.
  • While the hypothesis is an intelligent guess, the prediction is a wild guess.
  • A hypothesis is always supported by facts and evidence. As against this, predictions are based on the knowledge and experience of the person making them, though not always.
  • A hypothesis always has an explanation or reason, whereas a prediction does not have any explanation.
  • Hypothesis formulation takes a long time. Conversely, making a prediction about a future happening does not take much time.
  • A hypothesis describes a phenomenon, which may be a future or a past event, unlike a prediction, which always anticipates the happening or non-happening of a certain event in the future.
  • The hypothesis states the relationship between the independent variable and the dependent variable. On the other hand, the prediction does not state any relationship between variables.

To sum up, a prediction is merely a conjecture to discern the future, while a hypothesis is a proposition put forward as an explanation. The former can be made by anyone, whether or not they have knowledge in the particular field. On the flip side, a hypothesis is made by a researcher to discover the answer to a certain question. Further, a hypothesis has to pass various tests to become a theory.

Hypothesis vs. Prediction

What's the Difference?

Hypothesis and prediction are both important components of the scientific method, but they serve different purposes. A hypothesis is a proposed explanation or statement that can be tested through experimentation or observation. It is based on prior knowledge, observations, or theories and is used to guide scientific research. On the other hand, a prediction is a specific statement about what will happen in a particular situation or experiment. It is often derived from a hypothesis and serves as a testable outcome that can be confirmed or refuted through data analysis. While a hypothesis provides a broader framework for scientific inquiry, a prediction is a more specific and measurable expectation of the results.

| Attribute | Hypothesis | Prediction |
| --- | --- | --- |
| Definition | A proposed explanation or answer to a scientific question | An educated guess about what will happen in a specific situation or experiment |
| Role | Forms the basis for scientific investigation and experimentation | Helps guide the design and conduct of experiments |
| Testability | Can be tested through experiments or observations | Can be tested to determine its accuracy or validity |
| Scope | Broader in nature, often explaining a phenomenon or relationship | Specific to a particular situation or experiment |
| Formulation | Based on prior knowledge, observations, and data analysis | Based on prior knowledge, observations, and data analysis |
| Outcome | Can be supported or rejected based on evidence | Can be confirmed or disproven based on the observed results |
| Level of Certainty | Less certain than a theory, but can become more certain with supporting evidence | Less certain than a theory, but can become more certain with supporting evidence |

Further Detail

Introduction

When it comes to scientific research and inquiry, two important concepts that often come into play are hypothesis and prediction. Both of these terms are used to make educated guesses or assumptions about the outcome of an experiment or study. While they share some similarities, they also have distinct attributes that set them apart. In this article, we will explore the characteristics of hypothesis and prediction, highlighting their differences and similarities.

A hypothesis is a proposed explanation or statement that can be tested through experimentation or observation. It is typically formulated based on existing knowledge, observations, or theories. A hypothesis is often used as a starting point for scientific research, as it provides a framework for investigation and helps guide the research process.

One of the key attributes of a hypothesis is that it is testable. This means that it can be subjected to empirical evidence and observations to determine its validity. A hypothesis should be specific and measurable, allowing researchers to design experiments or gather data to either support or refute the hypothesis.

Another important aspect of a hypothesis is that it is falsifiable. This means that it is possible to prove the hypothesis wrong through experimentation or observation. Falsifiability is crucial in scientific research, as it ensures that hypotheses can be objectively tested and evaluated.

Hypotheses can be classified into two main types: null hypotheses and alternative hypotheses. A null hypothesis states that there is no significant relationship or difference between variables, while an alternative hypothesis proposes the existence of a relationship or difference. These two types of hypotheses are often used in statistical analysis to draw conclusions from data.
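For readers who want to see how this looks in practice, here is a minimal, hypothetical Python sketch (not from the original article) of testing a null hypothesis against an alternative with a two-sample t-test on simulated data:

```python
import numpy as np
from scipy import stats

# Simulated reaction times (ms) for two groups.
# H0 (null): the two groups have the same mean reaction time.
# H1 (alternative): the group means differ.
rng = np.random.default_rng(seed=0)
group_a = rng.normal(loc=250, scale=20, size=30)
group_b = rng.normal(loc=238, scale=20, size=30)

# Two-sample t-test comparing the group means.
t_stat, p_value = stats.ttest_ind(group_a, group_b)

alpha = 0.05  # conventional significance level
if p_value < alpha:
    print(f"p = {p_value:.3f}: reject H0 in favour of H1")
else:
    print(f"p = {p_value:.3f}: fail to reject H0")
```

Whether H0 is rejected depends on the simulated data; the point is simply that the pair of hypotheses is stated first and then evaluated against evidence.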

In summary, a hypothesis is a testable and falsifiable statement that serves as a starting point for scientific research. It is specific, measurable, and can be either a null or alternative hypothesis.

While a hypothesis is a proposed explanation or statement, a prediction is a specific outcome or result that is anticipated based on existing knowledge or theories. Predictions are often made before conducting an experiment or study and serve as a way to anticipate the expected outcome.

Unlike a hypothesis, a prediction is not necessarily testable or falsifiable on its own. Instead, it is used to guide the research process and provide a basis for comparison with the actual results obtained from the experiment or study. Predictions can be based on previous research, theoretical models, or logical reasoning.

One of the key attributes of a prediction is that it is specific and precise. It should clearly state the expected outcome or result, leaving little room for ambiguity. This allows researchers to compare the prediction with the actual results and evaluate the accuracy of their anticipated outcome.

Predictions can also be used to generate hypotheses. By making a prediction and comparing it with the actual results, researchers can identify discrepancies or unexpected findings. These observations can then be used to formulate new hypotheses and guide further research.

In summary, a prediction is a specific anticipated outcome or result that is not necessarily testable or falsifiable on its own. It serves as a basis for comparison with the actual results obtained from an experiment or study and can be used to generate new hypotheses.

Similarities

While hypotheses and predictions have distinct attributes, they also share some similarities in the context of scientific research. Both hypotheses and predictions are based on existing knowledge, observations, or theories. They are both used to make educated guesses or assumptions about the outcome of an experiment or study.

Furthermore, both hypotheses and predictions play a crucial role in the scientific method. They provide a framework for research, guiding the design of experiments, data collection, and analysis. Both hypotheses and predictions are subject to evaluation and revision based on empirical evidence and observations.

Additionally, both hypotheses and predictions can be used to generate new knowledge and advance scientific understanding. By testing hypotheses and comparing predictions with actual results, researchers can gain insights into the relationships between variables, uncover new phenomena, or challenge existing theories.

Overall, while hypotheses and predictions have their own unique attributes, they are both integral components of scientific research and inquiry.

In conclusion, hypotheses and predictions are important concepts in scientific research. While a hypothesis is a testable and falsifiable statement that serves as a starting point for investigation, a prediction is a specific anticipated outcome or result that guides the research process. Hypotheses are specific, measurable, and can be either null or alternative, while predictions are precise and serve as a basis for comparison with actual results.

Despite their differences, hypotheses and predictions share similarities in terms of their reliance on existing knowledge, their role in the scientific method, and their potential to generate new knowledge. Both hypotheses and predictions contribute to the advancement of scientific understanding and play a crucial role in the research process.

By understanding the attributes of hypotheses and predictions, researchers can effectively formulate research questions, design experiments, and analyze data. These concepts are fundamental to the scientific method and are essential for the progress of scientific research and inquiry.

Understanding Hypotheses and Predictions

Hypotheses and predictions are different components of the scientific method. The scientific method is a systematic process that helps minimize bias in research and begins by developing good research questions.

Research Questions

Descriptive research questions are based on observations made in previous research or in passing. This type of research question often quantifies these observations. For example, while out bird watching, you notice that a certain species of sparrow made all its nests with the same material: grasses. A descriptive research question would be “On average, how much grass is used to build sparrow nests?”

Descriptive research questions lead to causal questions. This type of research question seeks to understand why we observe certain trends or patterns. If we return to our observation about sparrow nests, a causal question would be “Why are the nests of sparrows made with grasses rather than twigs?”

In simple terms, a hypothesis is the answer to your causal question. A hypothesis should be based on a strong rationale that is usually supported by background research. From the question about sparrow nests, you might hypothesize, “Sparrows use grasses in their nests rather than twigs because grasses are the more abundant material in their habitat.” This abundance hypothesis might be supported by your prior knowledge about the availability of nest building materials (i.e. grasses are more abundant than twigs).

On the other hand, a prediction is the outcome you would observe if your hypothesis were correct. Predictions are often written in the form of “if, and, then” statements, as in, “if my hypothesis is true, and I were to do this test, then this is what I will observe.” Following our sparrow example, you could predict that, “If sparrows use grass because it is more abundant, and I compare areas that have more twigs than grasses available, then, in those areas, nests should be made out of twigs.” A more refined prediction might alter the wording so as not to repeat the hypothesis verbatim: “If sparrows choose nesting materials based on their abundance, then when twigs are more abundant, sparrows will use those in their nests.”

As you can see, the terms hypothesis and prediction are different and distinct even though, sometimes, they are incorrectly used interchangeably.

Let us take a look at another example:

Causal Question:  Why are there fewer asparagus beetles when asparagus is grown next to marigolds?

Hypothesis: Marigolds deter asparagus beetles.

Prediction: If marigolds deter asparagus beetles, and we grow asparagus next to marigolds, then we should find fewer asparagus beetles when asparagus plants are planted with marigolds.

A final note

It is exciting when the outcome of your study or experiment supports your hypothesis. However, it can be equally exciting if this does not happen. There are many reasons why you can have an unexpected result, and you need to think about why this occurred. Maybe you had a problem with your methods, but on the flip side, maybe you have just discovered a new line of evidence that can be used to develop another experiment or study.

What is the Difference Between Hypothesis and Prediction

The main difference between hypothesis and prediction is that a hypothesis proposes an explanation for something which has already happened, whereas a prediction proposes something that might happen in the future.

Hypothesis and prediction are two significant concepts that give possible explanations for various occurrences or phenomena. As a result, one may be able to draw conclusions that assist in formulating new theories, which can affect future advancements in human civilization. Thus, both these terms are common in the fields of science, research, and logic. In addition, to make a prediction, one needs evidence or observation, whereas one can formulate a hypothesis based on limited evidence.

Key Areas Covered

1. What is a Hypothesis – Definition, Features
2. What is a Prediction – Definition, Features
3. What is the Relationship Between Hypothesis and Prediction – Outline of Common Features
4. What is the Difference Between Hypothesis and Prediction – Comparison of Key Differences

Key Terms: Hypothesis, Logic, Prediction, Theories, Science

What is a Hypothesis

By definition, a hypothesis refers to a supposition or a proposed explanation made on the basis of limited evidence as a starting point for further investigation. In brief, a hypothesis is a proposed explanation for a phenomenon. Nevertheless, it is based on the limited evidence, facts, or information one has about the underlying causes of the problem. However, it can be further tested by experimentation. Therefore, it is yet to be proven correct.

The term hypothesis is thus used more often in the fields of science and research than in general usage. In science, it is termed a scientific hypothesis. However, a scientific hypothesis has to be tested by a scientific method. Moreover, scientists usually base scientific hypotheses on previous observations which cannot be explained by existing scientific theories.

Figure 01: A Hypothesis on Colonial Flagellate

In research studies, a hypothesis is based on independent and dependent variables. This is known as a ‘working hypothesis’: it is provisionally accepted as a basis for further research and often serves as a conceptual framework in qualitative research. As a result, based on the facts gathered in research, the hypothesis tends to create links or connections between the different variables. Thus, it works as a source for a more concrete scientific explanation.

Hence, one can formulate a theory based on the hypothesis to guide the investigation of the problem. A strong hypothesis can create effective predictions based on reasoning. As a result, a hypothesis can predict the outcome of an experiment in a laboratory or the observation of a natural phenomenon. Hence, a hypothesis is known as an ‘educated guess’.

What is a Prediction

A prediction can be defined as a thing predicted or a forecast. Hence, a prediction is a statement about something that might happen in the future. Thus, one can guess as to what might happen based on the existing evidence or observations.

In the general context, although it is difficult to predict the uncertain future, one can draw conclusions as to what might happen in the future based on the observations of the present. This will assist in avoiding negative consequences in the future when there are dangerous occurrences in the present.

Moreover, there is a link between hypothesis and prediction. A strong hypothesis will enable possible predictions. This link between a hypothesis and a prediction can be clearly observed in the field of science.

Figure 2: Weather Predictions

Hence, in scientific and research studies, a prediction is a specific design that can be used to test one’s hypothesis. Thus, the prediction is the outcome one would observe if the hypothesis were supported by the experiment. Moreover, predictions are often written in the form of “if, then” statements; for example, “if my hypothesis is true, then this is what I will observe.”

Relationship Between Hypothesis and Prediction

  • Based on a hypothesis, one can create a prediction.
  • Also, a hypothesis will enable predictions through the act of deductive reasoning.
  • Furthermore, the prediction is the outcome that can be observed if the hypothesis were supported or proven by the experiment.

Difference Between Hypothesis and Prediction

Hypothesis refers to the supposition or proposed explanation made on the basis of limited evidence, as a starting point for further investigation. On the other hand, prediction refers to a thing that is predicted or a forecast of something. Thus, this explains the main difference between hypothesis and prediction.

Interpretation

Hypothesis will lead to explaining why something happened while prediction will lead to interpreting what might happen according to the present observations. This is a major difference between hypothesis and prediction.

Another difference between hypothesis and prediction is that a hypothesis provides answers or conclusions about a phenomenon, leading to a theory, while a prediction provides assumptions about the future, or a forecast.

While a hypothesis is directly related to statistics, a prediction, though it may invoke statistics, will only bring forth probabilities.

Moreover, a hypothesis looks back to the beginning or causes of an occurrence, while a prediction looks forward to a future occurrence.

The ability to be tested is another difference between hypothesis and prediction. A hypothesis can be tested, i.e., it is testable, whereas a prediction cannot be tested until it actually happens.

Hypothesis and prediction are integral components in scientific and research studies. However, they are also used in the general context. Hence, hypothesis and prediction are two distinct concepts although they are related to each other as well. The main difference between hypothesis and prediction is that hypothesis proposes an explanation to something which has already happened whereas prediction proposes something that might happen in the future.

Hypothesis vs Prediction: Differences and Comparison

September 8, 2023 by Chukwuemeka Gabriel

A hypothesis is a tentative conjecture, testable through observation, investigation, or experimentation, that explains an observation, phenomenon, or scientific problem.

A prediction is a statement of what will happen in the future. Based on the recent, repeated outcomes of an event, one can predict what will happen next.

A prediction is basically a forecast: a statement of what will happen in the future based on collected data, evidence, or previous knowledge.

A hypothesis is an assumption considered to be true for the purpose of argument or investigation.

In the academic world, hypotheses and predictions are important elements of the scientific process. However, there are key differences between a hypothesis and a prediction, and we will look at those differences in this article.

Hypothesis vs Prediction

What Is a Hypothesis?

A hypothesis is a tentative conjecture, testable through scientific experimentation, observation, or investigation, that explains a phenomenon, observation, or scientific problem.

It's an assumption considered to be true for the purpose of argument or investigation: a statement that answers a proposed question using established facts and research.

Researchers form hypotheses to explain a particular phenomenon, and they state them before starting their experiments so that the experiments can test them.

A hypothesis is an assumption that can be supported or refuted. It's considered a predictive statement for research and can be tested using scientific methods.


What Is a Prediction?

A prediction is a statement that describes what will happen in the future. Based on the recent, repeated outcomes of an event, one can predict what will happen next.

It's a statement of what will happen in the future based on collected data, evidence, or previous knowledge.

A prediction can be a guess based on collected data or on instinct. If you have noticed an occurrence happening regularly, you are likely to make correct predictions about that occurrence.

For instance, if a mailman comes to your house each day at exactly 3 p.m. for five days straight, you might predict the time the mailman will come to your house the next day.

Your prediction that the mailman will arrive at your house at exactly 3 p.m. is based on your previous observations.

A prediction is considered an informed guess when it comes from someone with knowledge of the subject. Accurate data and logical reasoning grounded in close observation lead to more probable predictions.

Hypothesis vs Prediction: Differences between Hypothesis and Prediction

A hypothesis is an educated guess about a scientific problem or phenomenon, while a prediction is a statement of what will happen in the future. In science, hypotheses are based on current knowledge and understanding.

A hypothesis is an assumption considered to be true for the purpose of argument or investigation.

Predictions describe future events or outcomes; they are statements of what will happen, based on collected data, evidence, or previous knowledge.


Hypothesis vs Prediction: Comparison Chart

Definition
  Hypothesis: A tentative conjecture that explains a phenomenon, observation, or scientific problem and can be tested through scientific experimentation, observation, or investigation.
  Prediction: A statement of what will happen in the future, based on collected data, evidence, or previous knowledge.

Based on
  Hypothesis: Facts and evidence.
  Prediction: Collected data, previous observations, knowledge, facts, or evidence.

Formulation
  Hypothesis: Usually takes a long time.
  Prediction: Generally takes comparatively little time.

Relationship between variables
  Hypothesis: States a causal correlation between variables.
  Prediction: Does not state a correlation between variables.

Type of guess
  Hypothesis: Educated guess or reasoned assumption.
  Prediction: Pure guess.

Hypothesis vs Prediction: Similarities between Hypothesis and Prediction

Both a hypothesis and a prediction are statements about the relationship between variables or the outcome of an event. Both can be tested, and then supported or rejected by evidence, for the purposes of further research.

While predictions describe potential future events, hypotheses are statements describing potential cause-and-effect relationships.


Hypothesis vs Prediction: Tips on How to Write a Hypothesis

Here is how to write a hypothesis in a few simple steps.

State your research question

First, state your research question clearly and precisely. The hypothesis should offer an answer to that problem statement or research question.

Next, once you clearly understand the scope and limitations of your chosen topic, narrow the question into a focused, topic-specific problem. This will shape your hypothesis and any further research you need to conduct to collect data.

Conduct preliminary research

Once you have framed your study, carry out preliminary research. Review earlier hypotheses, relevant academic articles, and any existing data.

Identify your variables

A hypothesis typically involves variables, so it's important to specify the relationship between your independent and dependent variables. Start by identifying both.

Write the first draft

Once you have everything set up, you can then compose your hypothesis.

Start with a rough first draft and refine it toward the statement you want to test. Make sure it clearly names the independent and dependent variables and the connection between them.

Hypothesis vs Prediction: Advantages of Hypothesis

Let’s explore a few advantages of using a hypothesis in scientific research.

  • A hypothesis can be tested and verified through scientific experimentation, observation, or investigation, and then supported or rejected.
  • A hypothesis guides further research, as it suggests which observations and experiments should be carried out.
  • A hypothesis encourages critical thinking and helps to identify cause-and-effect relationships.

Disadvantages of Hypothesis

  • A hypothesis can narrow the scope of a study, so research findings may be limited by it.
  • Findings may also fail to generalize if the hypothesis applies only to a specific population.


Hypothesis vs Prediction: Advantages of Prediction

  • Predictions can be used by both people and organizations to plan for specific future events such as weather or market trends.
  • Predictions help in decision-making by providing insight into the potential results of various actions.
  • They help in risk management: with predictions, stock market fluctuations or natural disasters can sometimes be anticipated.
  • They can assist in allocating resources such as inventory, budget, and workforce.

Disadvantages of Predictions

  • Predictions can be inaccurate and should not be relied on completely.
  • They can also be influenced by bias, which leads to inaccurate predictions.

Both a hypothesis and a prediction are statements about the relationship between variables or the outcome of an event.

A hypothesis is an educated guess about a scientific problem or phenomenon, while a prediction is a statement of what will happen in the future, based on the recent, repeated outcomes of an event.




Difference Between Hypothesis and Prediction

Key Difference: A hypothesis is an uncertain explanation regarding a phenomenon or event. It is widely used as a basis for conducting tests, and the results of those tests determine whether the hypothesis is accepted or rejected. Prediction, on the other hand, is generally associated with a non-scientific guess: it states the outcome of future events based on observation, experience, or even a hypothesis. A hypothesis can also be described, in terms of prediction, as a type of prediction that can be tested.


Example of a hypothesis:

“I think that the leaves of this plant became discolored due to lack of sunlight.”

In this sentence, one can easily sense an element of guesswork. However, it is an educated guess, which is why a hypothesis is also known as an educated guess. This hypothesis can be tested by various scientific methods or through further investigation.

Prediction is generally used in the non-scientific world to describe the outcome of future events. It is also referred to as a forecast, and in many cases it is not based on any particular experience or knowledge.

For example: “If I buy a lottery ticket today, I will win.” In this example, a statement is made about the future, but it cannot be tested before it actually occurs. Therefore, it is termed a prediction.


Comparison between Hypothesis and Prediction:

Definition
  Hypothesis: An uncertain explanation regarding a phenomenon or event. It is widely used as a basis for conducting tests, and the results of those tests determine whether the hypothesis is accepted or rejected.
  Prediction: Generally associated with a non-scientific guess. It states the outcome of future events based on observation, experience, or even a hypothesis.

Origin
  Hypothesis: From the Greek hypotithenai, meaning "to put under" or "to suppose."
  Prediction: From the Latin praedict-, "made known beforehand, declared."

Proving methodology
  Hypothesis: Different experiments can lead to different results, so a hypothesis can be supported or rejected depending on the method the scientists use.
  Prediction: A prediction based on non-scientific notions can only be tested once the associated event occurs; a scientific prediction is based on a hypothesis and can be tested.

Supported by reasoning
  Hypothesis: Yes.
  Prediction: Depends.

Example
  Hypothesis: "Ultraviolet light may cause skin cancer."
  Prediction: "Leaves will change color when the next season arrives."


David A. Rosenbaum Ph.D.

Hypotheses Versus Predictions

Hypotheses and predictions are not the same thing.

Posted January 12, 2018


Blogs are not typically places where professors post views about arcane matters. But blogs have the advantage of providing places to convey quick messages that may be of interest to selected parties. I've written this blog to point students and others to a spot where a useful distinction is made that, as far as I know, hasn't been made before. The distinction concerns two words that are used interchangeably though they shouldn't be. The words are hypothesis (or hypotheses) and prediction (or predictions).

It's not uncommon to see these words swapped for each other willy-nilly, as in, "We sought to test the hypothesis that the two groups in our study would remember the same number of words," or "We sought to test the prediction that the two groups in our study would remember the same number of words." Indifference to the contrast in meaning between "hypothesis" and "prediction" is unfortunate, in my view, because "hypothesis" and "prediction" (or "hypotheses" and "predictions") mean very different things. A student proposing an experiment, or an already-graduated researcher doing the same, will have more gravitas if s/he states a hypothesis from which a prediction follows than if s/he proclaims a prediction from thin air.

Consider the prediction that the time for two balls to drop from the Tower of Pisa will be the same if the two balls have different mass. This is the famous prediction tested (or allegedly tested) by Galileo. This experiment, one of the first in the history of science, was designed to test two contrasting predictions. One was that the time for the two balls to drop would be the same. The other was that the time for the heavier ball to drop would be shorter. (The third possibility, that the lighter ball would drop more quickly, was logically possible but not taken seriously.) The importance of the predictions came from the hypotheses on which they were based. Those hypotheses couldn't have been more different. One stemmed from Aristotle and had an entire system of assumptions about the world's basic elements, including the idea that motion requires a driving force, with the force being greater for a heavier object than a lighter one, in which case the heavier object would land first. The other hypothesis came from an entirely different conception which made no such assumptions, as crystallized (later) by Newton. It led to the prediction of equivalent drop times. Dropping two balls and seeing which, if either, landed first was a more important experiment if it was motivated by different hypotheses than if it was motivated by two different off-the-cuff predictions. Predictions can be ticked off by a monkey at a typewriter, so to speak. Anyone can list possible outcomes. That's not good (interesting) science.

Let me say this, then, to students or colleagues reading this (some of whom might be people to whom I give the URL for this blog): Be cognizant of the distinction between "hypotheses" and "predictions." Hypotheses are claims or educated guesses about the world or the part of it you are studying. Predictions are derived from hypotheses and define opportunities for seeing whether expected consequences of hypotheses are observed. Critically, if a prediction is confirmed — if the data agree with the prediction — you can say that the data are consistent with the prediction and, from that point onward you can also say that the data are consistent with the hypothesis that spawned the prediction. You can't say that the data prove the hypothesis, however. The reason is that any of an infinite number of other hypotheses might have caused the outcome you obtained. If you say that a given data pattern proves that such-and-such hypothesis is correct, you will be shot down, and rightly so, for any given data pattern can be explained by an infinite number of possible hypotheses. It's fine to say that the data you have are consistent with a hypothesis, and it's fine for you to say that a hypothesis is (or appears to be) wrong because the data you got are inconsistent with it. The latter outcome is the culmination of the hypothetico-deductive method, where you can say that a hypothesis is, or seems to be, incorrect if you have data that violates it, but you can never say that a hypothesis is right because you have data consistent with it; some other hypothesis might actually correspond to the true explanation of what you found. By creating hypotheses that lead to different predictions, you can see which prediction is not supported, and insofar as you can make progress by rejecting hypotheses, you can depersonalize your science by developing hypotheses that are worth disproving. The worth of a hypothesis will be judged by how resistant it is to attempts at disconfirmation over many years by many investigators using many methods.

Some final comments.... First, hypotheses don't predict; people do. You can say that a prediction arose from a hypothesis, but you can't say, or shouldn't say, that a hypothesis predicts something.

Second, beware of the admonition that hypotheses are weak if they predict no differences. Newtonian mechanics predicts no difference in the landing times of heavy and light objects dropped from the same height at the same time. The fact that Newtonian mechanics predicts no difference hardly means that Newtonian mechanics is lightweight. Instead, the prediction of no difference in landing times demands creation of extremely sensitive experiments. Anyone can get no difference with sloppy experiments. By contrast, getting no difference when a sophisticated hypothesis predicts none and when one has gone to great lengths to detect even the tiniest possible difference ... now that's good science.

Third and finally, according to the hypothesis that a blog about hypotheses versus predictions will prove informative, the prediction that follows is that those who read and heed this blog will exhibit less confusion about which term to use when. More important, they will exhibit greater gravitas and deeper thoughtfulness as they generate their hypotheses and subsequent predictions. I hope this blog will prove useful. Its utility will be judged by how long it takes to disconfirm the prediction I have just advanced.


David A. Rosenbaum, Ph.D. , is a cognitive psychologist and a Distinguished Professor of Psychology at the University of California, Riverside.


Hypotheses versus predictions

Once upon a time there was a healthy scientific community on Twitter where we discussed science ideas and promoted our research. But alas, this community has broken apart since Twitter became X.

Here is a series of tweets I made in response to a poll by Josh Cashaback about whether we make distinctions between hypotheses and predictions in our papers and grants.

A hypothesis is a mechanism or theory that you are testing. It should be testable in a variety of ways and species. If it depends on your measurements, it is not a hypothesis. A prediction is how your specific experimental conditions and measurements will play out if the hypothesis is true; that is, when I do X, Y will happen. A prediction alone is not a scientific hypothesis, because the same results could be interpreted in different ways in terms of mechanism or theory. The hypothesis tells me WHY you made a particular prediction.

Ideally, when you’re proposing research, you want to test your hypothesis in different ways too. This is why we need a body of literature across multiple labs, and not just a single study, to truly test a hypothesis.


Hypothesis vs Prediction

If you have ever studied science in English, you will probably know the words “hypothesis” and “prediction.” Many people think that these two words mean the same thing. However, they actually have some small but important differences. The following article will define “hypothesis” and “prediction” and use some examples to help you understand how they are different and how to use them in your daily life.

A “hypothesis” (pronounced /haɪˈpɑːθəsɪs/) is “an idea or explanation of something that is based on a few known facts but that has not yet been proved to be true or correct” (countable noun). A synonym for this meaning of “hypothesis” is “theory.” You can “formulate/make a hypothesis,” “confirm a hypothesis,” “[have] a hypothesis about [something],” and “support a hypothesis.”

In a science experiment, for example, you may know a few facts already, like how baking soda and vinegar react when put together. Based on the facts you know, you can make a hypothesis about the result of the experiment. Then you do the experiment to try and confirm your hypothesis.

A second definition of “hypothesis” is “guesses and ideas that are not based on certain knowledge” (uncountable noun). In this sense, “hypothesis” means the same thing as “speculation.” For example, someone has been murdered but you do not know anything about the case: you might “engage in hypothesis” or “speculate” by guessing what might have happened, even though you know no details.

The plural of “hypothesis” is “hypotheses” (/haɪˈpɑːθəsiːz/).

A “prediction” (pronounced /prɪˈdɪkʃn/) is “a statement that says what you think will happen; the act of making such a statement” (countable and uncountable noun). Use this word in collocations such as: “[to make] a prediction,” “[somebody’s] prediction,” and “[to confirm] a prediction.”

Predictions are often used to discuss trends or patterns. For example, economists that study the stock market can make a prediction, based on current trends and past evidence, that a company’s stock will rise or fall.

Use both “hypothesis” and “prediction” to talk about future events that have not yet happened. But a hypothesis is often someone’s opinion. This opinion is often based only on partial evidence rather than a complete set of facts. You can also test a hypothesis. A hypothesis must include a “because.” A sample hypothesis could be written, “[This] will happen in the experiment, because of [these things] that I already know.” To use a more concrete example, you could hypothesize that “Baking soda and vinegar will cause the model volcano to erupt because these two compounds react very strongly when combined.”

Unlike a hypothesis, a prediction is based on reason and uses logic to think about what might happen in the future. Predictions are formed from past and present patterns and observations. A prediction statement may say, “[This] will happen because of [these things] that usually lead to [this result].” In a more concrete way, you could predict that “There will be a rainbow because it is raining and the sun is shining at the same time.”

A hypothesis can be correct or incorrect. That is, in science you could make a hypothesis but have it proven wrong. It still remains a hypothesis. A prediction, on the other hand, must be correct for it to remain a prediction. A correct hypothesis could also be called a prediction. Another way to think about these two concepts is thinking about a prediction as a guess, and a hypothesis as an explanation.

As you can see, there are quite a few ways of thinking about hypotheses and predictions. Hypotheses are most often used only in science. Predictions are usually used outside of science. A hypothesis can also be called an “educated guess.” A prediction can also be called a “forecast,” like with weather: Weathermen make predictions about tomorrow’s weather based on current weather patterns in a specific region.





Open access. Published: 17 August 2024

Temporal regularities shape perceptual decisions and striatal dopamine signals

Matthias Fritsche (ORCID: orcid.org/0000-0001-5835-9057), Antara Majumdar, Lauren Strickland, Samuel Liebana Garcia, Rafal Bogacz (ORCID: orcid.org/0000-0002-8994-1661) & Armin Lak (ORCID: orcid.org/0000-0003-1926-5458)

Nature Communications, volume 15, Article number: 7093 (2024)


  • Learning algorithms
  • Neural circuits

Perceptual decisions should depend on sensory evidence. However, such decisions are also influenced by past choices and outcomes. These choice history biases may reflect advantageous strategies to exploit temporal regularities of natural environments. However, it is unclear whether and how observers can adapt their choice history biases to different temporal regularities, to exploit the multitude of temporal correlations that exist in nature. Here, we show that male mice adapt their perceptual choice history biases to different temporal regularities of visual stimuli. This adaptation was slow, evolving over hundreds of trials across several days. It occurred alongside a fast non-adaptive choice history bias, limited to a few trials. Both fast and slow trial history effects are well captured by a normative reinforcement learning algorithm with multi-trial belief states, comprising both current trial sensory and previous trial memory states. We demonstrate that dorsal striatal dopamine tracks predictions of the model and behavior, suggesting that striatal dopamine reports reward predictions associated with adaptive choice history biases. Our results reveal the adaptive nature of perceptual choice history biases and shed light on their underlying computational principles and neural correlates.


Introduction

Accurate perceptual decision-making should rely on currently available sensory evidence. However, perceptual decisions are also influenced by factors beyond current sensory evidence, such as past choices and outcomes. These choice history biases are ubiquitous across species and sensory modalities 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 , 14 . While maladaptive in standard randomized psychophysical experiments, choice history biases could be advantageous in natural environments that exhibit temporal regularities 15 . Crucially, however, natural environments exhibit a multitude of different temporal regularities. For instance, a traffic light that recently turned green can be expected to remain green for a while, allowing a driver to maintain speed while passing a junction. Conversely, a yellow traffic light can rapidly change to red, thus prompting a driver to decelerate. The exploitation of these various temporal regularities therefore necessitates adaptation of choices to such sequential patterns. However, the behavioral signatures, computational principles, and neural mechanisms underlying such adaptations remain unclear.

Previous studies have demonstrated that humans and rats can adapt their perceptual choice history biases to different temporal regularities 16 , 17 , 18 . While mice exhibit flexible visual decision-making 3 , 19 , 20 , 21 , 22 , 23 , it is not known whether they can adapt their choice history biases to temporal regularities of the environment. Moreover, the neural mechanisms underlying such adaptive perceptual choice history biases remain unknown. Midbrain dopamine neurons, and the corresponding dopamine release in the striatum, play key roles in learning 24 , 25 , 26 . Dopamine signals have been shown to shape the tendency to repeat previously rewarded choices, both in perceptual and value-based decision tasks 21 , 27 , 28 , 29 , 30 . Yet, the role of striatal dopamine signals in the adaptation to temporal regularities during perceptual decision-making remains unknown.

We trained mice in visual decision-making tasks involving different trial-by-trial temporal regularities, with stimuli likely repeating, alternating, or varying randomly across trials. We show that mice can adapt their perceptual choice history biases to these different temporal regularities to facilitate successful visually-guided decisions. This adaptation was slow, evolving over hundreds of trials across several days. It occurred alongside a fast non-adaptive choice history bias, which was limited to a few trials and not influenced by temporal regularities. We show that these fast and slow trial history effects are well captured by a normative reinforcement learning algorithm with multi-trial belief states, comprising both current trial sensory and previous trial memory states. We subsequently demonstrate signatures of this learning in mice that are naive to the manipulation of temporal regularities, suggesting that this type of learning is a general phenomenon occurring in perceptual decision-making. Finally, we establish that dopamine release in the dorsal striatum follows predictions of the reinforcement learning model, exhibiting key signatures of learning guided by multi-trial belief states. Together, our results demonstrate the adaptive nature of perceptual choice history biases as well as their neural correlates and cast these biases as the result of a continual learning process to facilitate decision-making under uncertainty.

Mice adapt perceptual choice history bias to temporal regularities

We trained male mice ( n  = 10) in a visual decision-making task (Fig.  1a ). In each trial, we presented a grating patch on the left or right side of a computer screen and mice indicated the grating location by steering a wheel with their forepaws, receiving water reward for correct responses. After mice reached expert proficiency on randomized stimulus sequences, we systematically manipulated the trial-by-trial transition probabilities between successive stimuli across different days (Fig.  1b ). In addition to neutral stimulus sequences in which stimulus location was chosen at random [p(“Repeat”) = 0.5], we exposed mice to a repeating environment in which stimulus locations were likely repeated across successive trials [p(“Repeat”) = 0.8], and an alternating environment in which stimulus locations likely switched from the previous trial [p(“Repeat”) = 0.2]. Consequently, in the repeating and alternating environments, the location of the current stimulus was partially predictable given the knowledge of the previous trial. Mice successfully mastered the task, exhibiting high sensitivity to visual stimuli (Fig.  1c ; choice accuracies—neutral: 77.91% ± 0.70; alternating: 79.31% ± 0.75; repeating: 79.73% ± 0.74, mean ± SEM). In order to examine whether mice’s decisions were influenced by the trial history, we conditioned current choices on the previous trial’s successful choice direction (Fig.  1d ). In the neutral environment, mice showed a subtle but consistent tendency to repeat the previous choice (t(9) = 2.31, p  = 0.046, two-sided t-test), in line with previous studies 20 , 21 . Importantly, this choice repetition bias was increased in the repeating environment and decreased in the alternating environment, appropriate to exploit the temporal regularities of stimuli (Fig.  1e ; ΔP(“Right”)—Repeating vs. Neutral: t(9) = 2.89, p  = 0.018; Alternating vs. Neutral: t(9) = −3.65, p  = 0.005, two-sided paired t -tests). Furthermore, the influence of the previous choice was most pronounced when the current stimulus contrast was low, suggesting that mice particularly relied on learned predictions when they were perceptually uncertain. Importantly, exploiting the predictability in the repeating and alternating environments enabled mice to increase their choice accuracy relative to the neutral environment, in which stimuli were not predictable (Fig.  1f ; ΔAccuracy on the most difficult trials [0 and 6.25% contrast]—Repeating vs. Neutral: t(9) = 5.28, p  = 0.0005; Alternating vs. Neutral: t(9) = 2.58, p  = 0.03, two-sided paired t-tests; see Supplementary Fig.  1j for all trials). These findings indicate that mice adapt their reliance on the previous choice to the temporal regularity of the stimulus sequence, thereby improving their perceptual decisions.

Figure 1

a Schematic of the two-alternative visual decision-making task. Head-fixed mice reported the location (left/right) of gratings with varying contrasts by steering a wheel with their forepaws, receiving water reward for correct responses. Adapted from ref. 21 . https://creativecommons.org/licenses/by/4.0 /. b Stimulus sequences of left and right grating presentations followed distinct transition probabilities (left column), interleaved across different days. In the neutral environment, stimulus location was determined randomly (top). In the repeating and alternating environments, stimulus locations were likely repeated (middle) or alternated (bottom) across successive trials. Right column shows example sequences in each environment. Different shades of green and pink denote different stimulus contrasts, varying randomly across trials. c Mice exhibit expert performance, demonstrated by steep psychometric curves with near-perfect performance for easy (high contrast) stimuli (data pooled across environments). Negative and positive contrasts denote stimuli on the left and right sides, and the y-axis denotes the probability of a rightward choice. Black data points show the group average and gray lines indicate individual mice ( n  = 10 in all panels). The black line indicates the best-fitting probabilistic choice model (see “Methods”). Error bars in all panels depict SEMs. d Psychometric curves conditioned on the previous successful choice direction “left” (green) or “right” (pink). In the repeating environment, mice exhibit a bias to repeat their previous successful choice as indicated by a higher probability to respond “right” when the previous response was “right” rather than “left”. Data points show group averages. Lines show predictions of the probabilistic choice model. e Difference between choice probabilities conditioned on the previous trial’s successful response (right minus left, gray area in ( d )). Positive y values indicate a tendency to repeat the previous choice. Data points show group averages. Lines show predictions by the probabilistic choice model. f Choice accuracy on low contrast trials (0 and 6.25% contrast) in three different environments. Mice exhibited a gain in performance in the repeating and alternating over the neutral environment. Gray and black lines depict individual mice and the group average. For choice accuracy on all trials see Supplementary Fig.  1j . * p  < 0.05, ** p  < 0.01, two-sided paired t-tests. Source data are provided as a Source Data file.
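For readers who want to run this kind of conditional analysis on their own choice data, a minimal sketch is given below. It assumes a hypothetical trial table with columns contrast (signed, negative = left), choice (0 = left, 1 = right) and rewarded (0/1); the column names and the helper function are illustrative and are not taken from the paper's code.

```python
import pandas as pd

def repetition_bias(trials: pd.DataFrame) -> pd.Series:
    """P('right') conditioned on the previous trial's successful choice.

    Expects columns: 'contrast' (signed contrast, negative = left),
    'choice' (0 = left, 1 = right) and 'rewarded' (0 or 1).
    Returns, per contrast level, P(right | previous rewarded choice was right)
    minus P(right | previous rewarded choice was left); positive values mean
    a tendency to repeat the previous successful choice.
    """
    df = trials.copy()
    df["prev_choice"] = df["choice"].shift(1)
    df["prev_rewarded"] = df["rewarded"].shift(1)
    df = df.dropna(subset=["prev_choice", "prev_rewarded"])
    df = df[df["prev_rewarded"] == 1]              # condition on previous success
    df["prev_choice"] = df["prev_choice"].astype(int)
    p_right = (df.groupby(["contrast", "prev_choice"])["choice"]
                 .mean()
                 .unstack("prev_choice"))
    return p_right[1] - p_right[0]                 # right-minus-left difference
```

Averaging the returned series over the lowest contrast levels gives a per-session number comparable in spirit to the repetition bias summarized in Fig. 1e, f.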

Adaptation of history bias develops over multiple days and is limited to the previous trial

To further quantify choice history biases beyond the previous trial, we fit a probabilistic choice regression model with history kernels to choices in each environment (see “Methods” and Supplementary Fig.  1 for details and parameter recovery analysis). The history kernels associated with the past seven successful choices confirmed that mice adapted the weight of the previous (i.e., 1-back) choice to different temporal regularities (Fig.  2a and b ). In contrast, the influence of choices made more than one trial ago (2- to 7-back) did not differ across environments, but steadily decayed from an initial attraction by the 2-back choice towards zero for choices made further in the past, generally ceasing to be significantly different from baseline after 5 trials (Fig.  2a ; two-sided permutation tests, Bonferroni-corrected for multiple comparisons). Surprisingly, the relatively small 1-back choice weight in the neutral environment entailed that mice were more likely to repeat their 2-back choice compared to the more recent 1-back choice when acting on random stimulus sequences (Fig.  2a , green line; t(9) = −4.08, p  = 0.003, two-sided t-test of 1- vs 2-back choice weights; see also Supplementary Fig.  2 for individual mice). In addition to the probabilistic choice regression model, we also confirmed this phenomenon using a model-free analysis (Supplementary Fig.  2c , t(9) = −3.49, p  = 0.007, two-sided t-test of model-free 1- vs 2-back choice repetition probability). We will seek to explain this phenomenon with normative learning principles below.

Figure 2

a History kernels comprising the past seven successful choice weights of the probabilistic choice model (“Methods”; see Supplementary Fig.  1 for the full set of regression weights and parameter recovery analysis). While mice are biased by several past choices, only the previous (1-back) choice weight differs across environments. Error bars in all panels depict SEMs. Dots parallel to x-axis indicate weights significantly different from baseline, two-sided permutation test based on shuffled trial history, Bonferroni-corrected, p  < 0.007. Sample size was n  = 10 mice in panels a, b and h. b 1-back successful choice weight across environments for each mouse (gray lines) and group average (black). One-sided t-tests; repeating vs alternating: t(9) = 3.11, p  = 0.006; repeating vs neutral: t(9) = 1.97, p  = 0.04; alternating vs neutral: t(9) = −2.76, p  = 0.01. c 1-back successful choice weights estimated on the first, second, and third day of alternating sessions following a neutral session (n = 6 mice). d Same as in ( c ), but for repeating sessions (n = 6 mice). e Choice history kernels for neutral sessions conditioned on the temporal regularity experienced on the preceding day (solid/circle: repeating; dashed/square: alternating; n = 9 mice in panels e, f and g). f 1-back successful choice weight of neutral sessions preceded by repeating (circle) or alternating (square) sessions in each mouse (gray lines) and across the population (green line). Stars denote results of one and two-sided t-tests (see main text). g Difference in choice probabilities conditioned on the previous trial’s successful choice in neutral sessions preceded by repeating (solid/circle) or alternating (dashed/square) sessions. The differential impact of the previous regularity is most pronounced when current contrast is low. h Difference in choice probabilities conditioned on the previous trial’s successful response split according to whether the previous trial’s stimulus contrast was high (black) or low (gray). Mice are more likely to repeat the previous choice when it was based on a low rather than high contrast stimulus. * p  < 0.05, ** p  < 0.01. Source data are provided as a Source Data file.
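A stripped-down version of such a history-kernel regression can be written as an ordinary logistic regression on lagged choice regressors. This is only a sketch of the general idea, not the paper's actual probabilistic choice model (which includes further regressors, separate kernels for unsuccessful choices, and its own fitting procedure); it reuses the same hypothetical column names as above.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def fit_history_kernel(trials: pd.DataFrame, n_back: int = 7) -> np.ndarray:
    """Logistic choice model with a kernel over past successful choices.

    Past successful choices are coded +1 (right), -1 (left) and 0 (unrewarded),
    so the coefficient of each lagged regressor is that lag's history weight.
    """
    signed_success = np.where(trials["rewarded"] == 1,
                              2 * trials["choice"] - 1, 0)
    X = {"contrast": trials["contrast"].to_numpy()}
    for k in range(1, n_back + 1):
        X[f"succ_{k}back"] = pd.Series(signed_success).shift(k).fillna(0).to_numpy()
    X = pd.DataFrame(X)
    # Weak regularisation as a simple stand-in for the paper's fitting procedure.
    model = LogisticRegression(C=1e6, max_iter=1000).fit(X, trials["choice"])
    return model.coef_.ravel()[1:]                 # weights for lags 1..n_back
```

Plotting the returned weights against lag yields a kernel of the same general shape as in Fig. 2a, up to the terms this sketch omits.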

In contrast to past successful choices, we found that mice tended to repeat past incorrect choices largely irrespective of the environment statistic and the temporal lag, pointing towards a long-term repetition of errors (Supplementary Fig.  1d ). We hypothesized that this repetition of errors was due to prolonged periods of task disengagement in which mice largely ignored visual stimuli and instead repeatedly performed the same choice. We investigated this hypothesis by identifying engaged and disengaged trials using a modeling framework based on hidden Markov Models 31 (HMM, see “Methods” and Supplementary Fig.  3a–f ). When applying the choice history analysis separately to engaged and disengaged trials, we indeed found that mice repeated the previous incorrect choice when they were disengaged but tended to alternate after errors in the engaged state. Consistent with increased repetition of incorrect choices in the disengaged state, mice became more likely to repeat their previous choice when they committed several errors in sequence, indicative of episodes of task disengagement (Supplementary Fig.  3g ). These findings support the hypothesis that long-term choice repetition after errors is strongly driven by task disengagement. Due to the low number of “engaged” error trials in the task (14% ± 0.1 of all trials, mean ± SEM), we focused on successful choice history kernels in the remainder of our analyses.

Overall, our findings indicate that although mice’s choice history biases extend over several past trials, mice only adapt the influence of the previous successful choice to the temporal regularities of the stimulus sequence.

We next sought to investigate how rapidly mice adapted their previous choice weight to temporal regularities. We fit the probabilistic choice model separately to the first, second, and third day of alternating or repeating sessions following a neutral session. Mice slowly and gradually shifted their 1-back weight across days, increasingly alternating or repeating their previous choice with each day in the alternating and repeating environments, respectively (Fig.  2c and d ; F(2,8) = 4.89, p  = 0.04, repeated-measures ANOVA). To further corroborate this slow adaptation, we analyzed neutral sessions that were preceded by either a repeating or alternating session (Fig.  2e ). Consistent with slow adaptation, mice continued to weigh their previous choice according to the temporal regularity they experienced on the previous day, despite the current stimulus sequences being random (Fig.  2f ). That is, mice were biased to repeat their previous choice in a neutral session preceded by a repeating session (t(8) = 2.69, p  = 0.01, one-sided t-test) and biased to alternate their previous choice in a neutral session preceded by an alternating session (t(8) = −1.93, p  = 0.045, one-sided t-test; post-repeating vs. post-alternating: t(8) = 4.18; p  = 0.003, two-sided paired t-test). Moreover, mice most strongly followed the regularity of the previous day when the current contrast was low (Fig.  2g ), suggesting that mice integrate current sensory evidence with a flexible, but slowly acquired prediction based on past experience. Unlike the 1-back choice weight, the 2-back choice weight did not depend on previous exposure to temporal regularities (Supplementary Fig.  4 ). Lastly, choice repetition was also modulated by the previous trial’s stimulus contrast, being more pronounced after successful choices based on low- rather than high contrast stimuli (t(9) = 4.38, p  = 0.002, two-sided paired t-test; Fig.  2h ), similar to previous studies 9 , 21 , 32 . This modulation by past stimulus contrasts gradually decayed over n-back trials (Supplementary Fig.  5 ). We will seek to explain this phenomenon with learning principles below.
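The day-by-day analysis can be sketched by applying the history-kernel fit above to per-day subsets of trials. This again assumes the hypothetical fit_history_kernel helper and an additional day column, neither of which comes from the paper's code.

```python
import pandas as pd

def one_back_weight_by_day(trials: pd.DataFrame) -> pd.Series:
    """1-back successful-choice weight for each day of a session sequence.

    Relies on the fit_history_kernel sketch above; index 0 of its output
    is the 1-back weight.
    """
    return trials.groupby("day", sort=True).apply(lambda d: fit_history_kernel(d)[0])
```

Tracking this quantity over consecutive repeating or alternating days gives the kind of gradual shift shown in Fig. 2c, d.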

In summary, the results show that mice’s visual decisions are biased towards the recent choice history—a bias that decays over the past seven trials. In contrast to this fast-biasing effect of the most recent choices, mice slowly adapted their 1-back choice weight to the temporal regularities of the stimulus sequence over the course of hundreds of trials. Finally, even in the neutral environment mice exhibited a conspicuous reliance on their 1-back choice, repeating it less than the temporally distant 2-back choice.

Multi-trial reinforcement learning explains choice history biases

We next asked whether our findings could be explained by a common underlying computational principle. It has been proposed that even well-trained perceptual decision-makers exhibit choice history biases due to continual updating of choice values 21, 32. In this framework, an agent performs the visual decision-making task by combining its belief about the current stimulus (perception) with stored values for perception-choice pairs, which can be formalized as a partially observable Markov decision process (POMDP 33, 34; Fig. 3a; for a detailed description see “Methods”). In brief, on a given trial the agent estimates the probabilities P L and P R, denoting the probabilistic belief that the stimulus is on the left or right side of the screen (Fig. 3e, dark blue). These estimates are stochastic: they vary across trials even if these trials involve the same stimulus contrast. The agent then multiplies these probabilities with stored values q choice,perception that describe the average previously obtained reward when making a certain choice (left/right) upon observing a particular perceptual state (left/right stimulus). This yields expected values Q L and Q R, describing the expected reward for either choice:

\(Q_L = P_L \cdot q_{L,P_L} + P_R \cdot q_{L,P_R}\)
\(Q_R = P_L \cdot q_{R,P_L} + P_R \cdot q_{R,P_R}\)

Figure 3

a A normative model of decision-making and learning. The single-trial belief state model performs the task by combining its belief about the location of the current visual stimulus (top; dark blue) with stored perception-choice values. The model iteratively updates these values by means of a weighted prediction error (see “Methods”). b Choice history kernels of the best fitting single-trial belief state model. Iterative updating of perception-choice values leads to choice repetition, but the model cannot produce a 1- to 2-back increase in neutral choice weights (green), nor the 1-back choice alternation in the alternating environment (orange). Shaded lines and regions in all panels depict empirical means ± SEMs. c Choice history kernels in neutral sessions following a repeating (solid line) or alternating session (dashed line). Unlike mice, the single-trial model does not exhibit a carryover of 1-back choice weights adapted to the previous regularity. d The multi-trial belief state model not only considers its belief about the current visual stimulus (top; dark blue) but additionally relies on its memory of the previous trial’s successful choice (top; pink), together with a separate set of memory-choice values. e Example belief state. The agent has a strong belief that the current stimulus is on the right side and remembers that the previous trial’s rewarded choice was left. Eye symbol used with permission from the Twemoji project. https://creativecommons.org/licenses/by/4.0/ f Choice history kernels of the best fitting multi-trial belief state model. The model captures the characteristic 1- to 2-back increase in neutral choice weights (green), and the 1-back choice alternation in the alternating environment (orange). g Choice history kernels in neutral sessions following a repeating (solid line) or alternating session (dashed line). The multi-trial model exhibits a carryover of adapted choice history weights. h Psychometric curves of the mice (grey), and the multi-trial model (black). The model accurately captures the mice’s dependence of choice (y-axis) on current contrast ( x- axis). i Difference in choice probabilities conditioned on the previous trial’s successful response, split according to whether the previous trial’s stimulus contrast was high (black) or low (gray). The multi-trial model (lines) captures the mice’s increased tendency to repeat the previous choice when it was based on a low rather than high contrast stimulus (black and gray shaded regions). Source data are provided as a Source Data file.

The agent chooses probabilistically based on these expected values and a softmax decision rule. Following the choice C (left or right), the agent observes the outcome r and computes a prediction error \(\delta\) by comparing the outcome to the expected value of the chosen option, \(Q_C\): \(\delta = r - Q_C\). This prediction error is then used to update the values associated with the chosen option, \(q_{C,P_L}\) and \(q_{C,P_R}\), by weighting the prediction error with a learning rate \(\alpha\) and the beliefs P L and P R:

\(q_{C,P_L} \leftarrow q_{C,P_L} + \alpha \cdot \delta \cdot P_L\)
\(q_{C,P_R} \leftarrow q_{C,P_R} + \alpha \cdot \delta \cdot P_R\)

The above agent has four free parameters: a sensory noise parameter, governing the variability of stimulus estimates P L and P R, decision noise (softmax temperature), as well as learning rates for positive and negative prediction errors (\(\alpha^{+}\) and \(\alpha^{-}\)). We refer to this agent as the single-trial POMDP RL agent because, for making a choice, it only considers its belief about the current trial’s stimulus (perception, P L and P R) and the associated stored perception-choice values q choice,perception. This agent exhibits several notable features 21, 32. First, due to the trial-by-trial updating of perception-choice values, it learns the visual decision-making task from scratch (Supplementary Fig. 6a). Second, the trial-by-trial updating of perception-choice values introduces history dependencies, biasing the agent to repeat recently rewarded choices (Fig. 3b). Finally, the agent recapitulates the dependence of the choice bias on the difficulty of the previous decision: the agent is most likely to repeat a previous successful choice when it was based on a low contrast stimulus, associated with low decision confidence (Fig. 2h; Supplementary Fig. 6b). Crucially, however, the agent does not explain choice history biases across different temporal regularities. In particular, when confronted with neutral (random) stimulus sequences, it produces a monotonically decaying history kernel, instead of the observed increase of choice weights from 1- to 2-back (Fig. 3b, green). Furthermore, in the alternating environment, the agent fails to capture the alternation tendency in the 1-back choice weight (Fig. 3b, orange). Due to this failure to adapt to the alternating temporal regularity, the model underestimates the mice’s behavioral choice accuracy in the alternating environment (Supplementary Fig. 6c). Finally, when transitioning from a repeating or alternating environment into the neutral environment, the agent exhibits no carryover of previously acquired history dependencies, unlike the substantial carryover seen in the empirical data (Fig. 3c).
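To make the mechanics of this single-trial belief-state agent concrete, here is a minimal, illustrative sketch of one trial. The logistic mapping from noisy percept to belief and all parameter values are assumptions made for the example; the value computation, softmax choice, and belief-weighted update follow the description above, but this is not the paper's fitted implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_trial(q, contrast, sigma=0.2, temp=0.2, alpha_pos=0.2, alpha_neg=0.2):
    """One trial of a single-trial belief-state (POMDP) RL agent (sketch).

    q: 2x2 float array of perception-choice values, q[choice, perceived_side],
       with 0 = left and 1 = right.
    contrast: signed stimulus contrast (negative = left, positive = right).
    Updates q in place and returns the chosen side.
    """
    # Noisy percept -> belief that the stimulus is left (p[0]) or right (p[1]).
    percept = contrast + rng.normal(0.0, sigma)
    p_right = 1.0 / (1.0 + np.exp(-percept / sigma))
    p = np.array([1.0 - p_right, p_right])

    # Expected value of each choice: belief-weighted perception-choice values.
    Q = q @ p                                   # Q[c] = sum_s q[c, s] * p[s]

    # Softmax choice; reward if the choice matches the true stimulus side
    # (zero contrast is treated as "left" here purely for simplicity).
    probs = np.exp((Q - Q.max()) / temp)
    probs /= probs.sum()
    choice = rng.choice(2, p=probs)
    reward = float(choice == int(contrast > 0))

    # Prediction error and belief-weighted update of the chosen option's values.
    delta = reward - Q[choice]
    alpha = alpha_pos if delta > 0 else alpha_neg
    q[choice] += alpha * delta * p
    return choice
```

Iterating this over a session, starting from q = np.zeros((2, 2)), qualitatively reproduces the learning-from-scratch behavior and the bias toward repeating recently rewarded choices described above.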

Having established that the single-trial POMDP RL agent is unable to account for the mice’s adaptation to temporal regularities, we considered a simple extension to this model. In particular, we assumed that when determining the value of the current choice options, the agent not only considers its belief about the current stimulus (perception, P L , and P R ) and the associated perception-choice values (Fig.  3d and e , dark blue), but additionally relies on its memory of the previous trial’s successful choice (Fig.  3d and e , pink; M L and M R ). That is, analogous to computing choice values from a belief about the current stimulus, the agent combines its memory of the previous trial’s rewarded choice with a separate set of memory-choice values ( q choice,memory ; Fig.  3d , pink). These memory-choice values describe the expected reward of a particular current choice (left/right) depending on the rewarded choice of the previous trial. The agent thus computes the expected reward for current left and right choice options, Q L and Q R , as the sum of perception-based and memory-based reward expectations:

where P and M represent perceptual and memory belief states, respectively. Following the choice and outcome, the agent updates the perception-choice and memory-choice values associated with the selected choice, using the same learning rate \(\alpha\) :

We refer to this agent as the multi-trial POMDP RL agent, as it considers both its belief about the current trial’s visual stimulus (perception, P L and P R ) and its memory of the previous trial’s successful choice ( M L and M R ) when making a choice in the current trial. Compared to the single-trial agent, the multi-trial agent has only one additional parameter (memory strength), controlling how strongly the agent relies on its memory of the previous choice for current decisions. Similar to the single-trial agent, the multi-trial agent captured the mice’s dependence on the current and previous stimulus contrasts (Fig.  3h and i ). Strikingly, however, the agent was also able to capture the pattern of choice history biases across different temporal regularities. First, for random stimulus sequences, the agent produced the distinctive decrease in 1- compared to 2-back choice weights (Fig.  3f , green). Second, the agent accurately captured the mice’s tendency to alternate the previous choice in the alternating environment (Fig.  3f , orange), and repeat the previous choice in the repeating environment (Fig.  3f , blue), while maintaining similar 2- to 7-back choice weights. Due to its ability to adapt to the alternating regularity, the agent successfully captured the mice’s higher empirical choice accuracy in the alternating compared to the neutral environment (Supplementary Fig.  6c ). Finally, the agent exhibited a substantial carryover of adapted 1-back choice weights when transitioning from the repeating or alternating into the neutral environment (Fig.  3g ). Accordingly, the multi-trial POMDP RL model provided a significantly better fit to the mice’s choice data than the single-trial model (F(1,41) = 12.63, p  = 0.001, F -test; ΔBIC = 8.52). Extending the multi-trial POMDP RL model with exponentially decaying memory, not limited to the 1-back trial, did not further improve the model fit (F(1,40) = 0.37, p  = 0.55, F -test; ΔBIC = −3.41; see “Methods” and Supplementary Fig.  7 ). Importantly, the fit of the multi-trial model was achieved by fitting a single set of parameters to the data of all three temporal regularities, suggesting that the empirical differences in choice history biases arose from a fixed set of learning rules that created different choice dynamics depending on the regularity of the input sequence.
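As an illustration of how the multi-trial agent combines the two belief components, here is a minimal sketch of the expected-value computation; the exact coding of the memory state [M L , M R ] is not reproduced above, so the example values below are our assumption.

```python
import numpy as np

def expected_values(belief, memory, q_percept, q_memory):
    """Expected values [Q_L, Q_R] of the multi-trial agent.

    belief:    [P_L, P_R], belief about the current stimulus side.
    memory:    [M_L, M_R], memory of the previous trial's rewarded choice
               (scaled by a memory-strength parameter; coding assumed here).
    q_percept: 2x2 perception-choice values, q[choice, perceptual_state].
    q_memory:  2x2 memory-choice values,     q[choice, memory_state].
    """
    # Each choice's value is the sum of its perception-based and
    # memory-based reward expectations.
    return q_percept @ np.asarray(belief) + q_memory @ np.asarray(memory)

# Example in the spirit of Fig. 3e: strong belief that the stimulus is on the
# right, and a memory that the previous rewarded choice was left (assumed coding).
belief = [0.1, 0.9]
memory = [0.07, 0.0]
Q_L, Q_R = expected_values(belief, memory, np.full((2, 2), 0.5), np.zeros((2, 2)))
```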

In order to better understand how the multi-trial agent was able to capture our findings, we inspected the trajectories of perception-choice and memory-choice values across the different environments. We found that perception-choice values underwent strong trial-by-trial fluctuations, but remained overall stable across different temporal regularities (Supplementary Fig.  8a ). In contrast, memory-choice values changed slowly over the course of hundreds of trials, and diverged in the different environments (Supplementary Fig.  8b and c ). The slow change in memory-choice values was driven by a subtle reliance on memory, relative to perception, when deciding about the current choice, thereby leading to small updates of memory-choice values. Notably, the updating of perception-choice values is relatively rigid, promoting a tendency to repeat successful choices regardless of the temporal regularity of the environment. Conversely, memory-choice values can grow flexibly to either facilitate or counteract the repetition tendency (Supplementary Fig.  8c ). Since memory only comprised the previous trial’s choice, this facilitating or counteracting effect was limited to the 1-back choice weight. Importantly, in the neutral environment, any history dependency attenuates task performance. In this environment, the multi-trial agent used its memory to counteract the 1-back repetition bias introduced by the updating of perception-choice values, leading to a decreased 1- relative to 2-back choice weight. The reliance on memory thus allowed the agent to become more neutral in its reliance on the 1-back choice, thereby increasing task performance. Finally, the slow trajectory of memory choice values offers an explanation for why mice did not develop a pronounced 1-back repetition bias in the repeating environment (Fig.  3f , blue). Both neutral and alternating environments discourage the model from repeating the 1-back choice, promoting memory-choice values in favor of alternations. Since repeating sessions were interleaved with neutral and alternating sessions, the model therefore was not given enough time to adapt its memory-choice values to produce strong 1-back repetition biases during repeating sessions, resulting in a muted repetition bias, similar to the empirically observed pattern in mice.

Together, our results demonstrate that mice’s choice history biases and their slow adaptation to different temporal regularities can be explained by a normative reinforcement learning algorithm with multi-trial belief states, comprising both current trial sensory and previous trial memory states.

Mice naive to temporal regularities exhibit signatures of multi-trial learning

Mice exhibited a key signature of the multi-trial POMDP RL agent, displaying a decreased tendency to repeat the 1- relative to 2-back choice when acting on completely random stimulus sequences. We wondered whether this reliance on memory was driven by successful learning of different temporal regularities (Fig.  2a ), or whether it was a general phenomenon observed in animals that did not experience such temporal regularities. To investigate this question, we analyzed publicly available choice data of 99 naive mice, which had not experienced repeating or alternating regularities, and were trained to expert-level performance with random stimulus sequences in an experimental setup similar to ours 20 (Fig.  4a–c ). Similar to the mice of the current study, mice more strongly repeated their previous successful choice when the previous contrast was low rather than high (Fig.  4b ; t(98) = 10.17, p  < 2.2e-16, two-sided paired t-test), which is an important feature of confidence-weighted updating of choice values (see Fig.  3i ). Crucially, while mice were biased to repeat successful choices of the recent past, they also exhibited a reduced 1- compared to 2-back choice repetition bias, replicating this signature of multi-trial learning in mice naive to temporal regularities (Fig.  4c ; t(98) = −3.47, p  = 0.0008, two-sided paired t-test). Importantly, our analysis was restricted to the neutral sessions of the International Brain Laboratory’s (IBL) dataset, i.e., before blocks with different stimulus probabilities were introduced (see “Methods” for details). Conversely, in the IBL’s main task, in which stimuli were more frequently presented on one side in blocks of 20 to 100 trials, mice exhibited a monotonically decreasing choice history kernel with a larger 1- compared to 2-back weight (Supplementary Fig.  9 ). This is likely driven by the high stimulus repetition probability within each block lasting for dozens of trials, strongly encouraging mice to repeat the previous choice 35 .

figure 4

a Mice performed a visual decision-making task, similar to that of the current study 20 ( n  = 99; a – c ). We exclusively analyzed sessions in which mice had mastered the task with random stimulus sequences. Mice showed high sensitivity to the current stimulus contrast. Gray lines show individual mice, whereas black data points show the group average. See also Fig.  1c . b Difference in choice probabilities conditioned on the previous trial’s successful response, split according to whether the previous trial’s stimulus contrast was high (black) or low (gray). Mice are more likely to repeat the previous choice when it was based on a low rather than high contrast stimulus. In all empirical panels ( b , c , f , and i ) data points show the group average and error bars depict SEMs. c History kernel comprising the past seven successful choice weights of the probabilistic choice model (“Methods”). While naive mice generally tended to repeat their most recent choices, they exhibited a reduced 1- relative to 2-back choice weight (see inset, two-sided paired t-test, t(98) = −3.47, p  = 0.0008), which is a key signature of our multi-trial POMDP RL model. d Schematic of how a monotonically decaying choice repetition bias (blue) together with a short-lived sensory adaptation bias (red) could lead to the empirically observed choice history kernel (striped), explaining both the 1- to 2-back increase in choice weights and the modulation of choice weights by previous contrast. e Schematic of the altered visual decision-making task. Stimuli were presented at one of four spatial locations, unlike the main task with two stimulus locations. Mice had to report whether the stimulus was on the left or right side of the screen. Successive stimuli could thus be presented at the same (green arrow) or different spatial location (pink arrow), even when those stimuli required the same choice (here left). f Mice ( n  = 5) exhibit expert-level task performance. We expressed the probability of a rightward decision (y-axis) as a function of the signed stimulus contrast (x-axis). Positive contrasts denote stimuli on the right side. Mice showed a high sensitivity to visual stimuli, regardless of whether stimuli were presented low (yellow) or high (purple) in their visual field. g Predictions of the spatially-specific sensory adaptation hypothesis. Mice should be less likely to repeat the previous successful choice (y-axis) when successive stimuli are presented at the same spatial location (x-axis, green), rather than at a different spatial location in the same hemifield (x-axis, pink). This effect should be particularly pronounced when the previous stimulus had high contrast (left subpanel), serving as a potent adapter. h Predictions of the POMDP RL model with confidence-weighted value updates. Mice should be more likely to repeat a previously successful choice when it was based on a low rather than high contrast stimulus (left vs right subpanels), but this effect should not vary with changes in spatial location (x-axis, green vs pink). i The mice’s ( n  = 5) choice repetition probabilities are in line with the predictions of the POMDP RL model, and inconsistent with spatially-specific sensory adaptation. Thin grey lines depict individual mice. Repeated-measures ANOVA. ** p  < 0.01, *** p  < 0.001. Source data are provided as a Source Data file.

Overall, our analysis shows that even without exposure to biased temporal regularities, mice exhibit a key signature of multi-trial learning, suggesting that learning based on multi-trial belief states is a general strategy in visual decision-making of mice.

Reduction in previous choice weight is not due to sensory adaptation

The decreased tendency to repeat the 1- relative to 2-back choice and the increased probability to repeat decisions based on low sensory evidence are key signatures of the multi-trial POMDP RL agent. However, both phenomena could also be the signature of spatially-specific sensory adaptation. It is well known that the visual system adapts to visual input, typically leading to reduced neural responses for repeated or prolonged stimulus presentations 36 , 37 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , and inducing repulsive biases in behavior 45 , 46 , 47 , 48 , 49 . Neural adaptation to high contrast is believed to reduce perceptual sensitivity to subsequently presented low contrast stimuli 50 , 51 . If high contrast stimuli indeed reduce the perceptual sensitivity to subsequent stimuli, this may explain the reduced bias to report a grating presented at the same location as the previous grating (i.e., repeating the previous successful choice), and would entail a particularly strong reduction following gratings with the highest contrast. Together with a monotonically decaying choice repetition bias (Fig.  4d , blue), a short-lived sensory adaptation bias (Fig.  4d , red) may thus mimic both signatures supporting multi-trial reinforcement learning (Fig.  4d , striped). In order to investigate this possibility, we exploited a crucial necessary condition of the adaptation hypothesis, namely that sensory adaptation needs to be spatially-specific, reducing perceptual sensitivity for subsequent stimuli presented at the same location as the previous stimulus, but less strongly when the stimulus location changes. To test this, we performed a new experiment in which we presented small gratings in one of the four corners of the screen. Mice reported whether the stimulus was on the left or right side of the screen, regardless of its altitude. This experiment thereby allowed us to manipulate whether successive stimuli were presented at the same or different spatial location (lower and upper visual field), even when those stimuli required the same choice (left or right; Fig.  4e ). Crucially, the POMDP RL framework, which posits confidence-dependent updating of choice values, predicts an effect of previous contrast, with an increased probability to repeat successful choices based on low sensory evidence (see Fig.  3h ), but no influence of whether the previous and current stimuli were presented at the same or different spatial locations (Fig.  4h ). Conversely, spatially-specific sensory adaptation predicts that the probability to repeat the previous successful choice is reduced when previous and current stimuli are presented at the same location, and relatively increased when the location changes, due to a release from adaptation—an effect that should be particularly pronounced when the previous contrast was high (Fig.  4g ). Mice ( n  = 5) successfully reported the horizontal location of the current stimulus (left or right), both when stimuli were presented at a low or high vertical location (Fig.  4f ). Furthermore, we verified that mice did not make substantial eye movements towards the visual stimuli (Supplementary Fig.  10 ). Crucially, mice showed an increased tendency to repeat their previous successful choice when the previous contrast was low (F(1,4) = 30.06, p  = 0.005), but no effect of a change in stimulus altitude (F(1,4) = 0.24, p  = 0.65) and no interaction between changes in stimulus altitude and contrast (F(1,4) = 0.51, p  = 0.51, repeated-measures ANOVA).
A Bayes factor analysis revealed moderate evidence against the hypothesis that a change in spatial location from a previous high contrast stimulus leads to an increased repetition bias (BF 10  = 0.25), a central prediction of the sensory adaptation hypothesis. Our results are thus inconsistent with sensory adaptation, and point towards confidence-dependent updating of choice values underlying choice repetition. Furthermore, the decreased tendency to repeat the 1- relative to 2-back choice could not be explained by mice pursuing two distinct decision-making strategies on distinct sets of trials, either alternating the previous choice while acting largely independently of the longer-term history or repeating past choices monotonically weighted by their n-back position. Instead, the 1- to 2-back increase in choice weight was pervasive whenever mice were engaged with the decision-making task (Supplementary Fig.  11 ).

Striatal dopamine tracks behavioral choice history biases

Finally, we sought to elucidate the neural bases of adaptive choice history biases. Central to the hypothesis of reinforcement learning underlying choice history biases is that mice compute reward predictions and reward prediction errors while making perceptual decisions. The activity of midbrain dopamine neurons and the resulting dopamine release in the striatum are strongly implicated in this process 21 , 52 , 53 , 54 , 55 . A key target area implicated in learning of stimulus-choice associations is the dorsolateral striatum 56 (DLS). We measured dopamine release in the DLS, using ultra-fast dopamine sensors 57 (GRAB DA2m ) in combination with fiber photometry (Fig.  5a ), in order to compare striatal dopamine signals with our multi-trial POMDP RL model.

figure 5

a Schematic of fiber photometry in the dorsolateral striatum (DLS), imaging dopamine release using ultra-fast dopamine sensors (GRAB DA2m ). b Psychometric curves of mice ( n  = 6) during the dopamine recording experiment. Gray lines show individual mice, whereas black data points show the group average. Error bars in all panels depict SEMs. c Trial-by-trial dopamine responses from all sessions of an example animal, aligned to stimulus onset (white dashed line) and sorted by trial type (left column) and outcome time (black dots). d Group-average dopamine response ( n  = 6 mice, d , e , f , and i ), aligned to stimulus onset (gray dashed line), split by stimulus contrast (gray to black; correct trials only). Gray shaded area indicates the stimulus time period over which we averaged stimulus responses ( e and i ; excluding time points after reward delivery). e Average stimulus-evoked dopamine responses as a function of current absolute contrast (rewarded trials only; averaged over gray shaded area in ( d )). f History kernel of the probabilistic choice model fit to mouse data (solid line) and the predicted history kernel of the multi-trial POMDP RL model (dashed line). Mice exhibit a higher 2- compared to 1-back choice weight (inset, one-sided t-test, t(5) = −2.73, p  = 0.02). Shaded region depicts SEMs. g Expected reward value Q (black) of the multi-trial POMDP RL model as a function of current contrast (absolute value, i.e., independent of its L or R position), separately when the current stimulus is on the same (repeat, blue) or opposite side (alternate, orange) as the 2-back stimulus (current and previous rewarded trials only). Q reflects the expected value before the choice, computed by summing Q L and Q R weighted by the probability of making a left and right choice. For Q C , the expected value after the choice, see Supplementary Fig.  13 . h Difference in Q between repetitions and alternations of stimulus side (ΔQ) as a function of n-back trial (current and previous rewarded trials only). The single-trial (blue) and multi-trial models (pink) make opposite predictions about the difference between 1- and 2-back trials. While the single-trial model predicts a higher ΔDA for the 1-back compared to the 2-back trial, the multi-trial model predicts a higher ΔDA for the 2- compared to 1-back trial. i Difference in stimulus-evoked dopamine responses between repetitions and alternations of stimulus side (ΔDA) as a function of n-back trial (current and previous rewarded trials only). Mice exhibit a higher 2- compared to 1-back ΔDA (inset, two-sided t-test, t(5) = 3.51, p  = 0.017). Neither the 1- nor the 2-back ΔDA is significantly different from zero (n.s., two-sided t-tests). Source data are provided as a Source Data file.

We measured dopamine release in the DLS while mice ( n  = 6) performed our visual decision-making task (Fig.  1a ). Since we found signatures of adaptation to trial history even in the neutral (random) environment (see Fig.  4a–c ), we focused on measuring choice behavior and dopamine release using random stimulus sequences, maximizing the number of trials in this condition ( n  = 11,931 trials). Mice successfully mastered the decision-making task (Fig.  5b ) and exhibited a similar choice history kernel to previous experiments (Fig.  5f ; c.f. Figs.  2a and 4c ). Importantly, they expressed the characteristic increase in choice weights from the 1- to 2-back trial (Fig.  5f , inset; t(5) = -2.73, p  = 0.02, one-sided t-test), which is a key distinguishing feature between the multi- and single-trial models. Dopamine release in the DLS was strongly modulated both at the time of stimulus and outcome (Fig.  5c–e and Supplementary Fig.  12f ). Following the stimulus presentation, dopamine increased with stimulus contrast (F(1.58,7.92) = 16.995, p  = 0.002, repeated-measures ANOVA; Fig.  5d and e ), largely independent of the stimulus side relative to the recorded hemisphere (F(1,5) = 0.69, p  = 0.44, repeated-measures ANOVA; Supplementary Fig.  12b ). Conversely, following reward delivery dopamine negatively scaled with stimulus contrast (F(1.6,8) = 117.34, p = 1.8 × 10 −6 , repeated-measures ANOVA), yielding the highest dopamine release for rewarded zero contrast trials (Supplementary Fig.  12f and g ). These signals are consistent with dopamine encoding the expected reward value during stimulus processing, for which a high contrast stimulus predicts a highly certain reward (model Q ; Fig.  5g , black line), and dopamine encoding the reward prediction error during outcome (model δ ), for which the maximal surprise occurs when receiving a reward given a maximally uncertain stimulus. Further evidence supporting the hypothesis that dopamine responses during the stimulus period reflected the expected reward value Q comes from the observation that dopamine scaled with the uncertainty of the previous stimulus, consistent with the model predictions (Supplementary Fig.  12c and d ).

In order to examine effects of trial history on dopamine responses, we focused on dopamine release during the stimulus period, which, unlike reward-related responses, was not complicated by an overlap of stimulus and reward responses caused by GRAB DA sensor dynamics (Supplementary Fig.  12f ). An important feature of the POMDP RL model is that the expected reward value Q not only depends on the contrast of the current stimulus, but also on the history of past choices and outcomes. In particular, the expected reward is higher if the current stimulus is presented on the same rather than the opposite side as previously rewarded trials (Fig.  5g , blue vs orange), promoting the behavioral choice repetition bias. Crucially, however, since the multi-trial agent uses its memory of the previous rewarded choice to reduce the 1-back repetition bias in the neutral environment, the difference in expected reward between stimulus repetitions and alternations (ΔQ) is larger for the 2- compared to 1-back trial, mimicking the empirically observed choice history kernel (Fig.  5h , pink; cf. Fig.  5f ). This is in contrast to the single-trial agent, which predicts a larger ΔQ for the 1- compared to 2-back trial (Fig.  5h , blue). To test whether stimulus-evoked dopamine tracked the multi-trial agent’s dynamics of ΔQ across trials, we analogously computed the difference in stimulus-evoked dopamine between stimulus repetitions and alternations of the n-back rewarded side (ΔDA, Fig.  5i ). We found that dopamine responses indeed tracked the multi-trial model’s predictions of ΔQ: the dopamine response to the current stimulus was larger when the current stimulus was a repetition of the 2-back compared to the 1-back trial (ΔDA, 2- minus 1-back: t(5) = 3.51, p  = 0.017, two-sided t-test; p  = 0.018, two-sided permutation test based on shuffled trial history), and gradually decayed across further n-back trials (Fig.  5i ). Importantly, this dependence of dopamine on stimulus repetition or alternation of the 1- or 2-back trial was not evident during the pre-stimulus period of the current trial (ΔDA, 2- minus 1-back: t(5) = −0.30, p  = 0.78, two-sided t-test; Supplementary Fig.  12e ), and thus was not a carryover of residual dopamine from previous trials. Given the fast response times of mice and sensor dynamics, it was not possible to clearly separate dopamine signals before and after the current choice. However, the pattern of ΔQ (Fig.  5g and h ) holds regardless of whether it is calculated before or after the choice (Supplementary Fig.  13 ), thus making similar predictions for pre-outcome dopamine signals. We note that while the multi-trial agent exhibits a near-zero, but slightly positive 1-back ΔQ, we observed a numerically negative 1-back ΔDA, which was not statistically significantly different from zero (t(5) = −2.20, p  = 0.08, two-sided t-test). We speculate that such a negative 1-back ΔDA in the DLS could be driven by an unequal weighting of reward predictions, calculated based on the perceptual and memory components of the multi-trial belief state. Indeed, reward expectations based solely on memory exhibit a negative 1-back ΔDA, and we found that an overweighting of this memory-based expectation could approximate the empirically observed DA release (Supplementary Fig.  14 ). Although speculative, we therefore consider it possible that DLS DA release might report a reward expectation that is slightly skewed towards memory-based expectations.
It is possible that other striatal regions such as DMS, which receives more input from visual cortical areas 58 , 59 , might more strongly encode reward expectations based on perception. More experiments will be necessary to investigate this hypothesis.
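For concreteness, the following sketch illustrates one way such a ΔDA (or, for the model, ΔQ) curve could be computed from trial-wise data; the variable names and the restriction to current and n-back rewarded trials are our own simplifying choices and may differ from the exact analysis.

```python
import numpy as np

def delta_da(da, stim_side, rewarded, n_back_max=7):
    """dDA(n): stimulus-evoked dopamine on trials repeating vs. alternating the
    n-back stimulus (= rewarded) side, restricted to current and n-back rewarded
    trials (a simplifying choice in this sketch).

    da:        per-trial stimulus-evoked dopamine response
    stim_side: per-trial stimulus side (0 = left, 1 = right)
    rewarded:  per-trial boolean, True if the trial was rewarded
    """
    da, stim_side, rewarded = map(np.asarray, (da, stim_side, rewarded))
    out = []
    for n in range(1, n_back_max + 1):
        cur, prev = slice(n, None), slice(None, -n)
        valid = rewarded[cur] & rewarded[prev]
        rep = valid & (stim_side[cur] == stim_side[prev])   # n-back repetition
        alt = valid & (stim_side[cur] != stim_side[prev])   # n-back alternation
        out.append(da[cur][rep].mean() - da[cur][alt].mean())
    return np.array(out)
```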

Together, our results indicate that pre-outcome dopamine in the DLS closely tracks the expected reward value of the multi-trial POMDP RL agent. Given dopamine’s prominent role in striatal neural plasticity and learning, we speculate that it may thus play a role in mediating adaptive choice history biases in perceptual decisions.

Our world presents a multitude of temporal regularities, allowing observers to predict the future from the past. Here, we show that mice can exploit such regularities to improve their perceptual decisions by flexibly adapting their reliance on past choices to the temporal structure of the stimulus sequence. We find that this adaptation of perceptual choice history biases is well captured by a normative reinforcement learning algorithm with multi-trial belief states, comprising both current trial sensory and previous trial memory states. Moreover, we show that learning guided by multi-trial belief states occurs even in mice that never experienced the manipulation of temporal regularities, suggesting that multi-trial learning may be a default strategy when making perceptually uncertain decisions. Lastly, we demonstrate that dopamine release in the DLS closely tracks behavioral biases and reward predictions in the multi-trial reinforcement learning model, pointing towards a plausible teaching signal linked to the learning and exploitation of temporal regularities in perceptual decisions.

It has been previously proposed that perceptual choice history biases can be explained by reinforcement learning mechanisms that are continually engaged to adjust perceptual decisions, even in highly trained decision-makers 32 . In the POMDP RL framework, observers continually evaluate and update the values of different choice options given their sensory confidence, choice, and feedback 33 , 34 , 52 . This framework has been fruitful in explaining the emergence of choice history biases, their dependence on previous sensory confidence, and the adaptation of choices to changes in reward value 21 . However, it does not explain how mice adapt their choice history biases to different temporal regularities, and in particular how mice learn to alternate from a previously rewarded choice when stimulus sequences favor alternations, as demonstrated in the current study. To account for these results, we developed a simple extension to the previous model: Mice assess the value of current choice options not only based on current sensory stimuli (perception-choice values) but also based on a memory of the previous trial’s rewarded choice (memory-choice values). While the trial-by-trial updating of perception-choice values leads to choice repetition, the concurrent learning of memory-choice values can attenuate or increase the tendency to repeat, allowing for a more flexible weighing of the previous trial. This minimal extension to the previous model explains several surprising patterns in our data. First, mice only adapt the influence of the previous choice across different temporal regularities, while similarly repeating choices of temporally more distant trials. Second, in contrast to the fast timescale of choice history biases, swiftly decaying over the past seven trials, the adaptation of the 1-back choice weight to temporal regularities is slow, developing over hundreds of trials. Third, when acting on random stimulus sequences mice more strongly repeat the 2-back compared to 1-back choice. Strikingly, all three empirical observations are captured by a model with a fixed set of parameters governing trial-by-trial learning in environments with different temporal regularities. This suggests that the empirical patterns arise from an interaction of a fixed set of learning rules with the temporal structure of the stimulus sequences.

Past perceptual decision-making studies in humans have shown similar reduced or muted 1- relative to 2-back choice weights for random stimulus sequences 10 , 60 . Moreover, similar reduced 1- relative to 2-back choice weights have been observed outside the perceptual domain in the context of a competitive matching pennies game in monkeys 61 . Similar to perceptual decision-making about random stimulus sequences, the optimal strategy in the matching pennies game is to make history-independent decisions. Instead, monkeys tend to repeat past decisions of their opponent - a pattern that can be exploited to their disadvantage and which the authors explained with a reinforcement learning model. Intriguingly, however, monkeys appear to be able to downregulate the repetition tendency of the 1-back choice specifically, thereby becoming less exploitable—a phenomenon that can be readily accounted for by the multi-trial reinforcement learning model. Together, these findings suggest that similar multi-trial learning strategies might hold across decision-making contexts and species.

At the neural level, we found that dopamine release in the DLS closely tracks behavioral biases and reward predictions of the multi-trial reinforcement learning model. The activity of midbrain dopamine neurons is thought to play a pivotal role in learning from past rewards, encoding predicted value prior to outcome, and reward prediction error after outcome 24 , 62 . Similarly, during perceptual decisions, dopamine signals encode predicted values and reward prediction errors graded by both reward value and sensory confidence and are causally involved in learning from past perceptual decisions 21 , 52 , 53 , 54 , 55 , 63 , 64 . In consonance, we found that dopamine release in the DLS is positively scaled with the current stimulus contrast during the stimulus period, in line with signaling predicted value, but negatively scaled during reward processing, in line with encoding a reward prediction error. Trial-by-trial changes in these dopamine signals closely tracked behavioral biases and reward predictions of the multi-trial reinforcement learning model. Our finding that dopamine release not only reports a perceptual prediction, but also memory-based predictions is in line with past research indicating that midbrain dopamine neurons are sensitive to contextual information signaled by trial history 65 . Importantly, dopaminergic pathways in the dorsal striatum have been proposed to be involved in choice selection 66 , 67 , 68 , and transient stimulation of dorsal striatal D1 neurons mimicked an additive change in choice values during decision-making 69 . Therefore, the history-dependent dopamine release in the DLS might be directly involved in promoting the adaptive behavioral choice history biases observed in the current study. Future studies that causally manipulate striatal dopamine release will be necessary to test this hypothesis.

We demonstrate that crucial signatures of choice history biases observed in the current study generalize across datasets, such as those by the International Brain Laboratory 19 , 20 (IBL), which uses a similar experimental setup. However, our manipulation of temporal regularities diverges from block switches used in the additional experimental manipulations of the IBL in important ways. While the full task of the IBL involves blockwise (i.e., average of 50 trials) manipulations of stimulus priors, the current study manipulates the local transition probability between successive trials, while keeping longer-term stimulus statistics balanced, therefore presenting a more subtle manipulation of input statistics. In particular, the use of alternating stimulus sequences enabled us to test whether mice learn to alternate from a previously rewarded choice, demonstrating that mice exhibit a flexible dependence on the previous trial given the prevailing temporal regularity.

The current study, while providing important insights into behavioral, computational, and neural bases of choice history biases, is not without limitations. First, while the multi-trial reinforcement learning model provides a parsimonious account of how mice rely on past rewarded choices, it does not adequately capture choice biases following unrewarded (error) trials. In particular, mice exhibited a bias to repeat unrewarded choices with similar strength across 1- to 7-back trials, indicating a slowly fluctuating tendency to repeat the same unrewarded choice (Supplementary Fig.  1d ). This tendency is not recapitulated by our model (Supplementary Fig.  15 ), and differs from human behavior, which is characterized by choice alternation after errors 60 . It likely reflects both session-by-session changes in history-independent response biases and periods of task disengagement in which mice ignore stimuli and instead repeatedly perform the same choice 31 . Indeed, we found that mice repeated the previous incorrect choice when they were disengaged, but tended to alternate after errors when engaged with the task (Supplementary Fig.  3 ). Thus, when focusing our analyses on periods of high task engagement, mice treated past incorrect trials more similarly to humans and more consistently with a reinforcement learning agent, which predicts choice alternation following an unrewarded trial. However, the low proportion of error trials and their heterogeneity complicate a straightforward assessment of post-error responses. Nevertheless, post-error responses will be an important subject of investigation in future experimental and theoretical work.

Second, our conclusions are likely limited to adaptive history biases in settings involving trial-by-trial feedback. The presence of feedback, common in animal research, enables observers to learn most from maximally uncertain events, which is crucial for explaining how low decision confidence leads to strong choice repetition biases observed in this and previous datasets 9 , 20 , 21 , 32 . However, choice history biases occur in a wide range of experimental paradigms, many of which do not provide trial-by-trial feedback 1 , 70 . In the absence of feedback, human observers are more likely to repeat a previous choice when it was associated with high rather than low decision confidence 10 , 16 , 70 , 71 , 72 , opposite to the current and past findings, and consistent with Bayesian models of choice history biases 73 , 74 , 75 , 76 . Thus, there are multiple ways through which observers can leverage the past to facilitate future behavior, and the resulting perceptual choice history biases are likely subserved by a variety of different computations, such as learning 32 and inference 77 . As such, while our model offers an explanation for perceptual choice history biases and their dopaminergic signatures, it does not necessarily exclude other theoretical frameworks.

Third, since we found signatures of adaptation to trial history even in the neutral (random) environment (see Fig.  4a–c ), we focused on measuring choice behavior and dopamine release using random stimulus sequences, maximizing the number of trials in this condition. Importantly, we discovered that the choice history kernel in the neutral environment exhibited a key diagnostic feature for distinguishing between the single- and multi-trial reinforcement learning models, namely the increase in 1- to 2-back choice history weight, which was recapitulated by the dopamine data. Nevertheless, it would be interesting to also record dopamine release during repeating and alternating sessions, and to investigate whether dopamine tracks the slow adaptation to the statistics of the environment across hundreds of trials.

Our results demonstrate that mice can flexibly adapt their choice history biases to different regularities of stimulus sequences in a visual decision-making paradigm. We show that a simple model-free POMDP RL algorithm based on multi-trial belief states accounts for the observed adaptive history biases and that striatal dopamine release closely follows the reward predictions of this algorithm. Our results suggest that choice history biases arise from continual learning that enables animals to exploit the temporal structure of the world to facilitate successful behavior.

The data for all experiments were collected from a total of 17 male C57BL/6J mice from Charles River UK, aged 10–30 weeks. The data of the behavioral experiment manipulating temporal regularities were collected from 10 mice. Of these mice, 3 animals also completed the experiment investigating sensory adaptation. Furthermore, we conducted dopamine recordings during perceptual decision-making in 6 mice. One of these mice also completed the sensory adaptation experiment. One mouse participated only in the sensory adaptation experiment. Mice were kept on a 12 h dark/light cycle, with an ambient temperature of 20–24 °C and 40% humidity. All experiments were conducted according to the UK Animals (Scientific Procedures) Act 1986 under appropriate project and personal licenses.

Mice were implanted with a custom metal head plate to enable head fixation. To this end, animals were anesthetized with isoflurane and kept on a heating pad. Hair overlying the skull was shaved and the skin and the muscles over the central part of the skull were removed. The skull was thoroughly washed with saline, followed by cleaning with a sterile cortex buffer. The head plate was attached to the bone posterior to bregma using dental cement (Super-Bond C&B; Sun Medical).

For dopamine recording experiments, after attaching the headplate, we made a craniotomy over the left or right DLS. We injected 460 nL of diluted viral construct (pAAV-hsyn-GRAB DA2m ) into the left or right DLS (AP: +0.5 mm from bregma; ML: ±2.5 mm from midline; DV: 2.8 mm from dura). We implanted an optical fiber (200 µm, Neurophotometrics Ltd) over the DLS, with the tip 0.3 mm above the injection site. The fiber was secured to the head plate and skull using dental cement.

Materials and apparatus

Mice were trained on a standardized behavioral rig, consisting of an LCD screen (9.7” diagonal), a custom 3D-printed mouse holder, and a head bar fixation clamp to hold a mouse such that its forepaws rested on a steering wheel 19 , 20 . Silicone tubing controlled by a pinch valve was used to deliver water rewards to the mouse. The general structure of the rig was constructed from Thorlabs parts and was placed inside an acoustical cabinet. The experiments were controlled by freely available custom-made software 78 , written in MATLAB (Mathworks). Data analyses were performed with custom-made software written in Matlab 2020b, R (version 3.6.3), and Python 3.7. The GLM-HMM analysis was performed with the openly available glmhmm package ( https://github.com/irisstone/glmhmm ).

Visual decision-making task

Behavioral training in the visual decision-making task started at least 5 days after the surgery. Animals were handled and acclimatized to head fixation for at least 3 days, and then trained in a 2-alternative forced choice visual detection task 19 . After mice kept the wheel still for at least 0.7 to 0.8 s, a sinusoidal grating stimulus of varying contrast appeared on either the left or right side of the screen (±35° azimuth, 0° altitude). Grating stimuli had a fixed vertical orientation, were windowed by a Gaussian envelope (3.5° s.d.), and had a spatial frequency of 0.19 cycles/° with a random spatial phase. Concomitant to the appearance of the visual stimulus, a brief tone was played to indicate that the trial had started (0.1 s, 5 kHz). Mice were able to move the grating stimulus on the monitor by turning a wheel located beneath their forepaws. If mice correctly moved the stimulus 35° to the center of the screen, they immediately received a water reward (2–3 μL). Conversely, if mice incorrectly moved the stimulus 35° towards the periphery or failed to reach either threshold within 60 s, a noise burst was played for 0.5 s and they received a timeout of 2 s. The inter-trial interval was randomly sampled from a uniform distribution between 0.5 and 1 s (1 and 3 s in the dopamine recording experiment). In the initial days of training, only 100% contrast stimuli were presented. Stimuli with lower contrasts were gradually introduced after mice exhibited sufficiently accurate performance on 100% contrast trials (>70% correct). During this training period, incorrect responses on easy trials (contrast \(\ge\) 50%) were followed by “repeat” trials, in which the previous stimulus location was repeated. The full task included six contrast levels (100, 50, 25, 12.5, 6.25 and 0% contrast). Once mice reached stable behavior on the full task, repeat trials were switched off, and mice proceeded to the main experiment.

In the main experiment (Figs. 1 – 3 ), we investigated whether mice adapt their choice history biases to temporal regularities. To this end, we manipulated the transitional probabilities between successive stimulus locations (left or right) across experimental sessions. Specifically, the probability of a repetition was defined as follows:

where n indexes trials. The repetition probability was held constant within each session but varied across experimental sessions, which were run on different days. In the Neutral environment, the repetition probability was set to 0.5, yielding entirely random stimulus sequences. In the Repeating and Alternating environments, the repetition probability was set to 0.8 and 0.2, respectively. For eight out of ten mice, the order of environments was pseudo-randomized such that three consecutive Repeating or Alternating sessions were interleaved with two consecutive Neutral sessions. For the remaining two mice, the environments were presented in random order.
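As an illustration, stimulus sequences with a given repetition probability could be generated as in the following sketch (function name and details are ours, not part of the original task code):

```python
import numpy as np

def generate_stimulus_sides(n_trials, p_repeat, rng=None):
    """Sequence of stimulus sides (0 = left, 1 = right) in which each stimulus
    repeats the previous side with probability p_repeat
    (0.5 = Neutral, 0.8 = Repeating, 0.2 = Alternating)."""
    rng = rng or np.random.default_rng()
    sides = [int(rng.integers(2))]
    for _ in range(n_trials - 1):
        if rng.random() < p_repeat:
            sides.append(sides[-1])        # repeat the previous side
        else:
            sides.append(1 - sides[-1])    # switch sides
    return np.array(sides)

# e.g. one Alternating-environment session
sides = generate_stimulus_sides(500, p_repeat=0.2)
```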

Experimental sessions in which mice showed a high level of disengagement from the task were excluded from further analysis, based on the following criteria. We fit a psychometric curve to each session’s data, using a maximum likelihood procedure:

where P(“Right”) describes the mouse’s probability of giving a rightward response, F is the logistic function, c is the stimulus contrast, γ and λ denote the right and left lapse rates, α is the bias and β is the contrast threshold. We excluded sessions in which the absolute bias was larger than 0.16, or in which either the left or right lapse rate exceeded 0.2. We further excluded sessions in which the choice accuracy on easy 100% contrast trials was lower than 80%. This led to the exclusion of 56 out of 345 sessions (16%). Finally, we excluded trials in which the response time was longer than 12 s, thereby excluding 1507 out of 128,490 trials (1.2%).
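For illustration, a lapse-limited logistic psychometric function of the kind described above might look as follows; the exact parameterization used for fitting (in particular how γ and λ map onto the two asymptotes, and the roles of α and β inside F) is an assumption of this sketch.

```python
import numpy as np

def psychometric(contrast, alpha, beta, gamma, lam):
    """P("Right") as a lapse-limited logistic function of signed contrast.

    alpha: bias, beta: contrast threshold, gamma/lam: lapse rates.
    The assignment of gamma and lam to the lower/upper asymptote and the
    exact form of F are assumptions of this sketch.
    """
    F = 1.0 / (1.0 + np.exp(-(np.asarray(contrast) - alpha) / beta))
    return gamma + (1.0 - gamma - lam) * F
```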

Probabilistic choice model

In order to quantify the mice’s choice history biases across the three environments with different temporal regularities, we fitted a probabilistic choice model to the responses of each mouse. In particular, we modeled the probability of the mouse making a rightward choice as a weighted sum of the current trial’s sensory evidence, the successful and unsuccessful response directions of the past seven trials, and a general bias term, passed through a logistic link function:

where z is the decision variable, which is computed for each trial i in the following way:

\({{w}}_{{c}}\) is the coefficient associated with contrast \({{{{\rm{I}}}}}_{{{{\rm{c}}}}}\) and \({{{{\rm{I}}}}}_{{{{\rm{c}}}}}\) is an indicator function, which is 1 if contrast c was presented on trial i and 0 otherwise. Coefficients \({{w}}_{{n}}^{{+}}\) and \({{w}}_{{n}}^{{-}}\) weigh the influence of the correct (+) and incorrect (−) choices of the past seven trials, denoted by \({{r}}^{{+}}\) and \({{r}}^{{-}}\) , respectively. Here, \({{r}}^{{+}}\) was −1 if the correct n-back choice was left, +1 if it was right, and zero if the n-back choice was incorrect. Likewise, \({{r}}^{{-}}\) was −1 if the incorrect n-back choice was left, +1 if it was right, and zero if the n-back choice was correct. \({{w}}_{{0}}\) is a constant representing the overall bias of the mouse. We chose to model a temporal horizon of the past seven trials, since the autocorrelation in the stimulus sequences introduced by the transition probabilities decayed over this timeframe and was negligible beyond seven trials back (see Supplementary Fig.  1e ). It is important to model choice history kernels that cover the timeframe of autocorrelations in the stimulus sequences, in order to prevent long-term history biases and long-term autocorrelations from confounding the estimate of short-term choice history weights across the different environments. While it is possible that mice exhibit even more slowly fluctuating history biases, beyond seven trials back, such slow biases would not differentially bias choice weights across the different environments.
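A minimal sketch of how the decision variable z and the signed history regressors could be constructed is shown below; the variable names and the coding of choices as −1/+1 follow the description above, but the code is illustrative and does not reproduce the lasso fitting, which was done with glmnet in R.

```python
import numpy as np

def history_regressors(choice, rewarded, n_back=7):
    """Signed history regressors r+ and r- of the probabilistic choice model.

    choice:   per-trial choices, coded -1 = left, +1 = right
    rewarded: per-trial boolean, True if the trial was rewarded
    Returns (r_plus, r_minus) of shape (n_trials, n_back); column n-1 holds the
    n-back correct (r+) or incorrect (r-) choice, and 0 where it does not apply.
    """
    n_trials = len(choice)
    r_plus = np.zeros((n_trials, n_back))
    r_minus = np.zeros((n_trials, n_back))
    for i in range(n_trials):
        for n in range(1, min(n_back, i) + 1):
            if rewarded[i - n]:
                r_plus[i, n - 1] = choice[i - n]
            else:
                r_minus[i, n - 1] = choice[i - n]
    return r_plus, r_minus

def decision_variable(w0, w_c, I_c, w_plus, r_plus, w_minus, r_minus):
    """z_i = w0 + sum_c w_c I_c + sum_n w_n^+ r_n^+ + sum_n w_n^- r_n^-."""
    return w0 + I_c @ w_c + r_plus @ w_plus + r_minus @ w_minus

def p_right(z):
    """Logistic link mapping the decision variable onto P("Right")."""
    return 1.0 / (1.0 + np.exp(-z))
```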

We fitted the probabilistic choice model separately to the response data of each mouse in each environment, using a lasso regression implemented in glmnet 79 . The regularization parameter λ was determined using a 10-fold cross-validation procedure.

We further investigated how correct choice weights \({w}_{n}^{+}\) developed across consecutive days when transitioning from the neutral into a regular environment ( \({{\rm{neutral}}}\to {{\rm{repeating}}}\) or \({{\rm{neutral}}}\to {{\rm{alternating}}}\) ). To this end, we subdivided the data into regular (repeating/alternating) sessions, which were preceded by a neutral session (day 1), or preceded by a neutral followed by one or two regular sessions of the same kind (days 2 and 3). We fitted the probabilistic choice model to the choices of days 1, 2, and 3 using the same procedure described above.

Finally, we tested whether correct choice weights \({w}_{n}^{+}\) in the neutral environment depended on the temporal regularity that mice experienced in the preceding session. Hence, we fitted the probabilistic choice model to neutral session data, separately for sessions preceded by a repeating or alternating environment.

Parameter recovery analysis

In order to investigate whether the probabilistic choice model was able to recover choice history kernels in the face of autocorrelated stimulus sequences, we conducted a parameter recovery analysis. First, we obtained a set of “ground truth” choice history kernels by fitting the probabilistic choice model to the observed choices of each mouse in each environment, as previously described. We then used these postulated ground truth parameters to simulate synthetic choice data in response to stimulus sequences of all three environments. The simulated choice data was not identical to the empirical choice data due to the probabilistic nature of the model. However, it was generated according to the same stimulus and choice history weights that we reported previously. We simulated 100 synthetic datasets for each mouse. We then asked whether we could recover the ground truth choice history kernels when subjecting the simulated choice data to our analysis pipeline. We found that ground truth history kernels were accurately recovered, regardless of which stimulus sequence was used to simulate choices (Supplementary Fig.  1g–i ). That is, an artificial observer with a neutral choice history kernel was estimated to have a neutral choice history kernel regardless of the stimulus sequence to which it responded (Supplementary Fig.  1g ). Furthermore, this neutral history kernel was distinctly different from the repeating and alternating history kernels (Supplementary Fig.  1h and i ). This indicates that our model fitting procedure is able to accurately recover choice history kernels of the shape that we report in the main results.
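The following sketch outlines the simulate-and-refit logic of such a parameter recovery analysis; it substitutes scikit-learn's L1-penalized logistic regression for the glmnet-based lasso used in the actual analysis and is therefore only an approximation of the pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def recover_kernels(X, w_true, n_sim=100, rng=None):
    """Parameter-recovery sketch: simulate choices from ground-truth weights on a
    fixed design matrix X (stimulus + history regressors), refit, and return the
    recovered weights. scikit-learn's L1-penalized logistic regression stands in
    for the glmnet lasso used in the actual analysis."""
    rng = rng or np.random.default_rng()
    recovered = []
    for _ in range(n_sim):
        p = 1.0 / (1.0 + np.exp(-(X @ w_true)))         # ground-truth P("Right")
        y = (rng.random(len(p)) < p).astype(int)        # simulated choices
        fit = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
        fit.fit(X, y)
        recovered.append(fit.coef_.ravel())
    return np.array(recovered)                           # n_sim x n_regressors
```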

Hidden Markov Model analysis of engaged and disengaged trials

We identified engaged and disengaged trials using a modeling framework based on hidden Markov Models 31 (HMM). In particular, we fit an HMM with two states, each with its own state-specific Bernoulli Generalized Linear Model (GLM), to our neutral environment task data. The GLMs consisted of a stimulus regressor and a stimulus-independent bias term. We expected to obtain one state with a high stimulus weight, reflecting high engagement with the task, and one state with a low stimulus weight, reflecting disengagement from the task. This was indeed borne out in the data (Supplementary Fig.  3a and b ). We repeated the probabilistic choice model analysis described above separately for engaged and disengaged current and 1- to 7-back trials.

Reanalysis of data by the International Brain Laboratory

When analyzing correct choice history weights in the neutral environment, we observed that mice were more strongly biased to repeat their response given in the 2-back trial compared to the more recent 1-back response. One might surmise that this increase in choice repetition from 1- to 2-back trials could be driven by the exposure of the mice to multiple transition probabilities in the current study. In order to test whether this phenomenon was indeed particular to the current experimental design, involving stimulus sequences with biased transition probabilities, we analyzed a large, publicly available dataset of mice performing a similar visual decision-making task, which had not experienced biased transition probabilities 20 . We selected sessions in which mice had mastered the task, but before they were exposed to sessions involving blocked manipulations of stimulus locations (full task of the IBL study). Mice had therefore only experienced random stimulus sequences. Using the same exclusion criteria described above, we analyzed data of 99 mice in 583 sessions, comprising 471,173 choices. To estimate choice history weights, we fitted the same probabilistic choice model as described above to the data of each mouse. The data analyzed in the current study is available here:

https://figshare.com/articles/dataset/A_standardized_and_reproducible_method_to_measure_decision-making_in_mice_Data/11636748

Reinforcement learning models

In order to investigate the computational principles underlying history bias adaptation, we adopted and extended a previously proposed Reinforcement Learning (RL) model based on a partially observable Markov decision process (POMDP 21 , 33 ). We will first describe the previously proposed model, which we term the single-trial POMDP RL model, as this model’s belief state was solely based on the current trial’s visual stimuli. We will then describe an extension to this model, which we term the multi-trial POMDP RL model. In addition to the current visual stimuli, the belief state of the multi-trial POMDP RL model incorporates a memory of the previous rewarded choice.

Single-trial POMDP RL model

In our visual decision-making task, the state of the current trial (left or right) is uncertain and therefore only partially observable due to the presence of low contrast stimuli and sensory noise. The model assumes that the agent forms an internal estimate \(\hat{s}\) of the true signed stimulus contrast \(s\) , which is normally distributed with constant variance around the true stimulus contrast: \({{\rm{p}}}\left(\hat{{{\rm{s}}}} | {{\rm{s}}}\right){{=}}{{\mathscr{N}}}(\hat{{{\rm{s}}}}{{\rm{;s}}},\,{{{\rm{\sigma }}}}^{2})\) . Following Bayesian principles, the agent’s belief about the current state is not limited to the point estimate \(\hat{s}\) , but consists of a belief distribution over all possible values of s given \(\hat{s}\) . The belief distribution is given by Bayes rule:

We assume that the prior belief about \(s\) is uniform, yielding a Gaussian belief distribution \({{\rm{p}}}\left({{\rm{s}}} | \hat{{{\rm{s}}}}\right)\) with the same variance as the sensory noise distribution and mean \(\hat{s}\) : \({{\rm{p}}}\left({{\rm{s}}} | \hat{{{\rm{s}}}}\right){{\mathscr{=}}}{{\mathscr{N}}}({{\rm{s;}}}\hat{{{\rm{s}}}},\,{{{\rm{\sigma }}}}^{2})\) . The agent’s belief that the stimulus was presented on the right side of the monitor, \({P}_{R}={{\rm{p}}}\left({{\rm{s}}} > 0 | \hat{{{\rm{s}}}}\right)\) , is given by:

The agent’s belief that the stimulus was presented on the left side is given by \({P}_{L}=1-{P}_{R}\) .

The agent combines this belief state \(\left[{P}_{L},{P}_{R}\right]\) based on the current stimulus with stored values for making left and right choices in left and right perceptual states, given by q choice,state in order to compute expected values for left and right choices:

where Q L and Q R denote the expected values for left and right choices, respectively.

The agent then uses these expected values together with a softmax decision to compute a probability of making a rightward choice, p(“Rightward choice”):

where T denotes the softmax temperature, introducing decision noise. The agent then selects one of the two choice options using a biased coin flip based on p(“Rightward choice”).

Following the choice, the agent observes the associated outcome r , which is 1 if the agent chose correctly and zero otherwise. It then computes a prediction error \(\delta\) by comparing the outcome to the expected value of the chosen option Q C :

Given this prediction error, the agent updates the values associated with the chosen option \({q}_{C,{P}_{L}}\) and \({q}_{C,{P}_{R}}\) by weighing the prediction error with a learning rate \(\alpha\) and the belief of having occupied the particular state:

We allowed the agent to have two distinct learning rates that are used when prediction errors \(\delta\) are positive ( \({\alpha }^{+}\) ) or negative ( \({\alpha }^{-}\) ).

The single-trial POMDP RL model thus had four free parameters, consisting of sensory noise \({\sigma }^{2}\) , decision noise T , as well as positive and negative learning rates ( \({\alpha }^{+}\) and \({\alpha }^{-}\) ).

Multi-trial POMDP RL model

We extended the single-trial POMDP RL model by augmenting the state representation with a memory of the rewarded choice of the previous trial. Therefore, in addition to P L and P R , which describe the agent’s belief about the current stimulus being left or right, the agent’s state further comprised memories M L and M R , which were computed as follows:

The parameter \(\lambda\) reflects the memory strength of the agent, i.e., how strongly the agent relies on the memory of the previous rewarded choice. \(\lambda\) was bounded by zero (no knowledge of the previous rewarded choice) and 1 (perfect knowledge of the previous rewarded choice). Therefore, besides learning values of pairings between perceptual states and choices ( \({q}_{{choice},{P}_{\, \cdot \, }}\) ), the multi-trial agent additionally learned values of pairings between memory states and choices ( \({q}_{{choice},{M}_{\, \cdot \, }}\) ). The expected values of left and right choice options were thus computed as:

The expected value of the current choice was thus both influenced by immediately accessible perceptual information of the current trial, as well as memory information carried over from the previous trial.
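Since the defining equation for M L and M R is not reproduced above, the following sketch shows one plausible coding that is consistent with the description (the previously rewarded side carries weight \(\lambda\), the other side zero, so that \(\lambda\)  = 0 recovers the single-trial model); this specific form is our assumption.

```python
def memory_state(prev_rewarded_choice, lam):
    """Memory belief state [M_L, M_R] about the previous trial's rewarded choice.

    prev_rewarded_choice: 0 = left, 1 = right, or None if the previous trial was
                          unrewarded.
    lam: memory strength in [0, 1]; lam = 0 removes the memory contribution and
         recovers the single-trial model.
    Assumed coding: the previously rewarded side carries weight lam, the other zero.
    """
    M = [0.0, 0.0]
    if prev_rewarded_choice is not None:
        M[prev_rewarded_choice] = lam
    return M
```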

The multi-trial POMDP RL model had five free parameters: sensory noise \(\sigma^{2}\), decision noise \(T\), positive and negative learning rates (\(\alpha^{+}\) and \(\alpha^{-}\)), and memory strength \(\lambda\). The same learning rates were used to update perception-choice and memory-choice values.

Multi-trial POMDP RL model with extended memory

The memory of the multi-trial POMDP RL model was limited to the previous rewarded choice. We further extended this model such that the agent’s memory was based on multiple past trials. In particular, memory states \(M_L\) and \(M_R\) were computed as an exponentially weighted sum of past rewarded left and right choices.
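A form consistent with this description, written with the indicator functions and weights defined below, is

\[ M_L = M_0 \sum_{i < t} w_i\, I_L(i), \qquad M_R = M_0 \sum_{i < t} w_i\, I_R(i), \]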

where \(I_L\) and \(I_R\) denote indicator functions, evaluating to 1 when the \(i\)-th choice was a rewarded left or right choice, respectively, and 0 otherwise. \(M_0\) was the initial memory strength for the 1-back trial, and the weights \(w_i\) implemented an exponential decay function.
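Given the interpretation of \(\tau\) in the following paragraph, a decay function consistent with this description is

\[ w_i = \exp\!\left( -\frac{(t - i) - 1}{\tau} \right), \]

so that the 1-back trial (\(t - i = 1\)) contributes with weight 1 and earlier trials contribute progressively less.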

The exponential decay function was defined over elapsed trials, where \(t\) denotes the index of the current trial and \(i\) the index of a previous trial. Thus, \(\tau\) denotes the number of elapsed trials beyond the 1-back trial after which the contribution of a past choice to memory has decreased to \(1/e \approx 0.37\) of its initial value. Similar to the multi-trial POMDP model, the agent learned values of pairings between these exponentially weighted memories of past rewarded choices and the current choice (\(q_{choice,M_{\cdot}}\)).

The multi-trial POMDP RL model with extended memory had six free parameters: sensory noise, decision noise, positive and negative learning rates, initial memory strength \(M_0\), and the exponential decay time constant \(\tau\).

Fitting procedure and model comparison

We fit the single- and multi-trial POMDP RL models to the joint data of the neutral, repeating, and alternating environments, pooled across mice. In particular, we used the empirical coefficients of the probabilistic choice model (see above), fit to the pooled data, to define a cost function based on summary statistics of the mice’s behavior 80. The cost function was defined as the sum of squared differences between empirical and model coefficients, comprising the current-stimulus weights of each environment, the 1- to 7-back correct-choice weights of each environment, and the 1- to 7-back correct-choice weights of neutral sessions following repeating and alternating sessions.
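Writing \(\beta_k^{\mathrm{emp}}\) and \(\beta_k^{\mathrm{model}}\) for these empirical and model-derived coefficients (our shorthand), the cost function takes the form

\[ C = \sum_{k} \left( \beta_k^{\mathrm{emp}} - \beta_k^{\mathrm{model}} \right)^{2}. \]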

We minimized the cost function using Bayesian adaptive direct search (BADS 81). BADS alternates between a series of fast, local Bayesian optimization steps and a systematic, slower exploration of a mesh grid. When fitting the single-trial POMDP RL model, we constrained the parameter space as follows: the sensory noise parameter \(\sigma^{2}\) was constrained to the interval [0.05, 0.5], decision noise \(T\) to [0.01, 1], and positive and negative learning rates to [0.01, 1]. When fitting the multi-trial POMDP RL model, we additionally constrained the memory strength parameter \(\lambda\) to lie between 0 (no memory) and 1 (perfect memory). The single-trial model was thus a special case of the multi-trial model without memory (\(\lambda\) = 0); that is, the single-trial model was nested in the multi-trial model. We repeated the optimization process from 9 different starting points to confirm that the solutions converged. Starting points were arranged on a grid with low and high learning rates \(\alpha^{+}\) = \(\alpha^{-}\) = [0.2, 0.8] and weak and strong memory \(\lambda\) = [0.05, 0.15]. Sensory noise \(\sigma^{2}\) and decision noise \(T\) were initialized at 0.1 and 0.3, respectively. In addition to the 8 starting points spanned by this grid, we added a 9th starting point determined by manual optimization (\(\sigma^{2}\) = 0.14, \(\alpha^{+}\) = 0.95, \(\alpha^{-}\) = 0.95, \(T\) = 0.34, \(\lambda\) = 0.05). The best-fitting parameters for the multi-trial model were \(\sigma^{2}\) = 0.09, \(\alpha^{+}\) = 0.96, \(\alpha^{-}\) = 0.97, \(T\) = 0.36, \(\lambda\) = 0.07, and multiple starting points converged to similar solutions. The best-fitting parameters for the single-trial model were \(\sigma^{2}\) = 0.1, \(\alpha^{+}\) = 0.41, \(\alpha^{-}\) = 1, \(T\) = 0.4. We formally compared the best-fitting single- and multi-trial models using an F-test for nested models, as well as the Bayesian information criterion (BIC).

Visual decision-making task investigating spatially-specific adaptation biases

We trained mice (n = 5) in an alternative version of the visual decision-making task, in order to test whether the decreased tendency to repeat the 1-back relative to the 2-back choice, and the increased probability of repeating decisions based on low sensory evidence, could be due to spatially-specific sensory adaptation to stimulus contrast. The experimental design was similar to the standard task, with the important exception that grating stimuli were smaller (2° s.d. Gaussian envelope) and the vertical location of the stimuli was randomly varied between ±15° altitude (3 mice) or ±10° altitude (2 mice) across trials. We reasoned that the vertical distance of 20–30 visual degrees would be sufficient to stimulate partly non-overlapping visual cortical neural populations, given receptive field sizes of 5–12 visual degrees (half-width at half-maximum) in primary visual cortex 82. We trained mice to report the horizontal location of the visual stimulus (left/right), independent of its vertical location (high/low). This allowed us to manipulate whether successive stimuli were presented at the same or a different spatial location (lower and upper visual field), even when those stimuli required the same choice (left or right; Fig. 4e). We applied the same session and trial exclusion criteria as for the main task. To analyze the dependence of current choices on the previous trial, we selected trials that were preceded by a correctly identified stimulus on the same side as the current stimulus. We binned trials into those preceded by a low- or high-contrast stimulus (6.25% and 100%), presented at the same or a different vertical location. For each of the four bins, we computed the probability that the mouse repeated the previous choice, averaged across current stimulus contrasts. Because current and previous stimuli were presented on the same side, this repetition probability was larger than 0.5, but could nevertheless be modulated by previous contrast and vertical location. We tested the effects of previous contrast (high/low) and vertical location (same/different) with a 2 × 2 repeated-measures ANOVA. Furthermore, to provide statistical evidence against the hypothesis of spatially-specific sensory adaptation, we conducted a Bayes Factor analysis, quantifying evidence for the one-sided hypothesis that, due to a release from sensory adaptation, mice would be more likely to repeat a previous choice when a previous high-contrast stimulus was presented at a different rather than the same spatial location (Fig. 4g). The Bayes Factor was calculated with a default prior scale of 0.707.

Dopamine recording experiment

To measure dopamine release in the DLS, we employed fiber photometry 83 , 84 . We used a single chronically implanted optical fiber to deliver excitation light and collect emitted fluorescence. We used multiple excitation wavelengths (470 and 415 nm), delivered on alternating frames (sampling rate of 40 Hz), serving as target and isosbestic control wavelengths, respectively. To remove movement and photobleaching artifacts, we subtracted the isosbestic control from the target signal. In particular, for each session, we computed a least-squares linear fit of the isosbestic to the target signal. We subtracted the fitted isosbestic from the target signal and normalized by the fitted isosbestic signal to compute ΔF/F:
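in terms of the raw target signal \(F_{470}\) and the fitted isosbestic signal \(\hat{F}_{415}\) (our notation), this normalization corresponds to

\[ \Delta F / F = \frac{F_{470} - \hat{F}_{415}}{\hat{F}_{415}}. \]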

The resulting signal was further high-pass filtered by subtracting a moving average (25 s averaging window) and z-scored. For the main analyses, we aligned the z-scored ΔF/F to stimulus or reward onset times and baselined the signal to the pre-stimulus period of the current trial (−0.5 to 0 s relative to stimulus onset). To assess whether our results could be explained by a slow carryover of the previous trial’s dopamine response, we conducted an alternative analysis in which we baselined the current trial’s dopamine signal to the previous trial’s pre-stimulus period and assessed the dopamine signal before the onset of the current stimulus.

We trained mice (n = 6) in the same visual decision-making task as described above, with the exception that we increased the inter-trial interval (ITI), sampled from a uniform distribution between 1 and 3 s, to allow the dopamine signal to return to baseline before the next trial. Due to the increased ITI, the median inter-stimulus interval was 5.55 s. To maximize the number of trials, we only presented neutral (random) stimulus sequences.

Reporting summary

Further information on research design is available in the  Nature Portfolio Reporting Summary linked to this article.

Data availability

The behavioral and photometry data generated in this study have been deposited in the Figshare database under accession code https://doi.org/10.6084/m9.figshare.24179829 . The behavioral data of the International Brain Laboratory used in this study are available in the Figshare database under accession code https://doi.org/10.6084/m9.figshare.11636748.v7 . Source data are provided with this paper.

Code availability

The code generated in this study has been deposited in the Figshare database under accession code https://doi.org/10.6084/m9.figshare.24179829 .

References

Akaishi, R., Umeda, K., Nagase, A. & Sakai, K. Autonomous mechanism of internal choice estimate underlies decision inertia. Neuron 81 , 195–206 (2014).

Akrami, A., Kopec, C. D., Diamond, M. E. & Brody, C. D. Posterior parietal cortex represents sensory history and mediates its effects on behaviour. Nature 554 , 368–372 (2018).

Busse, L. et al. The detection of visual contrast in the behaving mouse. J. Neurosci. 31 , 11351–11361 (2011).

Fischer, J. & Whitney, D. Serial dependence in visual perception. Nat. Neurosci. 17 , 738–743 (2014).

Fritsche, M., Mostert, P. & de Lange, F. P. Opposite effects of recent history on perception and decision. Curr. Biol. 27 , 590–595 (2017).

Fründ, I., Wichmann, F. A. & Macke, J. H. Quantifying the effect of intertrial dependence on perceptual decisions. J. Vis. 14 , 9 (2014).

Gold, J. I., Law, C.-T., Connolly, P. & Bennur, S. The relative influences of priors and sensory evidence on an oculomotor decision variable during perceptual learning. J. Neurophysiol. 100 , 2653–2668 (2008).

Hwang, E. J., Dahlen, J. E., Mukundan, M. & Komiyama, T. History-based action selection bias in posterior parietal cortex. Nat. Commun. 8 , 1242 (2017).

Mendonça, A. G. et al. The impact of learning on perceptual decisions and its implication for speed-accuracy tradeoffs. Nat. Commun. 11 , 2757 (2020).

Urai, A. E., Braun, A. & Donner, T. H. Pupil-linked arousal is driven by decision uncertainty and alters serial choice bias. Nat. Commun. 8 , 14637 (2017).

Cho, R. Y. et al. Mechanisms underlying dependencies of performance on stimulus history in a two-alternative forced-choice task. Cogn., Affect., Behav. Neurosci. 2 , 283–299 (2002).

Fan, Y., Gold, J. I. & Ding, L. Ongoing, rational calibration of reward-driven perceptual biases. eLife 7 , e36018 (2018).

Marcos, E. et al. Neural variability in premotor cortex is modulated by trial history and predicts behavioral performance. Neuron 78 , 249–255 (2013).

Tsunada, J., Cohen, Y. & Gold, J. I. Post-decision processing in primate prefrontal cortex influences subsequent choices on an auditory decision-making task. eLife 8 , e46770 (2019).

Yu, A. J. & Cohen, J. D. Sequential effects: superstition or rational behavior? Adv. Neural Inf. Process Syst. 21 , 1873–1880 (2008).

Braun, A., Urai, A. E. & Donner, T. H. Adaptive history biases result from confidence-weighted accumulation of past choices. J. Neurosci. 38 , 2418–2429 (2018).

Abrahamyan, A., Silva, L. L., Dakin, S. C., Carandini, M. & Gardner, J. L. Adaptable history biases in human perceptual decisions. Proc. Natl Acad. Sci. USA 113 , E3548–E3557 (2016).

Hermoso-Mendizabal, A. et al. Response outcomes gate the impact of expectations on perceptual decisions. Nat. Commun. 11 , 1057 (2020).

Burgess, C. P. et al. High-yield methods for accurate two-alternative visual psychophysics in head-fixed mice. Cell Rep. 20 , 2513–2524 (2017).

The International Brain Laboratory et al. Standardized and reproducible measurement of decision-making in mice. eLife 10 , e63711 (2021).

Lak, A. et al. Dopaminergic and prefrontal basis of learning from sensory confidence and reward value. Neuron 105 , 700–711.e6 (2020).

Reinert, S., Hübener, M., Bonhoeffer, T. & Goltstein, P. M. Mouse prefrontal cortex represents learned rules for categorization. Nature 593 , 411–417 (2021).

Pinto, L. et al. An accumulation-of-evidence task using visual pulses for mice navigating in virtual reality. Front. Behav. Neurosci. 12 , 36 (2018).

Schultz, W., Dayan, P. & Montague, P. R. A neural substrate of prediction and reward. Science 275 , 8 (1997).

Cox, J. & Witten, I. B. Striatal circuits for reward learning and decision-making. Nat. Rev. Neurosci. 20 , 482–494 (2019).

Reynolds, J. N. J. & Wickens, J. R. Dopamine-dependent plasticity of corticostriatal synapses. Neural Netw. 15 , 507–521 (2002).

Stauffer, W. R. et al. Dopamine neuron-specific optogenetic stimulation in rhesus macaques. Cell 166 , 1564–1571.e6 (2016).

Parker, N. F. et al. Reward and choice encoding in terminals of midbrain dopamine neurons depends on striatal target. Nat. Neurosci. 19 , 845–854 (2016).

Kim, K. M. et al. Optogenetic mimicry of the transient activation of dopamine neurons by natural reward is sufficient for operant reinforcement. PLoS ONE 7 , e33612 (2012).

Hamid, A. A. et al. Mesolimbic dopamine signals the value of work. Nat. Neurosci. 19 , 117–126 (2016).

Ashwood, Z. C. et al. Mice alternate between discrete strategies during perceptual decision-making. Nat. Neurosci. 25 , 201–212 (2022).

Lak, A. et al. Reinforcement biases subsequent perceptual decisions when confidence is low, a widespread behavioral phenomenon. eLife 9 , e49834 (2020).

Dayan, P. & Daw, N. D. Decision theory, reinforcement learning, and the brain. Cogn. Affect. Behav. Neurosci. 8 , 429–453 (2008).

Rao, R. P. N. Decision making under uncertainty: a neural model based on partially observable Markov decision processes. Front. Comput. Neurosci. 4 , 146 (2010).

Findling, C. et al. Brain-Wide Representations of Prior Information in Mouse Decision-Making . http://biorxiv.org/lookup/doi/10.1101/2023.07.04.547684 , https://doi.org/10.1101/2023.07.04.547684 (2023).

Müller, J. R., Metha, A. B., Krauskopf, J. & Lennie, P. Rapid adaptation in visual cortex to the structure of images. Science 285 , 1405–1408 (1999).

Kohn, A. & Movshon, J. A. Neuronal adaptation to visual motion in area MT of the Macaque. Neuron 39 , 681–691 (2003).

Carandini, M. & Ferster, D. A tonic hyperpolarization underlying contrast adaptation in cat visual cortex. Science 276 , 949–952 (1997).

Fritsche, M., Solomon, S. G. & de Lange, F. P. Brief stimuli cast a persistent long-term trace in visual cortex. J. Neurosci. 42 , 1999–2010 (2022).

Keller, A. J. et al. Stimulus relevance modulates contrast adaptation in visual cortex. eLife 6 , e21589 (2017).

Gardner, J. L. et al. Contrast adaptation and representation in human early visual cortex. Neuron 47 , 607–620 (2005).

Ahmed, B. An intracellular study of the contrast-dependence of neuronal activity in cat visual cortex. Cereb. Cortex 7 , 559–570 (1997).

King, J. L., Lowe, M. P. & Crowder, N. A. Contrast adaptation is spatial frequency specific in mouse primary visual cortex. Neuroscience 310 , 198–205 (2015).

King, J. L., Lowe, M. P., Stover, K. R., Wong, A. A. & Crowder, N. A. Adaptive processes in thalamus and cortex revealed by silencing of primary visual cortex during contrast adaptation. Curr. Biol. 26 , 1295–1300 (2016).

Gibson, J. J. & Radner, M. Adaptation, after-effect and contrast in the perception of tilted lines. I. Quantitative studies. J. Exp. Psychol. 20 , 453–467 (1937).

Anstis, S., Verstraten, F. A. J. & Mather, G. The motion aftereffect. Trends Cogn. Sci. 2 , 111–117 (1998).

Webster, M. A. & Mollon, J. D. Changes in colour appearance following post-receptoral adaptation. Nature 349 , 235–238 (1991).

Thompson, P. & Burr, D. Visual aftereffects. Curr. Biol. 19 , R11–R14 (2009).

Webster, M. A. Visual adaptation. Annu. Rev. Vis. Sci. 1 , 547–567 (2015).

Blakemore, C. & Campbell, F. W. On the existence of neurones in the human visual system selectively sensitive to the orientation and size of retinal images. J. Physiol. 203 , 237–260 (1969).

Snowden, R. J. & Hammett, S. T. Spatial frequency adaptation: threshold elevation and perceived contrast. Vis. Res. 36 , 1797–1809 (1996).

Lak, A., Nomoto, K., Keramati, M., Sakagami, M. & Kepecs, A. Midbrain dopamine neurons signal belief in choice accuracy during a perceptual decision. Curr. Biol. 27 , 821–832 (2017).

Moss, M. M., Zatka-Haas, P., Harris, K. D., Carandini, M. & Lak, A. Dopamine axons in dorsal striatum encode contralateral visual stimuli and choices. J. Neurosci. https://doi.org/10.1523/JNEUROSCI.0490-21.2021 (2021).

Tsutsui-Kimura, I. et al. Distinct temporal difference error signals in dopamine axons in three regions of the striatum in a decision-making task. eLife 9 , e62390 (2020).

Sarno, S., De Lafuente, V., Romo, R. & Parga, N. Dopamine reward prediction error signal codes the temporal evaluation of a perceptual decision report. Proc. Natl Acad. Sci. USA 114 , E10494–E10503 (2017).

Balleine, B. W., Delgado, M. R. & Hikosaka, O. The role of the dorsal striatum in reward and decision-making. J. Neurosci. 27 , 8161–8165 (2007).

Sun, F. et al. Next-generation GRAB sensors for monitoring dopaminergic activity in vivo. Nat. Methods 17 , 1156–1166 (2020).

Khibnik, L. A., Tritsch, N. X. & Sabatini, B. L. A direct projection from mouse primary visual cortex to dorsomedial striatum. PLoS ONE 9 , e104501 (2014).

Hunnicutt, B. J. et al. A comprehensive excitatory input map of the striatum reveals novel functional organization. eLife 5 , e19103 (2016).

Del Río, M., De Lange, F. P., Fritsche, M. & Ward, J. Perceptual confirmation bias and decision bias underlie adaptation to sequential regularities. J. Vis. 24 , 5 (2024).

Kim, S., Hwang, J., Seo, H. & Lee, D. Valuation of uncertain and delayed rewards in primate prefrontal cortex. Neural Netw. 22 , 294–304 (2009).

Cohen, J. Y., Haesler, S., Vong, L., Lowell, B. B. & Uchida, N. Neuron-type-specific signals for reward and punishment in the ventral tegmental area. Nature 482 , 85–88 (2012).

Schmack, K., Bosc, M., Ott, T., Sturgill, J. F. & Kepecs, A. Striatal dopamine mediates hallucination-like perception in mice. Science 372 , eabf4740 (2021).

Bang, D. et al. Sub-second dopamine and serotonin signaling in human striatum during perceptual decision-making. Neuron 108 , 999–1010.e6 (2020).

Nakahara, H., Itoh, H., Kawagoe, R., Takikawa, Y. & Hikosaka, O. Dopamine neurons can represent context-dependent prediction error. Neuron 41 , 269–280 (2004).

Albin, R. L., Young, A. B. & Penney, J. B. The functional anatomy of basal ganglia disorders. Trends Neurosci. 12 , 366–375 (1989).

Gerfen, C. R. The neostriatal mosaic: multiple levels of compartmental organization in the basal ganglia. Annu. Rev. Neurosci. 15 , 285–320 (1992).

DeLong, M. R. Primate models of movement disorders of basal ganglia origin. Trends Neurosci. 13 , 281–285 (1990).

Tai, L.-H., Lee, A. M., Benavidez, N., Bonci, A. & Wilbrecht, L. Transient stimulation of distinct subpopulations of striatal neurons mimics changes in action value. Nat. Neurosci. 15 , 1281–1289 (2012).

Bosch, E., Fritsche, M., Ehinger, B.V. & de Lange, F. P. Opposite Effects of Choice History and Stimulus History Resolve a Paradox of Sequential Choice Bias . http://biorxiv.org/lookup/doi/10.1101/2020.02.14.948919 , https://doi.org/10.1101/2020.02.14.948919 (2020).

Suárez-Pinilla, M., Seth, A. K. & Roseboom, W. Serial dependence in the perception of visual variance. J. Vis. 18 , 4 (2018).

Samaha, J., Switzky, M. & Postle, B. R. Confidence boosts serial dependence in orientation estimation. J. Vis. 19 , 25 (2019).

van Bergen, R. S. & Jehee, J. F. M. Probabilistic representation in human visual cortex reflects uncertainty in serial decisions. J. Neurosci. 39 , 8164–8176 (2019).

Fritsche, M., Spaak, E. & de Lange, F. P. A Bayesian and Efficient Observer Model Explains Concurrent Attractive and Repulsive History Biases in Visual Perception . http://biorxiv.org/lookup/doi/10.1101/2020.01.22.915553 , https://doi.org/10.1101/2020.01.22.915553 (2020).

Cicchini, G. M., Mikellidou, K. & Burr, D. C. The functional role of serial dependence. Proc. R. Soc. Lond. B 285 , 20181722 (2018).

Schwiedrzik, C. M. et al. Untangling perceptual memory: hysteresis and adaptation map into separate cortical networks. Cereb. Cortex 24 , 1152–1164 (2014).

Meyniel, F., Maheu, M. & Dehaene, S. Human inferences about sequences: a minimal transition probability model. PLoS Comput Biol. 12 , e1005260 (2016).

Bhagat, J., Wells, M. J., Harris, K. D., Carandini, M. & Burgess, C. P. Rigbox: an open-source toolbox for probing neurons and behavior. eNeuro 7 , ENEURO.0406–19.2020 (2020).

Friedman, J., Hastie, T. & Tibshirani, R. Regularization paths for generalized linear models via coordinate descent. J. Stat. Softw. 33 , 1–22 (2010).

Bogacz, R. & Cohen, J. D. Parameterization of connectionist models. Behav. Res. Methods Instrum. Comput, 36 , 732–741 (2004).

Acerbi, L. & Ma, W. J. Practical Bayesian optimization for model fitting with Bayesian adaptive direct search. Adv. Neural Inf. Process. Syst. 30 , 1834–1844 (2017).

Niell, C. M. & Stryker, M. P. Highly selective receptive fields in mouse visual cortex. J. Neurosci. 28 , 7520–7536 (2008).

Gunaydin, L. A. et al. Natural neural projection dynamics underlying social behavior. Cell 157 , 1535–1551 (2014).

Lerner, T. N. et al. Intact-brain analyses reveal distinct information carried by SNc dopamine subcircuits. Cell 162 , 635–647 (2015).

Acknowledgements

This research was supported by the following grants: NWO Rubicon Fellowship (019.211EN.006) and HFSP Long-Term Fellowship (LT0045/2022-L) to M.F.; BBSRC (BB/S006338/1) and MRC (MC_UU_00003/1) to R.B.; Sir Henry Dale Fellowship from the Wellcome Trust and Royal Society (213465/Z/18/Z) to A.L. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.

Author information

Authors and Affiliations

Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, UK

Matthias Fritsche, Antara Majumdar, Lauren Strickland, Samuel Liebana Garcia & Armin Lak

Institute of Behavioral Neuroscience, University College London, London, UK

Lauren Strickland

MRC Brain Network Dynamics Unit, University of Oxford, Oxford, UK

Rafal Bogacz

Contributions

M.F. and A.L. conceived and designed the study. M.F., A.M., and L.S. performed the experiments and acquired the data. M.F. performed the formal analysis, with inputs from S.L.G., R.B., and A.L. M.F. and A.L. interpreted the results, with contributions from S.L.G. and R.B. M.F. and A.L. wrote the manuscript, with valuable revisions from all authors.

Corresponding authors

Correspondence to Matthias Fritsche or Armin Lak .

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. A peer review file is available.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Fritsche, M., Majumdar, A., Strickland, L. et al. Temporal regularities shape perceptual decisions and striatal dopamine signals. Nat Commun 15 , 7093 (2024). https://doi.org/10.1038/s41467-024-51393-8

Received: 21 March 2024

Accepted: 05 August 2024

Published: 17 August 2024

DOI: https://doi.org/10.1038/s41467-024-51393-8
