
User Research – The Importance of Hypotheses

It is tempting to look at the objectives of your user research and pump out a solution that fits your best idea of how to achieve them. Experienced professionals can be quite good at that; then again, they can also be very bad at it.

It is better to take your objectives, generate some hypotheses from them, and then test those hypotheses with your users before turning them into concrete action. This gives you (and hopefully your clients) more confidence in your ideas, or it highlights the need to change hypotheses that don’t hold up in reality.

Let’s say that your objective is to create a network where people can access short parts of a full text (say, a chapter) before they decide whether or not to buy it, rather like Amazon does.


You can create some simple hypotheses around this objective with a few minutes of brainstorming.

User-Attitude

We think that people would like to share their favourite clips with others on Facebook and Twitter.

User-Behaviour

We think that people will only share their favourite authors and books. They won’t share things that aren’t important to them.

User-Social Context

We think that people will be more likely to share their favourite authors and books if they are already popular with other users.

Why does this matter?

One of the things about design projects is that a group of intelligent, able and enthusiastic developers, stakeholders, etc. will all bring their own biases and understanding to the table when determining the objectives for a project. Those objectives may be completely sound, but the only way to know is to test the ideas behind them with your users.


You cannot force a user to meet your objectives. You have to shape your objectives to what a user wants/needs to do with your product.

What happens to our product if our users don’t want to share their reading material with others? What if they feel that Facebook, Twitter, etc. are platforms where they want to share images and videos but not large amounts of text?


If you generate hypotheses for your user research, you can test them at the relevant stage of research. The benefits include:

  • Articulating a hypothesis makes it easy for your team to be sure that you’re testing the right thing.
  • Articulating a hypothesis often suggests a quick way to test it.
  • It is easy to communicate the results of your research against these hypotheses. For example:
      • We thought people would want to share their favourite authors on social networks, and they did.
      • We believed that the popularity of an author would relate to their “sharability”, but we found that most readers wanted to emphasize their own unique taste and were more likely to share obscure but moving works than those already in the public eye.

Header Image: Author/Copyright holder: Dave. Copyright terms and licence: CC BY-NC-ND 2.0


Creating a research hypothesis: How to formulate and test UX expectations

User Research | Mar 21, 2024 | Armin Tanovic

A research hypothesis helps guide your UX research with focused predictions you can test and learn from. Here’s how to formulate your own hypotheses.

All great products were once just thoughts—the spark of an idea waiting to be turned into something tangible.

A research hypothesis in UX is very similar. It’s the starting point for your user research; the jumping-off point for your product development initiatives.

Formulating a UX research hypothesis helps you steer your UX research project in the right direction, collect insights, and evaluate not only whether an idea is worth pursuing, but how to go after it.

In this article, we’ll cover what a research hypothesis is, how it's relevant to UX research, and the best formula to create your own hypothesis and put it to the test.


What defines a research hypothesis?

A research hypothesis is a statement or prediction that needs testing to be proven or disproven.

Let’s say you’ve got an inkling that making a change to a feature icon will increase the number of users that engage with it—with some minor adjustments, this theory becomes a research hypothesis: “Adjusting Feature X’s icon will increase daily average users by 20%.”

A research hypothesis is the starting point that guides user research . It takes your thought and turns it into something you can quantify and evaluate. In this case, you could conduct usability tests and user surveys, and run A/B tests to see if you’re right—or, just as importantly, wrong .

A good research hypothesis has three main features:

  • Specificity: A hypothesis should clearly define what variables you’re studying and what you expect an outcome to be, without ambiguity in its wording
  • Relevance: A research hypothesis should have significance for your research project by addressing a potential opportunity for improvement
  • Testability: Your research hypothesis must be testable in some way, such as through empirical observation or data collection

What is the difference between a research hypothesis and a research question?

Research questions and research hypotheses are often treated as one and the same, but they’re not quite identical.

A research hypothesis acts as a prediction or educated guess of outcomes, while a research question poses a query on the subject you’re investigating. Put simply, a research hypothesis is a statement, whereas a research question is (you guessed it) a question.

For example, here’s a research hypothesis: “Implementing a navigation bar on our dashboard will improve customer satisfaction scores by 10%.”

This statement acts as a testable prediction. It doesn’t pose a question; it makes a prediction. Here’s what the same hypothesis would look like as a research question: “Will integrating a navigation bar on our dashboard improve customer satisfaction scores?”

The distinction is minor, and both are focused on uncovering the truth behind the topic, but they’re not quite the same.

Why do you use a research hypothesis in UX?

Research hypotheses in UX are used to establish the direction of a particular study, research project, or test. Formulating a hypothesis and testing it ensures the UX research you conduct is methodical, focused, and actionable. It aids every phase of your research process , acting as a north star that guides your efforts toward successful product development .

Typically, UX researchers will formulate a testable hypothesis to help them fulfill a broader objective, such as improving customer experience or product usability. They’ll then conduct user research to gain insights into their prediction and confirm or reject the hypothesis.

A proven or disproven hypothesis will tell you whether your prediction is right, and whether you should move forward with your proposed design—or if it’s back to the drawing board.

Formulating a hypothesis can be helpful in anything from prototype testing to idea validation and design iteration. Put simply, it’s one of the first steps in conducting user research.

Whether you’re in the initial stages of product discovery for a new product or a single feature, or conducting ongoing research, a strong hypothesis presents a clear purpose and angle for your research. It also helps you understand which user research methodology to use to get your answers.

What are the types of research hypotheses?

Not all hypotheses are built the same—there are different types with different objectives. Understanding the different types enables you to formulate a research hypothesis that outlines the angle you need to take to prove or disprove your predictions.

Here are some of the different types of hypotheses to keep in mind.

Null and alternative hypotheses

While a normal research hypothesis predicts that a specific outcome will occur based upon a certain change of variables, a null hypothesis predicts that no difference will occur when you introduce a new condition.

By that reasoning, a null hypothesis would be:

  • Adding a new CTA button to the top of our homepage will make no difference in conversions

Null hypotheses are useful because they help outline what your test or research study is trying to disprove, rather than prove, through a research hypothesis.

An alternative hypothesis states the exact opposite of a null hypothesis. It proposes that a certain change will occur when you introduce a new condition or variable. For example:

  • Adding a CTA button to the top of our homepage will cause a difference in conversion rates

Simple hypotheses and complex hypotheses

A simple hypothesis is a prediction that includes only two variables in a cause-and-effect sequence, with one variable dependent on the other. It predicts that you'll achieve a particular outcome based on a certain condition. The outcome is known as the dependent variable and the change causing it is the independent variable .

For example, this is a simple hypothesis:

  • Including the search function on our mobile app will increase user retention

The expected outcome of increasing user retention is based on the condition of including a new search function. But what happens when there are more than two factors at play?

We get what’s called a complex hypothesis. Instead of a simple condition and outcome, complex hypotheses include multiple results. This makes them a perfect research hypothesis type for framing complex studies or tracking multiple KPIs based on a single action.

Building upon our previous example, a complex research hypothesis could be:

  • Including the search function on our mobile app will increase user retention and boost conversions

Directional and non-directional hypotheses

Research hypotheses can also differ in the specificity of outcomes. Put simply, any hypothesis that has a specific outcome or direction based on the relationship of its variables is a directional hypothesis . That means that our previous example of a simple hypothesis is also a directional hypothesis.

Non-directional hypotheses don’t specify the outcome or difference the variables will see. They just state that a difference exists. Following our example above, here’s what a non-directional hypothesis would look like:

  • Including the search function on our mobile app will make a difference in user retention

In this non-directional hypothesis, the direction of difference (increase/decrease) hasn’t been specified; we’ve just noted that there will be a difference.
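Although the article doesn’t go into the statistics, the directional/non-directional distinction has a standard statistical counterpart: a directional hypothesis is usually evaluated with a one-tailed test, a non-directional one with a two-tailed test. Here is a minimal Python sketch using statsmodels, with invented retention counts for the search-function example:

```python
# Hypothetical data: retained users out of 500 in each cohort
# (with the new search function vs. without it).
from statsmodels.stats.proportion import proportions_ztest

retained = [230, 198]
cohorts = [500, 500]

# Non-directional hypothesis ("retention will differ") -> two-sided test.
_, p_two = proportions_ztest(retained, cohorts, alternative="two-sided")

# Directional hypothesis ("retention will increase") -> one-sided test.
_, p_one = proportions_ztest(retained, cohorts, alternative="larger")

print(f"two-sided p = {p_two:.4f}, one-sided p = {p_one:.4f}")
# The one-sided p-value is half the two-sided one, which is why a
# directional hypothesis should be justified before the data comes in.
```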

The type of hypothesis you write helps guide your research—let’s get into it.

How to write and test your UX research hypothesis

Now that we’ve covered the types of research hypotheses, it’s time to get practical.

Creating your research hypothesis is the first step in conducting successful user research.

Here are the four steps for writing and testing a UX research hypothesis to help you make informed, data-backed decisions for product design and development.

1. Formulate your hypothesis

Start by writing out your hypothesis in a way that’s specific and relevant to a distinct aspect of your user or product experience. Meaning: your prediction should include a design choice followed by the outcome you’d expect—this is what you’re looking to validate or reject.

Your proposed research hypothesis should also be testable through user research data analysis. There’s little point in a hypothesis you can’t test!

Let’s say your focus is your product’s user interface—and how you can improve it to better meet customer needs. A research hypothesis in this instance might be:

  • Adding a settings tab to the navigation bar will improve usability

By writing out a research hypothesis in this way, you’re able to conduct relevant user research to prove or disprove your hypothesis. You can then use the results of your research—and the validation or rejection of your hypothesis—to decide whether or not you need to make changes to your product’s interface.

2. Identify variables and choose your research method

Once you’ve got your hypothesis, you need to map out how exactly you’ll test it. Consider what variables relate to your hypothesis. In our case, the independent variable is the addition of a settings tab to the navigation bar.

Once you’ve defined the relevant variables, you’re in a better position to decide on the best UX research method for the job. If you’re after metrics that signal improvement, you’ll want to select a method yielding quantifiable results—like usability testing . If your outcome is geared toward what users feel, then research methods for qualitative user insights, like user interviews , are the way to go.

3. Carry out your study

It’s go time. Now you’ve got your hypothesis, identified the relevant variables, and outlined your method for testing them, you’re ready to run your study. This step involves recruiting participants for your study and reaching out to them through relevant channels like email, live website testing , or social media.

Given our hypothesis, our best bet is to conduct A/B and usability tests with a prototype that includes the additional UI elements, then compare the usability metrics to see whether users find navigation easier with or without the settings button.

We can also follow up with UX surveys to get qualitative insights and ask users how they found the task, what they preferred about each design, and to see what additional customer insights we uncover.

💡 Want more insights from your usability tests? Maze Clips enables you to gather real-time recordings and reactions of users participating in usability tests .

4. Analyze your results and compare them to your hypothesis

By this point, you’ve neatly outlined a hypothesis, chosen a research method, and carried out your study. It’s now time to analyze your findings and evaluate whether they support or reject your hypothesis.

Look at the data you’ve collected and what it means. Since we conducted usability testing, we’ll want to look at some key usability metrics for an indication of whether the additional settings button improves usability.

For example, with the usability task of ‘In account settings, find your profile and change your username’, we can conduct task analysis to compare the time spent on task and misclick rates of the new design with the same metrics from the old design.
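To make that comparison concrete, here is a hedged sketch in Python of how you might test the time-on-task difference; the task times are invented, and the Mann-Whitney U test is one reasonable default for skewed timing data rather than a method the article prescribes:

```python
from scipy.stats import mannwhitneyu

# Hypothetical time-on-task samples, in seconds, for the same task.
old_design = [48, 62, 55, 71, 90, 44, 66, 58]
new_design = [31, 40, 28, 52, 35, 47, 39, 30]

# One-sided test: are times on the old design stochastically greater?
stat, p_value = mannwhitneyu(old_design, new_design, alternative="greater")
print(f"U = {stat:.0f}, p = {p_value:.4f}")
# A small p-value supports the directional hypothesis that the settings
# tab reduces time on task; misclick rates can be compared the same way.
```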

If you also conduct follow-up surveys or interviews, you can ask users directly about their experience and analyze their answers to gather additional qualitative data . Maze AI can handle the analysis automatically, but you can also manually read through responses to get an idea of what users think about the change.

By comparing the findings to your research hypothesis, you can identify whether your research accepts or rejects your hypothesis. If the majority of users struggled to find the settings page in usability tests but had a higher success rate with your new prototype, you’ve proved the hypothesis.

However, it's also crucial to acknowledge if the findings refute your hypothesis rather than prove it as true. Ruling something out is just as valuable as confirming a suspicion.

In either case, make sure to draw conclusions based on the relationship between the variables and store findings in your UX research repository . You can conduct deeper analysis with techniques like thematic analysis or affinity mapping .

UX research hypotheses: four best practices to guide your research

Knowing the big steps for formulating and testing a research hypothesis ensures that your next UX research project gives you focused, impactful results and insights. But, that’s only the tip of the research hypothesis iceberg. There are some best practices you’ll want to consider when using a hypothesis to test your UX design ideas.

Here are four research hypothesis best practices to help guide testing and make your UX research systematic and actionable.

Align your hypothesis to broader business and UX goals

Before you begin to formulate your hypothesis, be sure to pause and think about how it connects to broader goals in your UX strategy . This ensures that your efforts and predictions align with your overarching design and development goals.

For example, implementing a brand new navigation menu for current account holders might work for usability, but if the wider team is focused on boosting conversion rates for first-time site viewers, there might be a different research project to prioritize.

Create clear and actionable reports for stakeholders

Once you’ve conducted your testing and proved or disproved your hypothesis, UX reporting and analysis is the next step. You’ll need to present your findings to stakeholders in a way that's clear, concise, and actionable. If your hypothesis insights come in the form of metrics and statistics, then quantitative data visualization tools and reports will help stakeholders understand the significance of your study, while setting the stage for design changes and solutions.

If you went with a research method like user interviews, a narrative UX research report including key themes and findings, proposed solutions, and your original hypothesis will help inform your stakeholders on the best course of action.

Consider different user segments

While getting enough responses is crucial for proving or disproving your hypothesis, you’ll want to consider which users will give you the highest-quality and most relevant responses. Remember to consider user personas—e.g. if you’re only introducing a change for premium users, exclude users who are on a free trial of your product from testing.

You can recruit and target specific user demographics with the Maze Panel —which enables you to search for and filter participants that meet your requirements. Doing so allows you to better understand how different users will respond to your hypothesis testing. It also helps you uncover specific needs or issues different users may have.

Involve stakeholders from the start

Before testing or even formulating a research hypothesis by yourself, ensure all your stakeholders are on board. Informing everyone of your plan to formulate and test your hypothesis does three things:

Firstly, it keeps your team in the loop . They’ll be able to inform you of any relevant insights, special considerations, or existing data they already have about your particular design change idea, or KPIs to consider that would benefit the wider team.

Secondly, informing stakeholders ensures seamless collaboration across multiple departments . Together, you’ll be able to fit your testing results into your overall CX strategy , ensuring alignment with business goals and broader objectives.

Finally, getting everyone involved enables them to contribute potential hypotheses to test . You’re not the only one with ideas about what changes could positively impact the user experience, and keeping everyone in the loop brings fresh ideas and perspectives to the table.

Test your UX research hypotheses with Maze

Formulating and testing out a research hypothesis is a great way to define the scope of your UX research project clearly. It helps keep research on track by providing a single statement to come back to and anchor your research in.

Whether you run usability tests or user interviews to assess your hypothesis—Maze's suite of advanced research methods enables you to get the in-depth user and customer insights you need.

Frequently asked questions about research hypotheses

What is the difference between a hypothesis and a problem statement in UX?

A problem statement identifies a specific issue in your design that you intend to solve; it typically includes a user persona, an issue they have, and a desired outcome they need. A research hypothesis, on the other hand, describes a prediction about how that problem might be solved.

How many hypotheses should a UX research problem have?

Technically, there is no limit to the number of hypotheses you can have for a certain problem or study. However, you should limit it to one hypothesis per specific issue in UX research. This ensures that you can conduct focused testing and reach clear, actionable results.


UX Research: Objectives, Assumptions, and Hypothesis

by Rick Dzekman

An often neglected step in UX research

Introduction

UX research should always be done for a clear purpose – otherwise you’re wasting both your own time and your participants’ time. But many people who do UX research fail to properly articulate that purpose in their research objectives. A major issue is that the research objectives include assumptions that have not been properly defined.

When planning UX research you have some goal in mind:

  • For generative research it’s usually to find out something about users or customers that you previously did not know
  • For evaluative research it’s usually to identify any potential issues in a solution

As part of this goal you write down research objectives that help you achieve it. But many researchers (especially more junior ones) miss some key steps:

  • How will those research objectives help to reach that goal?
  • What assumptions have you made that are necessary for those objectives to reach that goal?
  • How does your research (questions, tasks, observations, etc.) help meet those objectives?
  • What kind of responses or observations do you need from your participants to meet those objectives?

Research objectives map to goals but that mapping requires assumptions. Each objective is broken down into sub-objectives which should lead to questions, tasks, or observations. The questions we ask in our research should map to some research objective and help reach the goal.

One approach people use is to write their objectives in the form of research hypotheses. There are a lot of problems with trying to validate a hypothesis through qualitative research, and sometimes even through quantitative research.

This article focuses largely on qualitative research: interviews, user tests, diary studies, ethnographic research, etc. With qualitative research in mind, let’s start by taking a look at a few examples of UX research hypotheses and how they may be problematic.

Research hypothesis

Example hypothesis: users want to be able to filter products by colour.

At first it may seem that there are a number of ways to test this hypothesis with qualitative research. For example we might:

  • Observe users shopping on sites with and without colour filters and see whether or not they use them
  • Ask users who are interested in our products about how they narrow down their choices
  • Run a diary study where participants document the ways they narrowed down their searches on various stores
  • Make a prototype with colour filters and see if participants use them unprompted

These approaches are all useful, but they do not and cannot prove or disprove our hypothesis. It’s not that the research methods are ineffective; it’s that the hypothesis itself is poorly expressed.

The first problem is that there are hidden assumptions made by this hypothesis. Presumably we would be doing this research to decide between a choice of possible filters we could implement. But there’s no obvious link between users wanting to filter by colour and a benefit from us implementing a colour filter. Users may say they want it but how will that actually benefit their experience?

The second problem with this hypothesis is that we’re asking a question about “users” in general. How many users would have to want colour filters before we could say that this hypothesis is true?

Example Hypothesis: Adding a colour filter would make it easier for users to find the right products

This is an obvious improvement to the first example but it still has problems. We could of course identify further assumptions but that will be true of pretty much any hypothesis. The problem again comes from speaking about users in general.

Perhaps if we add the ability to filter by colour it might make the possible filters crowded and make it more difficult for users who don’t need colour to find the filter that they do need. Perhaps there is a sample bias in our research participants that does not apply broadly to our user base.

It is difficult (though not impossible) to design research that could prove or disprove this hypothesis. Any such research would have to be quantitative in nature. And we would have to spend time mapping out what it means for something to be “easier” or what “the right products” are.

Example Hypothesis: Travelers book flights before they book their hotels

The problem with this hypothesis should now be obvious: what would it actually mean for this hypothesis to be proved or disproved? What portion of travelers would need to book their flights first for us to consider this true?

Example Hypothesis: Most users who come to our app know where and when they want to fly

This hypothesis is better because it talks about “most users” rather than users in general. “Most” would need to be better defined but at least this hypothesis is possible to prove or disprove.

We could address this hypothesis with quantitative research. If we found out that it was true we could focus our design around the primary use case or do further research about how to attract users at different stages of their journey.

However, there is no clear way to prove or disprove this hypothesis with qualitative research. If the app has a million users and 15 of 20 research participants tell you that this is true, would your findings generalise to the entire user base? The margin of error on that finding is 20–25%, meaning that the true result could be closer to 50% or even 100% depending on how unlucky you are with your sample.
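To see where a margin of error like that comes from, here is a small self-contained Python sketch of the arithmetic, using the Wilson score interval for a binomial proportion (one standard choice among several; the exact figures depend on the method used):

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

low, high = wilson_interval(15, 20)  # 15 of 20 participants agreed
print(f"Observed: 75%, 95% interval: {low:.0%} to {high:.0%}")
# Prints roughly 53% to 89% - far too wide to generalise to a million users.
```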

Example Hypothesis: Customers want their bank to help them build better savings habits

There are many things wrong with this hypothesis but we will focus on the hidden assumptions and the links to design decisions. Two big assumptions are that (1) it’s possible to find out what research participants want and (2) people’s wants should dictate what features or services to provide.

Research objectives

One of the biggest problems with using hypotheses is that they set the wrong expectations about what your research results are telling you. In Thinking, Fast and Slow, Daniel Kahneman points out that:

  • “extreme outcomes (both high and low) are more likely to be found in small than in large samples”
  • “the prominence of causal intuitions is a recurrent theme in this book because people are prone to apply causal thinking inappropriately, to situations that require statistical reasoning”
  • “when people believe a conclusion is true, they are also very likely to believe arguments that appear to support it, even when these arguments are unsound”

Using a research hypothesis primes us to think that we have found some fundamental truth about user behaviour from our qualitative research. This leads to overconfidence about what the research is saying, and to poor-quality research that could simply have been skipped in favour of making an assumption. To once again quote Kahneman: “you do not believe that these results apply to you because they correspond to nothing in your subjective experience”.

We can fix these problems by instead putting our focus on research objectives. We pay attention to the reason that we are doing the research and work to understand if the results we get could help us with our objectives.

This does not get us off the hook however because we can still create poor research objectives.

Let’s look back at one of our prior hypothesis examples and try to find effective research objectives instead.

Example objectives: deciding on filters

In thinking about the colour filter we might imagine that this fits into a larger project where we are trying to decide what filters we should implement. This is decidedly different research to trying to decide what order to implement filters in or understand how they should work. In this case perhaps we have limited resources and just want to decide what to implement first.

A good approach would be quantitative research designed to produce some sort of ranking. But we should not dismiss qualitative research for this particular project – provided our assumptions are well defined.

Let’s consider this research objective: Understand how users might map their needs against the products that we offer. There are three key aspects to this objective:

  • “Understand” is a common form of research objective and is a way that qualitative research can discover things that we cannot find with quant. If we don’t yet understand some user attitude or behaviour we cannot quantify it. By focusing our objective on understanding we are looking at uncovering unknowns.
  • By using the word “might” we are not definitively stating that our research will reveal all of the ways that users think about their needs.
  • Our focus is on understanding the users’ mental models. That way we are not designing for what users say they want, and we aren’t even designing for existing behaviour. Instead we are designing for some underlying need.

The next step is to look at the assumptions that we are making. One assumption is that mental models are roughly the same between most people: even though different users may have different problems, for the most part people tend to think about solving them with the same mental machinery. As we do more research we might discover that this assumption is not true and there are distinctly different kinds of behaviours. Perhaps we know what those are in advance and we can recruit our research participants in a way that covers those distinct behaviours.

Another assumption is that if we understand our users’ mental models, we will be able to design a solution that will work for most people. There are of course more assumptions we could map, but this is a good start.

Now let’s look at another research objective: Understand why users choose particular filters. Again we are looking to understand something that we did not know before.

Perhaps we have some prior research that tells us what the biggest pain points are that our products solve. If we have an understanding of why certain filters are used we can think about how those motivations fit in with our existing knowledge.

Mapping objectives to our research plan

Our actual research will involve some form of asking questions and/or making observations. It’s important that we don’t simply forget about our research objectives and start writing questions. Otherwise we risk completing the research and realising that we haven’t captured anything about some specific objective.

An important step is to explicitly write down all the assumptions that we are making in our research and to update those assumptions as we write our questions or instructions. These assumptions will help us frame our research plan and make sure that we are actually learning the things that we think we are learning. Consider even high-level assumptions, such as that a solution we design with these insights will lead to a better experience, or that a better experience is necessarily better for the user.

Once we have our main assumptions defined the next step is to break our research objective down further.

Breaking down our objectives

The best way to consider this breakdown is to think about what things we could learn that would contribute to meeting our research objective. Let’s consider one of the previous examples: Understand how users might map their needs against the products that we offer.

We may have an assumption that users do in fact have some mental representation of their needs that aligns with the products they might purchase. An aspect of this research objective is to understand whether or not this is true. So two sub-objectives may be to (1) understand why users actually buy these sorts of products (if at all), and (2) understand how users go about choosing which product to buy.

Next we might want to understand what our users’ needs actually are or, if we already have research about this, which particular needs apply to our research participants and why.

And finally we would want to understand what factors go into addressing a particular need. We may leave this open ended or even show participants attributes of the products and ask which ones address those needs and why.

Once we have a list of sub-objectives we could continue to drill down until we feel we’ve exhausted all the nuances. If we’re happy with our objectives the next step is to think about what responses (or observations) we would need in order to answer those objectives.

It’s still important that we ask open ended questions and see what our participants say unprompted. But we also don’t want our research to be so open that we never actually make any progress on our research objectives.

Reviewing our objectives and pilot studies

At the end it’s important to review every task, question, scenario, etc. and see which research objectives are being addressed. This is vital to make sure that your planning is worthwhile and that you haven’t missed anything.

If there’s time it’s also useful to run a pilot study and analyse the responses to see if they help to address your objectives.

Plan accordingly

It should be easy to see why research hypotheses are not suitable for most qualitative research. While it is possible to create a suitable hypothesis, doing so will more often than not lead to poor-quality research. This is because hypotheses create the impression that qualitative research can find things that generalise to the entire user base. In general this is not true for the sample sizes typically used in qualitative research, and it is also generally not the reason we do qualitative research in the first place.

Instead we should focus on producing effective research objectives and making sure every part of our research plan maps to a suitable objective.

Hypothesis Testing in the User Experience

It’s something we have all completed and, if you have kids, might see each year at the school science fair.

  • Does an expensive baseball travel farther than a cheaper one?
  • Which melts an ice block quicker, salt water or tap water?
  • Does changing the amount of vinegar affect the color when dyeing Easter eggs?

While the science project might be relegated to the halls of elementary schools or your fading childhood memory, it provides an important lesson for improving the user experience.

The science project provides us with a template for designing a better user experience. Form a clear hypothesis, identify metrics, and collect data to see if there is evidence to refute or confirm it. Hypothesis testing is at the heart of modern statistical thinking and a core part of the Lean methodology .

Instead of approaching design decisions with pure instinct and arguments in conference rooms, form a testable statement, invite users, define metrics, collect data and draw a conclusion.

  • Does requiring the user to enter an email address twice result in more valid email addresses?
  • Will labels on the top of form fields or the left of form fields reduce the time to complete the form?
  • Does requiring the last four digits of your Social Security Number improve application rates over asking for a full SSN?
  • Do users have more trust in the website if we include the McAfee security symbol or the Verisign symbol?
  • Do more users make purchases if the checkout button is blue or red?
  • Does a single long form generate more form submissions than splitting the form across three smaller pages?
  • Will users find items faster using mega menu navigation or standard drop-down navigation?
  • Does the number of monthly invoices a small business sends affect which payment solution they prefer?
  • Do mobile users prefer to download an app to shop for furniture or use the website?

Each of the above questions is testable, and each represents a real example. It’s best to have as specific a hypothesis as possible and isolate the variable of interest. Many of these hypotheses can be tested with a simple A/B test, unmoderated usability test, survey, or some combination of them all.

Even before you collect any data, there is an immediate benefit gained from forming hypotheses. It forces you and your team to think through the assumptions in your designs and business decisions. For example, many registration systems require users to enter their email address twice. If an email address is wrong, in many cases a company has no communication with a prospective customer.

Requiring two email fields would presumably reduce the number of mistyped email addresses. But just as legislation can have unintended consequences, so can rules in the user interface. Do users just copy and paste their email, thus negating the double fields? If you then disable pasting of email addresses into the field, does this lead to more form abandonment and fewer customers overall?

With a clear hypothesis to test, the next step involves identifying metrics that help quantify the experience . Like most tests, you can use a simple binary metric (yes/no, pass/fail, convert/didn’t convert). For example, you could collect how many users registered using the double email vs. the single email form, how many submitted using the last four numbers of their SSN vs. the full SSN, and how many found an item with the mega menu vs. the standard menu.

Binary metrics are simple, but they usually can’t fully describe the experience. This is why we routinely collect multiple metrics, both performance and attitudinal. You can measure the time it takes users to submit alternate versions of the forms, or the time it takes to find items using different menus. Rating scales and forced ranking questions are good ways of measuring preferences for downloading apps or choosing a payment solution.

With a clear research hypothesis and some appropriate metrics, the next steps involve collecting data from the right users and analyzing the data statistically to test the hypothesis. Technically, we rework our research hypothesis into what’s called the Null Hypothesis, then look for evidence against the Null Hypothesis, usually in the form of the p-value. This is of course a much larger topic we cover in Quantifying the User Experience.
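As a concrete, entirely hypothetical illustration of that workflow, take the double-entry email question from earlier: the null hypothesis is that single and double email entry produce the same registration rate, and the p-value measures the evidence against it. A sketch in Python with statsmodels (the counts are made up):

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical A/B data: completed registrations out of 1,000 visitors
# shown each form variant (single vs. double email entry).
registered = [412, 380]
visitors = [1000, 1000]

z_stat, p_value = proportions_ztest(registered, visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Reject the null: the form design appears to affect registration.")
else:
    print("No evidence against the null at the 0.05 level.")
```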

While the process of subjecting data to statistical analysis intimidates many designers and researchers (recalling those school memories again), remember that the hardest and most important part is working with a good testable hypothesis. It takes practice to convert fuzzy business questions into testable hypotheses. Once you’ve got that down, the rest is mechanics that we can help with.

5 rules for creating a good research hypothesis

by UserTesting

A hypothesis is a proposed explanation made on the basis of limited evidence. It is the starting point for further investigation of something that piques your curiosity.

A good hypothesis is critical to creating a measurable study with successful outcomes. Without one, you’re stumbling through the fog and merely guessing which direction to travel in. It’s an especially critical step in A/B and multivariate testing.

Every user research study needs clear goals and objectives, and a hypothesis is essential for this to happen. Writing a good hypothesis looks like this:

1: Problem : Think about the problem you’re trying to solve and what you know about it.

2: Question : Consider which questions you want to answer. 

3: Hypothesis : Write your research hypothesis.

4: Goal : State one or two SMART goals for your project (specific, measurable, achievable, relevant, time-bound).

5: Objective : Draft a measurable objective that aligns directly with each goal.

In this article, we will focus on writing your hypothesis.

Five rules for a good hypothesis

1: A hypothesis is your best guess about what will happen. A good hypothesis says, "this change will result in this outcome." The "change" means a variation of an element, for example manipulating the label, color, text, etc. The "outcome" is the measure of success, or the metric—such as click-through rate, conversion, etc.

2: Your hypothesis may be wrong—just learn from it. The initial hypothesis might be quite bold, such as “Variation B will result in 40% conversion over variation A”. If the conversion uptick is only 35% then your hypothesis is false. But you can still learn from it. 

3: It must be specific. Explicitly stated values are important. Be bold, but not unrealistic. You must believe that what you suggest is indeed possible. When possible, be specific and assign numeric values to your predictions.

4: It must be measurable. The hypothesis must lead to concrete success metrics for the key measure. If you choose to evaluate click through, then measure clicks. If looking for conversion, then measure conversion, even if on a subsequent page. If measuring both, state in the study design which is more important, click through or conversion.

5: It should be repeatable. With a good hypothesis you should be able to run multiple different experiments that test different variants. And when retesting these variants, you should get the same results. If you find that your results are inconsistent, then re-evaluate prior versions and try a different direction.

How to structure your hypothesis

Any good hypothesis has two key parts: the variant and the result.

First, state which variant will be affected. Only state one (A, B, or C), or the recipe if multivariate (A & B). Be sure that you’ve recorded each version of variant testing in your documentation for clarity. Also, be sure to include detailed descriptions of flows or processes for the purpose of re-testing.

Next, state the expected outcome. “Variant B will result in a 40% higher rate of course completion.” After the hypothesis, be sure to specifically document the metric that will measure the result - in this case, completion. Leave no ambiguity in your metric.

Remember, always use a "control" when testing. The control is a factor that will not change during testing. It will be used as a benchmark to compare the results of the variants. The control is generally the current design in use. 

A good hypothesis begins with data. Whether the data is from web analytics, user research, competitive analyses, or your gut, a hypothesis should start with data you want to better understand.

It should make sense, be easy to read without ambiguity, and be based on reality rather than pie-in-the-sky thinking or simply shooting for a company KPI or objectives and key results (OKR). 

The data that results from a hypothesis is incremental and yields small insights to be built over time. 

Hypothesis example

Imagine you run an ecommerce website and are trying to better understand your customers’ journey. Based on data and insights gathered, you noticed that many website visitors are struggling to locate the checkout button at the end of their journey. You find that 30% of visitors abandon the site with items still in the cart.

You are trying to understand whether changing the checkout icon on your site will increase checkout completion. 

The shopping bag icon is variant A, the shopping cart icon is variant B, and the checkmark is the control (the current icon you are using on your website). 

Hypothesis: The shopping cart icon (variant B) will increase checkout completion by 15%. 

After exposing users to three different versions of the site, each with a different checkout icon, the data shows...

  • 55% of visitors shown the checkmark (control) completed their checkout.
  • 70% of visitors shown the shopping bag icon (variant A) completed their checkout.
  • 73% of visitors shown the shopping cart icon (variant B) completed their checkout.

The results show evidence that a change in the icon led to an increase in checkout completion. Now we can take these insights further with statistical testing to see if these differences are statistically significant. Variant B beat the control by 18 percentage points, but is that difference significant enough to completely abandon the checkmark? Variants A and B both showed an increase, but which is better between the two? This is the beginning of optimizing our site for a seamless customer journey.
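As a sketch of what that statistical testing might look like in Python: the article reports only percentages, so assume, hypothetically, that 400 visitors saw each icon.

```python
from statsmodels.stats.proportion import proportions_ztest

n = 400  # hypothetical visitors per icon; not given in the article
completions = {
    "checkmark (control)": int(0.55 * n),       # 220 completions
    "shopping bag (variant A)": int(0.70 * n),  # 280 completions
    "shopping cart (variant B)": int(0.73 * n), # 292 completions
}

# Two-proportion z-test of each variant against the control.
for name in ("shopping bag (variant A)", "shopping cart (variant B)"):
    z, p = proportions_ztest(
        [completions[name], completions["checkmark (control)"]], [n, n]
    )
    print(f"{name} vs control: z = {z:.2f}, p = {p:.4f}")
```

With samples of this (assumed) size, both differences come out clearly significant; with much smaller samples they might not, which is exactly why the test matters before abandoning the control.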

Quick tips for creating a good hypothesis

  • Keep it short—just one clear sentence
  • State the variant you believe will “win”  (include screenshots in your doc background)
  • State the metric that will define your winner  (a download, purchase, sign-up … )
  • Avoid adding  attitudinal  metrics with words like  “because”  or  “since”  
  • Always use a control to measure against your variant


The Complete Guide To UX Research (User Research)


UX Research is a term that has been trending in the past few years. It's no surprise why it's so popular: User Experience Research is all about understanding your customers and their needs, which can help you greatly improve the conversion rate and user experience on your website. In this article, we provide a complete high-level overview of UX research and how to start implementing it in your organisation, supported by more in-depth articles for each topic.

Introduction to UX Research

Whether you're a grizzled UX researcher who's been in the field for decades or a UX novice who's just getting started, UX research is an integral aspect of the UX design process. Before diving into this article on UX research methods and tools, let's first take some time to break down what UX research actually entails.

Each UX research method has its own strengths and weaknesses, so it's important to understand your goals for the UX research activities you want to complete.

What is UX Research?

UX research begins with UX designers and UX researchers studying the real-world needs of users. User Experience Research is a process, not just one thing: it involves collecting data, conducting interviews, and usability testing prototypes or website designs with human participants, in order to deeply understand what people are looking for when they interact with a product or service.

By using different sorts of user research techniques you can better understand not only what people desire from their product or service, but also a deeper human need, which can serve as an incredibly powerful opportunity.

There's an incredible number of different research methods. Most of them can be divided into two camps: qualitative and quantitative research.

Qualitative research seeks to understand needs through observation, in-depth interviews, and ethnographic studies. Quantitative research focuses more on the numbers, analysing data and collecting measurable statistics.

Within these two groups there's an incredible range of research activities, such as Card Sorting, Competitive Analysis, User Interviews, Usability Tests, Personas & Customer Journeys, and many more. We've created The Curated List of Research Techniques to give you an up-to-date overview.

Why is UX Research so important?

When I started my career as a digital designer over 15 years ago, I felt like I was always hired to design the client's idea: simply translate what they had in their head into a UI without even thinking about the user experience. Needless to say, this is a recipe for disaster. And no, this isn't a "clients don't know anything" story. Nobody knows! At least in the beginning. The client had "the perfect idea" for a new digital feature. The launch date was already set and the development process had to start as soon as possible.

When the feature launched, we expected support might get a few questions or even receive a few thank-you emails. We surely must've affected the user experience somehow!

But that didn't happen. Nothing happened. The feature wasn't used.

Because nobody needed it.

This is exactly what happens when you skip user experience research because you think you're solving a problem that "everybody" has, but nobody really does.

Conducting User Experience research can help you better understand your stakeholders and what they need. This is incredibly valuable information from which you can create personas and customer journeys. It doesn't matter if you're creating a new product or service or improving an existing one.

Five Steps for conducting User Research

Created by Erin Sanders, the Research Learning Spiral provides five main steps for your user research.

  • Objectives: What are the knowledge gaps we need to fill?
  • Hypotheses: What do we think we understand about our users?
  • Methods: Based on time and manpower, what methods should we select?
  • Conduct: Gather data through the selected methods.
  • Synthesize: Fill in the knowledge gaps, prove or disprove our hypotheses, and discover opportunities for our design efforts.

1: Objectives: Define the Problem Statement

A problem statement is a concise description of an issue to be addressed or a condition to be improved upon. It identifies the gap between the current (problem) state and desired (goal) state of a process or product.

Problem statements are the first step in your research because they help you understand what's wrong or what needs improving. For example, if your product is a mobile app and the problem statement says that customers are having difficulty paying for items within the application, then UX research will (hopefully) lead you down that path; most likely it will involve some form of usability testing.

Check out this article if you'd like to learn more about Problem Statements.

2: Hypotheses: What we think we know about our user groups

After getting your problem statement right, there's one more thing to do before starting any research: make sure you have a clear research goal. How do you identify research objectives? By asking questions:

  • Who are we doing this for? This is the starting point for your personas!
  • What are we doing? What's happening right now? What do our users want? What does the company need?
  • Think about When. If you're creating a project plan, you'll need a timeline. It also helps to keep in mind when people use your product or service.
  • Where is the logical next step. Where do people use your product? Why there? What limitations does that location bring? Where can you perform research? Where do your users live?
  • Why are we doing this? Why should or shouldn't we be doing this? "Why" teaches you all about people's motivations and the motivation for the project.
  • Last but not least: How? Besides thinking about the research activities themselves, think about how people will test a product or feature, and how the user insights (the outcome of the research) will be used in the user-centered design and development process.

3: Methods: Choose the right research method

UX research is about exploration, and you want to make sure that your method fits the needs of what you're trying to explore. There are many different methods; in a later chapter we'll go over the most common UX research methods.

For now, all you need to keep in mind is that there are a lot of different ways of doing research.

You definitely don't need to do every type of activity, but it is useful to have a decent understanding of the options available, so you can pick the right tools for the job.

4: Conduct: Putting in the work

Apply your chosen user research methods to your hypotheses and objectives! The variety of techniques used by the senior product designer in the BTNG Design Process can definitely be overwhelming. The product development process is not a straight line from A to B: UX researchers often uncover new (or incorrect) user needs, and with them new qualitative insights into the user experience. So please understand that UX design is a lot more than simply creating a design.

5: Synthesise: Evaluating the Research Outcome

So you started with your problem statement (objectives), drafted your hypotheses, chose the best-fitting research methods, conducted your research as planned, and now "YOU ARE HERE".

The last step is to Synthesise what you've learned. Start by filling in the knowledge gaps. What unknowns are you now able to answer?

Which of your hypotheses are proven (or disproven)?

And lastly, which exciting new opportunities did you discover?

Evaluating the outcome of the User Experience Research is an essential part of the work.

Make sure to keep your research summaries brief and to the point. A good rule of thumb is to include the top three positive findings and the top three problems.

UX Research Methods

Choosing the right UX research method

Making sure you use the right types of user experience research in any project is essential. Since time and money are always limited, we need to get the most bang for our buck. This means picking the UX research method that will give us the most insight for the project.

Three things to keep in mind when making a choice among research methodologies:

  • Stages of the product life cycle - Is it a new or existing product?
  • Quantitative vs. qualitative - Hard data, or in-depth conversations directly with people?
  • Attitudinal vs. behavioural - What people say vs. what people do


Image from Nielsen Norman Group

Most frequently used methods of UX Research

  • Card Sorting: Long before UX research was even a "thing", psychological researchers were using card sorting. With card sorting, you try to find out how people group things and what sort of hierarchies they use (see the sketch after this list). The BTNG Research Team is specialised in remote research, so our modern card-sorting studies come with a few surprises.
  • Usability Testing: Before launching a new feature or product, it is important to test it with users. Give participants tasks to complete to see how well the prototype works and to learn more about user behaviour.
  • Remote Usability Testing: During the COVID-19 lockdown, finding appropriate UX research methods hasn't always been easy. Luckily, we've adopted plenty of modern solutions that help us collect customer feedback even in a remote usability test.
  • Research-Based User Personas: A profile of a fictional character representing a specific stakeholder relevant to your product or service, combining goals and objections with attitude and personality. The BTNG Research Team creates these personas for the target users after conducting both quantitative and qualitative user research.
  • Field Studies: Yes, we actually like to go outside. What if your product isn't a B2B desktop application used behind a computer during office hours? At BTNG we run different types of field studies, all of which help you gain valuable insights into human behaviour and the user experience.
  • The Expert Interview: Combine your talent with that of one of BTNG's senior researchers. Conducting UX research without talking to the experts on your own team would be a waste of time: in every organisation there are people who know a lot about their product or service and have unique insights. We always like to include them in the UX research!
  • Eye Movement Tracking: If you have an existing digital experience up and running, eye movement tracking can help you identify user experience challenges in your funnel. The outcome is a heatmap of where users look (and where they don't).
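To make the card-sorting analysis concrete, here is a minimal sketch (in Python) of the kind of summary a card sort typically feeds: counting how often participants place the same two cards in one group. The cards, group labels and session data are all hypothetical.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical card-sort sessions: each participant's mapping of
# their own group labels to the cards they placed in that group.
sessions = [
    {"Account": ["Login", "Profile"], "Shopping": ["Cart", "Checkout"]},
    {"My stuff": ["Login", "Profile", "Cart"], "Buying": ["Checkout"]},
    {"Access": ["Login"], "Store": ["Profile", "Cart", "Checkout"]},
]

# Count how often each pair of cards ends up in the same group.
co_occurrence = defaultdict(int)
for session in sessions:
    for cards in session.values():
        for a, b in combinations(sorted(cards), 2):
            co_occurrence[(a, b)] += 1

# Pairs grouped together most often are candidates for one navigation category.
for (a, b), count in sorted(co_occurrence.items(), key=lambda kv: -kv[1]):
    print(f"{a} + {b}: grouped together in {count}/{len(sessions)} sessions")
```

Pairs that most participants group together are strong candidates for living under the same navigation category.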

Check out this article for an in-depth guide on UX research methods.

Qualitative vs. Quantitative UX research methods

Since this is a topic we could go on about for hours, we've split this section into a few parts. First, let's start with the difference.

Qualitative UX research is based on an in-depth understanding of human behaviour and needs. Qualitative user research includes interviews, observations (in natural settings), usability tests and contextual inquiry. More often than not, you'll obtain unexpected, valuable insights through this form of user experience research.

Quantitative UX research relies on statistical analysis to make sense of quantitative data gathered from UX measurements: A/B tests, surveys and the like. As you might have guessed, quantitative UX research is a lot more data-orientated.

If you'd like to learn more about these two types of research, check out these articles:

Get the most out of your User Research with Qualitative Research

Quantitative Research: The Science of Mining Data for Insights

Balancing qualitative and quantitative UX research

Both types of research have great benefits but also challenges. Depending on the research goal, it is wise to have a good understanding of which types of research you want to include in the UX design process and which would make the most impact.

The BTNG Research Team loves to start with qualitative research to first get a better understanding of the WHY and gain new insights, then validates those new learnings with quantitative research.

A handful of helpful UX Research Tools

The landscape of UX research tools has been growing rapidly. The BTNG Research Team uses a variety of UX research tools to help with, well, almost everything: from running usability tests and creating prototypes to recruiting participants.

In the not-too-distant future, we'll create a Curated UX Research Tool article. For now, a handful of helpful UX Research Tools should do the trick.

  • For surveys: Typeform
  • For UX research recruitment: Dscout
  • For analytics and heatmaps: VWO
  • For documenting research: Notion & Airtable
  • For customer journey management: TheyDo
  • For transcriptions: Descript
  • For remote user testing: Maze
  • For calls: Zoom

Surveys: Typeform

What does it do? Survey forms can be boring. Typeform is one of those UX research tools that helps you create beautiful surveys with customisable templates and an online editor. For example, you can add videos to your survey or even let people draw their answers instead of typing them into a text box. Who is this for? Startup teams that want to quickly create engaging, modern-looking surveys without having to code them themselves.

Highlights: Amazing UX, a very modern look and feel, easy creation of forms that match your branding, great reports and automation.

Why is it our top pick? Stop wasting time on UX research tools with too many buttons. Always keep the goal of your UX research methods in mind: keep things lean, fast and simple with a product that itself has amazing UX.

https://www.typeform.com/

UX Research Recruitment: Dscout

What does it do? Dscout is a remote research platform that helps you recruit the right participants for your UX research. With a pool of 100,000+ real users, our user researchers can hop on video calls and collect data for your qualitative user research. So test that mobile app's user experience and collect all the data! Isn't remote research amazing?

Highlights: User research participant recruitment, live sessions, prototype feedback, competitive analysis, in-the-wild product discovery, field work supplementation, shopalongs.

Why is it our top pick? Finding the right people is more important than finding people fast. BTNG helps corporate clients in all types of industries, and each engagement requires a unique set of users. Dscout helps us quickly find the right people, so our user research is delivered on time and our research process stays intact.

https://dscout.com/

Analytics and heatmaps: VWO

What does it do? When we were helping the Financial Times, our BTNG Research Team collaborated with the FT marketing team, who were already running experiments with VWO: 50% of the traffic would see one version of a page while the other 50% saw a different version. Which performed best? You might look at time-on-page, but more importantly: which converts better?
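To make "which converts better" concrete, here is a minimal sketch of how a 50/50 split test can be read out, using a standard two-proportion z-test. The visitor and conversion counts are invented for illustration, not actual FT numbers.

```python
from statistics import NormalDist

# Hypothetical 50/50 split-test results.
visitors_a, conversions_a = 5000, 410  # variant A
visitors_b, conversions_b = 5000, 465  # variant B

rate_a = conversions_a / visitors_a
rate_b = conversions_b / visitors_b

# Two-proportion z-test: is the difference larger than chance alone explains?
pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
se = (pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
z = (rate_b - rate_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  z = {z:.2f}  p = {p_value:.3f}")
```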

(For comparison: Hotjar provides Product Experience Insights that show how users behave and what they feel strongly about, so product teams can deliver real value to them.)

Highlights: VWO is an amazing suite that does it all: automated feedback, heatmaps, eye tracking, user session recordings (participant tracking), and one thing that Hotjar doesn't do: A/B testing.

Why is it our top pick? Even though it's an expensive product, it does give you value for money. The reports, with their very black-and-white outcomes, are especially great for presenting the results you've achieved.

https://vwo.com/

Documenting research: Notion

What does it do? Notion is our command center, where we store and constantly update our studio's aggregate wisdom. It is a super-flexible tool that helps to organise project documentation, prepare for interviews with either clients or their product users, accumulate feedback, or simply take notes.

Highlights: A very clean, structured way to write and share information with your team in a beautifully designed app with an amazing user experience.

Why is it our top pick? There's no better, more structured way to share information.

https://www.notion.so/

Customer Journey Management: TheyDo

What does it do? TheyDo is a modern journey management platform. It centralises your journeys in an easy-to-manage system where everyone has access to a single source of truth about the customer experience. It's like a CMS for journeys.

Highlights: Customer Journey Map designer, Personas and 2x2 Persona Matrix, Opportunity & Solution Management & Prioritisation.

Why is it our top pick? TheyDo fits perfectly with BTNG's way of helping companies become more customer-centric. It helps visualise the current experience of stakeholders, and with the insights we capture from interviews or usability testing, we discover new opportunities: a perfect starting point for creating solutions!

https://www.theydo.io/

Transcriptions: Descript

What does it do? Descript is an all-in-one solution for audio and video recording, editing and transcription. The editing is as easy as editing a doc. Imagine you've interviewed 20 different people about a new flavor of soda or a feature for your app. You just drop all those files into a Descript project, and they show up as different "Compositions" (documents) in the sidebar. In a couple of minutes they'll be transcribed, with speaker labels added automatically.

Highlights: Overdub, Filler Word Removal, Collaboration, Subtitles, Remote Recording and Studio Sound.

Why is it our top pick? Descript is an absolute monster when it comes to recording, editing and transcribing videos. It truly makes digesting the work after recording fast and even fun!

https://www.descript.com/

Remote user testing: Maze

What does it do? Maze is an a-mazing remote user testing platform for unmoderated usability tests. With Maze, you can create and run in-depth usability tests and share them with your testers via a link to get actionable insights. Maze also generates a usability study report instantly, so you can share it with anyone.

It's handy that the tool integrates directly with Figma, InVision, Marvel and Sketch, so you can import a working prototype directly from the design tool you use. Thanks to that Figma/Maze integration, the Figma-savvy BTNG Design Team has amazing chemistry with the Research Team.

Highlights: Besides unmoderated usability testing, Maze can help with different UX Research Methods, like card sorting, tree testing, 5-second testing, A/B testing, and more.

Why is it our top pick? Usability testing has always been a time-consuming form of qualitative research. Trying to observe how users interact (task analysis) during an interview while keeping an eye on the prototype can be... a challenge. That Maze lets us run unmoderated usability tests alongside our hands-on moderated ones is a powerful weapon in our arsenal.

https://maze.co/

Calls: Zoom

What does it do? Like other video conferencing tools, Zoom lets you run video calls. But what makes it a great tool? We feel the integration with conferencing equipment is huge for our bigger clients. And now that there's also a Miro integration, we can make our user interviews even more fun and interactive!

Highlights: Call recording, collaboration tools, screen sharing, free trial, connects to conferencing equipment, hosts up to 500 people!

Why is it our top pick? Giving the research participants in your user interviews a pleasant experience is so important: especially when you're looking for qualitative feedback on your UX design, you want to make sure they feel comfortable. And yes, you'll have to use a paid version, but the user interface of Zoom alone is worth it. Even the mobile app is really solid.

https://zoom.us/

In Conclusion

No matter which research methodology you rely on, qualitative research methods or quantitative data, keep in mind that user research is an essential part of the design process. Not only will your UX designer thank you; so will your users.

In every UX project, we've spoken to multiple users. Whether it was task analysis, attitudinal research or focus groups, they all had one thing in common:

People thanked us for taking the time to listen to them.

So please, stop thinking only about the potential UX research methods you might use in your design process and consider what it is REALLY about:

Solving the right problems for the right people.

And there's only one way to get there: Trying things out, listening, learning and improving.

Looking for help? Reach out!

See the Nielsen Norman Group’s list of user research tips: https://www.nngroup.com/articles/ux-research-cheat-sheet/

Find an extensive range of user research considerations, discussed in Smashing Magazine: https://www.smashingmagazine.com/2018/01/comprehensive-guide-ux-research/

Here’s a convenient and example-rich catalogue of user research tools: https://blog.airtable.com/43-ux-research-tools-for-optimizing-your-product/


How to write effective UX research questions (with examples)

Collecting and analyzing real user feedback is essential in delivering an excellent user experience (UX). But not all user research is created equal—and done wrong, it can lead to confusion, miscommunication, and non-actionable results.


You need to ask the right UX research questions to get the valuable insights necessary to continually optimize your product and generate user delight. 

This article shows you how to write strong UX research questions, ensuring you go beyond guesswork and assumptions. It covers the difference between open-ended and closed-ended research questions, explains how to create your own UX research questions, and provides several examples to get you started.


The different types of UX research questions

Let’s face it, asking the right UX research questions is hard. It’s a skill that takes a lot of practice and can leave even the most seasoned UX researchers drawing a blank.

There are two main categories of UX research questions: open-ended and closed-ended, both of which are essential to thorough, high-quality UX research. Qualitative research, based on descriptions and experiences, leans toward open-ended questions, whereas quantitative research leans toward closed-ended questions.

Let’s dive into the differences between them.

Open-ended UX research questions

Open-ended UX research questions are exactly what they sound like: they prompt longer, more free-form responses, rather than asking someone to choose from established possible answers—like multiple-choice tests.

Open questions are easily recognized because they:

Usually begin with how, why, what, describe, or tell me

Can’t be easily answered with just yes or no, or a word or two

Are qualitative rather than quantitative

If there’s a simple fact you’re trying to get to, a closed question would work. For anything involving our complex and messy human nature, open questions are the way to go.

Open-ended research questions aim to discover more about research participants and gather candid user insights, rather than seeking specific answers.

Some examples of UX research that use open-ended questions include:

Usability testing

Diary studies

Persona research

Use case research

Task analysis

Check out a concrete example of an open-ended UX research question in action below. Hotjar’s Survey tool is a perfect way of gathering longer-form user feedback, both on-site and externally.

Asking on-site open-ended questions with Hotjar Surveys is a great way to gather honest user feedback

Pros and cons of open-ended UX research questions

Like everything in life, open-ended UX research questions have their pros and cons.

Advantages of open-ended questions include:

Detailed, personal answers

Great for storytelling

Good for connecting with people on an emotional level

Helpful to gauge pain points, frustrations, and desires

Researchers usually end up discovering more than initially expected

Less vulnerable to bias

 Drawbacks include:

People find them more difficult to answer than closed-ended questions

More time-consuming for both the researcher and the participant

Can be difficult to conduct with large numbers of people

Responses can be challenging to dig through and analyze

Closed-ended UX research questions

Closed-ended UX research questions have limited possible answers. Participants can respond to them with yes or no, by selecting an option from a list, by ranking or rating, or with a single word.

They’re easy to recognize because they’re similar to classic exam-style questions.

More technical teams might start with closed UX research questions because they want statistical results, then move on to more open questions to see how customers really feel about the software they put together.

While open-ended research questions reveal new or unexpected information, closed-ended research questions work well to test assumptions and answer focused questions. They’re great for situations like:

Surveying a large number of participants

When you want quantitative insights and hard data to create metrics

When you've already asked open-ended UX research questions and have narrowed them down into closed-ended questions based on your findings

If you’re evaluating something specific so the possible answers are limited

If you’re going to repeat the same study in the future and need uniform questions and answers

Wondering what a closed-ended UX research question might look like in real life? The example below shows how Hotjar's Feedback widgets help UX researchers hear from users 'in the wild' as they navigate.

Closed-ended UX research questions provide valuable insights and are simple for users to address

The different types of closed-ended questions

There are several different ways to ask closed-ended UX research questions, including:

Customer satisfaction (CSAT) surveys

CSAT surveys are closed-ended UX research questions that explore customer satisfaction levels by asking users to rate their experience on some kind of scale, like the happy and angry icons in the image below.
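As a quick illustration, here is a minimal sketch of one common CSAT convention: on a 1-5 scale, count the share of respondents who answered 4 or 5 as "satisfied". The ratings below are hypothetical.

```python
# Hypothetical 1-5 satisfaction ratings from a CSAT survey.
ratings = [5, 4, 2, 5, 3, 4, 5, 1, 4, 5]

# One common convention: respondents answering 4 or 5 count as satisfied.
satisfied = sum(1 for r in ratings if r >= 4)
csat = satisfied / len(ratings) * 100

print(f"CSAT: {csat:.0f}% ({satisfied} of {len(ratings)} respondents satisfied)")
```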

On-site widgets like Hotjar's Feedback tool below excel at gathering quick customer insights without wreaking havoc on the user experience. They’re especially popular on ecommerce sites or after customer service interactions.

Feedback tools can be fun, too. Keep your product lighthearted and collect quick user feedback with a widget like this one

Net Promoter Score (NPS) surveys

NPS surveys are another powerful type of (mostly) closed-ended UX research question. They ask customers how likely they are to recommend a company, product, or service to their community. Responses to NPS surveys are used to calculate the Net Promoter Score.

NPS surveys split customers into three categories:

Promoters (9-10): Your most enthusiastic, vocal, and loyal customers

Passives (7-8): Ho-hum. They’re more or less satisfied customers but could be susceptible to jumping ship

Detractors (0-6): Dissatisfied customers who are at a high risk of spreading bad reviews

Net Promoter Score is a key metric used to predict business growth, track long-term success, and gauge overall customer satisfaction.
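Since NPS follows a fixed formula, the percentage of promoters minus the percentage of detractors, it is straightforward to compute from raw 0-10 answers. A minimal sketch with hypothetical scores:

```python
# Hypothetical 0-10 answers to "How likely are you to recommend us?"
scores = [10, 9, 7, 6, 9, 10, 8, 3, 9, 10]

promoters = sum(1 for s in scores if s >= 9)   # scores of 9-10
detractors = sum(1 for s in scores if s <= 6)  # scores of 0-6

# NPS = % promoters - % detractors, a value between -100 and +100.
nps = (promoters - detractors) / len(scores) * 100
print(f"NPS: {nps:+.0f} (promoters: {promoters}, detractors: {detractors})")
```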

Asking your customers, 'How likely are you to recommend us to a friend or colleague?' helps calculate Net Promoter Score and gauges user satisfaction

Pro tip: while the most important question to ask in an NPS survey is readiness to recommend, it shouldn’t be the only one. Asking follow-up questions can provide more context and a deeper understanding of the customer experience. Combining Hotjar Feedback widgets with standalone Surveys is a great strategy for tracking NPS through both quick rankings and qualitative feedback.

Pros and cons of closed-ended research questions

Closed-ended UX research questions have solid advantages, including:

More measurable data to convert into statistics and metrics

Higher response rates because they’re generally more straightforward for people to answer

Easier to coordinate when surveying a large number of people

Great for evaluating specifics and facts

Little to no irrelevant answers to comb through

Putting the UX researcher in control

But closed-ended questions can be tricky to get right. Their disadvantages include:

Leading participants to response bias

Preventing participants from telling the whole story

The lack of insight into opinions or emotions

Too many possible answers overwhelming participants

Too few possible answers, meaning the 'right' answer for each participant might not be included

How to form your own UX research questions

To create effective UX questions, start by defining your research objectives and hypotheses, which are assumptions you’ll put to the test with user feedback.

Use this tried-and-tested formula to create research hypotheses by filling in the blanks according to your unique user and business goals:

We believe (doing x)

For (x people)

Will achieve (x outcome)

For example: 'We believe adding a progress indicator into our checkout process (for customers) will achieve 20% lower cart abandonment rates.'
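If your team runs many experiments, it can help to keep hypotheses in this uniform shape. Here is a minimal sketch of one way to do that in code; the Hypothesis class is our own illustration, not part of any tool mentioned in this guide.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    action: str    # "doing x"
    audience: str  # "x people"
    outcome: str   # "x outcome"

    def statement(self) -> str:
        # Fill the "We believe (doing x) for (x people) will achieve
        # (x outcome)" template from the structured fields.
        return (f"We believe {self.action} for {self.audience} "
                f"will achieve {self.outcome}.")

h = Hypothesis(
    action="adding a progress indicator to our checkout process",
    audience="customers",
    outcome="20% lower cart abandonment rates",
)
print(h.statement())
```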

Pro tip: research hypotheses aren’t set in stone. Keep them dynamic as you formulate, change, and re-evaluate them throughout the UX research process, until your team comes away with increased certainty about their initial assumption.

When nailing down your hypotheses, remember that research is just as much about discovering new questions as it is about getting answers. Don’t think of research as a validation exercise where you’re looking to confirm something you already know. Instead, cultivate an attitude of exploration and strive to dig deeper into user emotions, needs, and challenges.

Once you have a working hypothesis, identify your UX research objective. Your objective should be linked to your hypothesis, defining what your product team wants to accomplish with your research—for example, 'We want to improve our cart abandonment rates by providing customers with a seamless checkout experience.'

Now that you’ve formulated a hypothesis and research objective, you can create your general or 'big picture' research questions . These define precisely what you want to discover through your research, but they’re not the exact questions you’ll ask participants. This is an important distinction because big picture research questions focus on the researchers themselves rather than users.

A big picture question might be something like: 'How can we improve our cart abandonment rates?'

With a strong hypothesis, objective, and general research question in the bag, you’re finally ready to create the questions you’ll ask participants.

32 examples of inspiring UX research questions

There are countless different categories of UX research questions.

We focus on open-ended, ecommerce-oriented questions here, but with a few tweaks these could easily be transformed into closed-ended questions.

For example, an open-ended question like 'Tell us about your overall experience shopping on our website' could be turned into a closed-ended question such as 'Did you have a positive experience finding everything you needed on our website?'

Screening questions

Screening questions are the first questions you ask UX research participants. They help you get to know your customers and work out whether they fit into your ideal user personas.

These survey question examples focus on demographic and experience-based questions. For instance:

Tell me about yourself. Who are you and what do you do?

What does a typical day look like for you?

How old are you?

What’s the highest level of education that you’ve completed?

How comfortable do you feel using the internet?

How comfortable do you feel browsing or buying products online?

How frequently do you buy products online?

Do you prefer shopping in person or online? Why?

Awareness questions

Awareness questions explore how long your participants have been aware of your brand and how much they know about it. Some good options include:

How did you find out about our brand?

What prompted you to visit our website for the first time?

If you’ve visited our website multiple times, what made you come back?

How long was the gap between finding out about us and your first purchase?

Expectation questions

Expectation questions investigate the assumptions UX research participants have about brands, products, or services before using them. For example:

What was your first impression of our brand?

What was your first impression of X product or service?

How do you think using X product or service would benefit you?

What problem would X product or service solve for you?

Do you think X product or service is similar to another one on the market? Please specify.

Task-specific questions

Task-specific questions focus on user experiences as they complete actions on your site. Some examples include:

Tell us what you thought about the overall website design and content layout

How was your browsing experience?

How was your checkout experience?

What was the easiest task to complete on our website?

What was the hardest task to complete on our website?

Experience questions

Experience questions dig deeper into research participants’ holistic journeys as they navigate your site. These include:

Tell us how you felt when you landed on our website homepage

How can we improve the X page of our website?

What motivated you to purchase X product or service?

What stopped you from purchasing X product or service?

Was your overall experience positive or negative while shopping on our website? Why?

Concluding questions

Concluding questions ask participants to reflect on their overall experience with your brand, product, or service. For instance:

What are your biggest questions about X product or service?

What are your biggest concerns about X product or service?

If you could change one thing about X product or service, what would it be?

Would you recommend X product or service to a friend?

How would you compare X product or service to X competitor?

Excellent research questions are key for an optimal UX

To create a fantastic UX, you need to understand your users on a deeper level.

Crafting strong questions to deploy during the research process is an important way to gain that understanding, because UX research shouldn’t center on what you want to learn but what your users can teach you.

UX research question FAQs

What are UX research questions?

UX research questions can refer to two different things: general UX research questions and UX interview questions. 

Both are vital components of UX research and work together to accomplish the same goals—understanding user needs and pain points, challenging assumptions, discovering new insights, and finding solutions.

General UX research questions focus on what UX researchers want to discover through their study. 

UX interview questions are the exact questions researchers ask participants during their research study.

What are examples of UX research questions?

UX research question examples can be split into several categories. Some of the most popular include:

Screening questions: help get to know research participants better and focus on demographic and experience-based information. For example: “What does a typical day look like for you?”

Awareness questions: explore how much research participants know about your brand, product, or service. For example: “What prompted you to visit our website for the first time?”

Expectation questions: investigate assumptions research participants have about your brand, product, or service. For example: “What was your first impression of X?”

Task-specific questions: dive into participants’ experiences trying to complete actions on your site. For example: “What was the easiest task to complete on our website?”

Experience questions: dig deep into participants’ overall holistic experiences navigating through your site. For example: “Was your overall experience shopping on our website positive or negative? Why?”

Concluding questions: ask participants to reflect on their overall experience with your brand, product, or service. For example: “What are your biggest concerns about (x product or service)?”

What’s the difference between open-ended and closed-ended UX research questions?

The difference between open- and closed-ended UX research questions is simple. Open-ended UX research questions prompt long, free-form responses. They’re qualitative rather than quantitative and can’t be answered easily with yes or no, or a word or two. They’re easy to recognize because they begin with terms like how, why, what, describe, and tell me.

On the other hand, closed-ended UX research questions have limited possible answers. Participants can respond to them with yes or no, by selecting an option from a list, by rating or ranking options, or with just a word or two.


UX Research Cheat Sheet

February 12, 2017

User-experience research methods are great at producing data and insights, while ongoing activities help get the right things done. Alongside R&D, ongoing UX activities can make everyone’s efforts more effective and valuable. At every stage in the design process, different UX methods can keep product-development efforts on the right track, in agreement with true user needs and not imaginary ones.

When to Conduct User Research

One of the questions we get the most is, “When should I do user research on my project?” There are three different answers:

  • Do user research at whatever stage you’re in right now . The earlier the research, the more impact the findings will have on your product, and by definition, the earliest you can do something on your current project (absent a time machine) is today.
  • Do user research at all the stages . As we show below, there’s something useful to learn in every single stage of any reasonable project plan, and each research step will increase the value of your product by more than the cost of the research.
  • Do most user research early in the project (when it’ll have the most impact), but conserve some budget for a smaller amount of supplementary research later in the project. This advice applies in the common case that you can’t get budget for all the research steps that would be useful.

The chart below describes UX methods and activities available in various project stages.

A design cycle often has phases corresponding to discovery, exploration, validation, and listening, which entail design research, user research, and data-gathering activities. UX researchers use both methods and ongoing activities to enhance usability and user experience, as discussed in detail below.

Each project is different, so the stages are not always neatly compartmentalized. The end of one cycle is the beginning of the next.

The important thing is not to execute a giant list of activities in rigid order, but to start somewhere and learn more and more as you go along.

• Field study
• Diary study
• User interview
• Stakeholder interview
• Requirements & constraints gathering
• Competitive analysis
• Design review
• Persona building
• Task analysis
• Journey mapping
• Prototype feedback & testing (clickable or paper prototypes)
• Write user stories
• Card sorting
• Qualitative usability testing (in-person or remote)
• Benchmark testing
• Accessibility evaluation
• Survey
• Analytics review
• Search-log analysis
• Usability-bug review
• Frequently-asked-questions (FAQ) review

When deciding where to start or what to focus on first, use some of these top UX methods. Some methods may be more appropriate than others, depending on time constraints, system maturity, type of product or service, and the current top concerns. It’s a good idea to use different or alternating methods each product cycle because they are aimed at different goals and types of insight. The chart below shows how often UX practitioners reported engaging in these methods in our survey on UX careers.

Chart: the top UX research activities that practitioners said they use at least every year or two, from most frequent to least: task analysis, requirements gathering, in-person usability study, journey mapping, design review, analytics review, clickable-prototype testing, writing user stories, persona building, surveys, field studies and user interviews, paper-prototype testing, accessibility evaluation, competitive analysis, remote usability study, testing instructions and help, card sorting, search-log analysis, and diary studies.

If you can do only one activity and aim to improve an existing system, do qualitative (think-aloud) usability testing , which is the most effective method to improve usability . If you are unable to test with users, analyze as much user data as you can. Data (obtained, for instance, from call logs, searches, or analytics) is not a great substitute for people, however, because data usually tells you what , but you often need to know why . So use the questions your data brings up to continue to push for usability testing.

The discovery stage is when you try to illuminate what you don’t know and better understand what people need. It’s especially important to do discovery activities before making a new product or feature, so you can find out whether it makes sense to do the project at all .

An important goal at this stage is to validate and discard assumptions, and then bring the data and insights to the team. Ideally this research should be done before effort is wasted on building the wrong things or on building things for the wrong people, but it can also be used to get back on track when you’re working with an existing product or service.

Good things to do during discovery:

  • Conduct field studies and interview users : Go where the users are, watch, ask, and listen. Observe people in context interacting with the system or solving the problems you’re trying to provide solutions for.
  • Run diary studies to understand your users’ information needs and behaviors.
  • Interview stakeholders to gather and understand business requirements and constraints.
  • Interview sales, support, and training staff. What are the most frequent problems and questions they hear from users? What are the worst problems people have? What makes people angry?
  • Listen to sales and support calls. What do people ask about? What do they have problems understanding? How do the sales and support staff explain and help? What is the vocabulary mismatch between users and staff?
  • Do competitive testing . Find the strengths and weaknesses in your competitors’ products. Discover what users like best.

Exploration methods are for understanding the problem space and design scope and addressing user needs appropriately.

  • Compare features against competitors.
  • Do design reviews.
  • Use research to build user personas and write user stories.
  • Analyze user tasks to find ways to save people time and effort.
  • Show stakeholders the user journey and where the risky areas are for losing customers along the way. Decide together what an ideal user journey would look like.
  • Explore design possibilities by imagining many different approaches, brainstorming, and testing the best ideas in order to identify best-of-breed design components to retain.
  • Obtain feedback on early-stage task flows by walking through designs with stakeholders and subject-matter experts. Ask for written reactions and questions (silent brainstorming), to avoid groupthink and to enable people who might not speak up in a group to tell you what concerns them.
  • Iterate designs by testing paper prototypes with target users, and then test interactive prototypes by watching people use them. Don’t gather opinions. Instead, note how well designs work to help people complete tasks and avoid errors. Let people show you where the problem areas are, then redesign and test again.
  • Use card sorting to find out how people group your information, to help inform your navigation and information organization scheme.

Testing and validation methods are for checking designs during development and beyond, to make sure systems work well for the people who use them.

  • Do qualitative usability testing . Test early and often with a diverse range of people, alone and in groups. Conduct an accessibility evaluation to ensure universal access.
  • Ask people to self-report their interactions and any interesting incidents while using the system over time, for example with diary studies .
  • Audit training classes and note the topics, questions people ask, and answers given. Test instructions and help systems.
  • Talk with user groups.
  • Staff social-media accounts and talk with users online. Monitor social media for kudos and complaints.
  • Analyze user-forum posts. User forums are sources for important questions to address and answers that solve problems. Bring that learning back to the design and development team.
  • Do benchmark testing: If you’re planning a major redesign or measuring improvement, test to determine time on task, task completion, and error rates of your current system, so you can gauge progress over time (a minimal sketch of these metrics follows this list).
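As a minimal illustration of those benchmark metrics, here is a sketch that summarises time on task, task completion, and error rate from hypothetical test sessions:

```python
from statistics import mean

# Hypothetical benchmark sessions: (seconds on task, completed?, error count).
sessions = [(142, True, 1), (98, True, 0), (201, False, 3), (115, True, 0)]

times_on_success = [t for t, done, _ in sessions if done]
completion_rate = sum(1 for _, done, _ in sessions if done) / len(sessions)
errors_per_session = mean(e for _, _, e in sessions)

print(f"Avg time on task (successful): {mean(times_on_success):.0f}s")
print(f"Task completion rate:          {completion_rate:.0%}")
print(f"Errors per session:            {errors_per_session:.1f}")
```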

Listen throughout the research and design cycle to help understand existing problems and to look for new issues. Analyze gathered data and monitor incoming information for patterns and trends.

  • Survey customers and prospective users.
  • Monitor analytics and metrics to discover trends and anomalies and to gauge your progress.
  • Analyze search queries: What do people look for and what do they call it? Search logs are often overlooked, but they contain important information.
  • Make it easy to send in comments, bug reports, and questions. Analyze incoming feedback channels periodically for top usability issues and trouble areas. Look for clues about what people can’t find, their misunderstandings, and any unintended effects.
  • Collect frequently asked questions and try to solve the problems they represent.
  • Run booths at conferences that your customers and users attend so that they can volunteer information and talk with you directly.
  • Give talks and demos: capture questions and concerns.

Ongoing and strategic activities can help you get ahead of problems and make systemic improvements.

  • Find allies . It takes a coordinated effort to achieve design improvement. You’ll need collaborators and champions.
  • Talk with experts . Learn from others’ successes and mistakes. Get advice from people with more experience.
  • Follow ethical guidelines . The UXPA Code of Professional Conduct is a good starting point.
  • Involve stakeholders . Don’t just ask for opinions; get people onboard and contributing, even in small ways. Share your findings, invite them to observe and take notes during research sessions.
  • Hunt for data sources . Be a UX detective. Who has the information you need, and how can you gather it?
  • Determine UX metrics. Find ways to measure how well the system is working for its users.
  • Follow Tog's principles of interaction design .
  • Use evidence-based design guidelines , especially when you can’t conduct your own research. Usability heuristics are high-level principles to follow.
  • Design for universal access . Accessibility can’t be tacked onto the end or tested in during QA. Access is becoming a legal imperative, and expert help is available. Accessibility improvements make systems easier for everyone.
  • Give users control . Provide the controls people need. Choice but not infinite choice.
  • Prevent errors . Whenever an error occurs, consider how it might be eliminated through design change. What may appear to be user errors are often system-design faults. Prevent errors by understanding how they occur and design to lessen their impact.
  • Improve error messages . For remaining errors, don’t just report system state. Say what happened from a user standpoint and explain what to do in terms that are easy for users to understand.
  • Provide helpful defaults . Be prescriptive with the default settings, because many people expect you to make the hard choices for them. Allow users to change the ones they might need or want to change.
  • Check for inconsistencies . Work-alike is important for learnability. People tend to interpret differences as meaningful, so make use of that in your design intentionally rather than introducing arbitrary differences. Adhere to the principle of least astonishment . Meet expectations instead.
  • Map features to needs . User research can be tied to features to show where requirements come from. Such a mapping can help preserve design rationale for the next round or the next team.
  • When designing software, ensure that installation and updating is easy . Make installation quick and unobtrusive. Allow people to control updating if they want to.
  • When designing devices, plan for repair and recycling . Sustainability and reuse are more important than ever. Design for conservation.
  • Avoid waste . Reduce and eliminate nonessential packaging and disposable parts. Avoid wasting people’s time, also. Streamline.
  • Consider system usability in different cultural contexts . You are not your user. Plan how to ensure that your systems work for people in other countries . Translation is only part of the challenge.
  • Look for perverse incentives . Perverse incentives lead to negative unintended consequences. How can people game the system or exploit it? How might you be able to address that? Consider how a malicious user might use the system in unintended ways or to harm others.
  • Consider social implications . How will the system be used in groups of people, by groups of people, or against groups of people? Which problems could emerge from that group activity?
  • Protect personal information . Personal information is like money. You can spend it unwisely only once. Many want to rob the bank. Plan how to keep personal information secure over time. Avoid collecting information that isn’t required, and destroy older data routinely.
  • Keep data safe . Limit access to both research data and the data entrusted to the company by customers. Advocate for encryption of data at rest and secure transport. A data breach is a terrible user experience.
  • Deliver both good and bad news. It’s human nature to be reluctant to tell people what they don’t want to hear, but it’s essential that UX raise the tough issues. The future of the product, or even the company, may depend on decision-makers knowing what you know or suspect.
  • Track usability over time . Use indicators such as number and types of support issues, error rates and task completion in usability testing, and customer satisfaction ratings, to show the effectiveness of design improvements.
  • Include diverse users . People can be very different culturally and physically. They also have a range of abilities and language skills. Personas are not enough to prevent serious problems, so be sure your testing includes as wide a variety of people as you can.
  • Track usability bugs . If usability bugs don’t have a place in the bug database, start your own database to track important issues.
  • Pay attention to user sentiment . Social media is a great place for monitoring user problems, successes, frustrations, and word-of-mouth advertising. When competitors emerge, social media posts may be the first indication.
  • Reduce the need for training . Training is often a workaround for difficult user interfaces, and it’s expensive. Use training and help topics to look for areas ripe for design changes.
  • Communicate future directions . Customers and users depend on what they are able to do and what they know how to do with the products and services they use. Change can be good, even when disruptive, but surprise changes are often poorly received because they can break things that people are already doing. Whenever possible, ask, tell, test with, and listen to the customers and users you have. Consult with them rather than just announcing changes. Discuss major changes early, so what you hear can help you do a better job, and what they hear can help them prepare for the changes needed.
  • Recruit people for future research and testing . Actively encourage people to join your pool of volunteer testers. Offer incentives for participation and make signing up easy to do via your website, your newsletter, and other points of contact.

Use this cheat-sheet to choose appropriate UX methods and activities for your projects and to get the most out of those efforts. It’s not necessary to do everything on every project, but it’s often helpful to use a mix of methods and tend to some ongoing needs during each iteration.


A 5-Step Process For Conducting User Research

David Sherwin · Sep 23, 2013 · 19 min read


Imagine that this is what you know about me: I am a college-educated male between the ages of 35 and 45. I own a MacBook Pro and an iPhone 5, on which I browse the Internet via the Google Chrome browser. I tweet and blog publicly, where you can discover that I like chocolate and corgis. I’m married. I drive a Toyota Corolla. I have brown hair and brown eyes. My credit-card statement shows where I’ve booked my most recent hotel reservations and where I like to dine out.

If your financial services client provided you with this data, could you tell them why I’ve just decided to move my checking and savings accounts from it to a new bank? This scenario might seem implausible when laid out like this, but you’ve likely been in similar situations as an interactive designer, working with just demographics or website usage metrics.

We can discern plenty of valuable information about a customer from this data, based on what they do and when they do it. That data, however, doesn’t answer the question of why they do it, and how we can design more effective solutions to their problems through our clients’ websites, products and services. We need more context. User research helps to provide that context.

User research helps us to understand how other people live their lives, so that we can respond more effectively to their needs with informed and inspired design solutions. User research also helps us to avoid our own biases, because we frequently have to create design solutions for people who aren’t like us.

So, how does one do user research? Let me share with you a process we use at Frog to plan and conduct user research. It’s called the “research learning spiral.” The spiral was created by Erin Sanders, one of our senior interaction designers and design researchers. It has five distinct steps, which you go through when gathering information from people to fill a gap in your knowledge.

“The spiral is based on a process of learning and need-finding,” Sanders says. “It is built to be replicable and can fit into any part of the design process. It is used to help designers answer questions and overcome obstacles when trying to understand what direction to take when creating or moving a design forward.”

The first three steps of the spiral are about formulating and answering questions, so that you know what you need to learn during your research:

  • Objectives These are the questions we are trying to answer. What do we need to know at this point in the design process? What are the knowledge gaps we need to fill?
  • Hypotheses These are what we believe we already know. What are our team’s assumptions? What do we think we understand about our users, in terms of both their behaviors and our potential solutions to their needs?
  • Methods These address how we plan to fill the gaps in our knowledge. Based on the time and people available, what methods should we select?

Once you’ve answered the questions above and factored them into a one-page research plan that you can present to stakeholders , you can start gathering the knowledge you need through the selected research methods:

  • Conduct Gather data through the methods we’ve selected.
  • Synthesize Answer our research questions, and prove or disprove our hypotheses. Make sense of the data we’ve gathered to discover what opportunities and implications exist for our design efforts.

You already use this process when interacting with people, whether you are consciously conducting research or not. Imagine meeting a group of 12 clients who you have never worked with. You wonder if any of them has done user research before. You believe that only one or two of them have conducted as much user research as you and your team have. You decide to take a quick poll to get an answer to your question, asking everyone in the room to raise their hand if they’ve ever conducted user research. Five of them raise their hands. You ask them to share what types of user research they’ve conducted, jotting down notes on what they’ve done. You then factor this information into your project plan going forward.

In a matter of a few minutes, you’ve gone through the spiral to answer a single question. However, when you’re planning and conducting user research for an interactive project or product, each step you take through the spiral will require more time and energy, based on the depth and quantity of questions you need to answer. So, let’s take an in-depth spin through the research learning spiral. At each step of the spiral, I’ll share some of the activities and tools I use to aid my teams in managing the complexity of planning and conducting user research. I’ll also include a sample project to illustrate how those tools can support your team’s user research efforts.

1. Objectives: The Questions We Are Trying To Answer

Imagine that you’re in the middle of creating a next-generation program guide for TV viewers in Western Europe. Your team is debating whether to incorporate functionality for tablet and mobile users that would enable them to share brief clips from shows that they’re watching to social networks, along with their comments.

“Show clip sharing,” as the team calls it, sounds cool, but you aren’t exactly sure who this feature is for, or why users would want to use it.

Step back from the wireframing and coding, sit down with your team, and quickly discuss what you already know and understand about the product’s goal. To facilitate this discussion, ask your team to generate a series of framing questions to help them identify which gaps in knowledge they need to fill. They would write these questions down on sticky notes, one question per note, to be easily arranged and discussed.

These framing questions would take a “5 Ws and an H” structure, similar to the questions a reporter would need to answer when writing the lede of a newspaper story:

  • “Who?” questions help you to determine prospective audiences for your design work, defining their demographics and psychographics and your baseline recruiting criteria.
  • “What?” questions clarify what people might be doing, as well as what they’re using in your website, application or product.
  • “When?” questions help you to determine the points in time when people might use particular products or technologies, as well as daily routines and rhythms of behavior that might need to be explored.
  • “Where?” questions help you to determine contexts of use — physical locations where people perform certain tasks or use key technologies — as well as potential destinations on the Internet or devices that a user might want to access.
  • “Why?” questions help you to explain the underlying emotional and rational drivers of what a person is doing, and the root reasons for that behavior.
  • “How?” questions help you go into detail on what explicit actions or steps people take in order to perform tasks or reach their goals.

In less than an hour, you and your team can generate a variety of framing questions, such as:

  • “Who would share program clips?”
  • “How frequently would viewers share clips?”
  • “Why would people choose to share clips?”

Debate which questions need to be answered right away and which would be valuable to consider further down the road. “Now is your time to ask the more ‘out there’ questions,” says Lauren Serota, an associate creative director at Frog. “Why are people watching television in the first place? You can always narrow the focus of your questions before you start research… However, the exercise of going lateral and broad is good exercise for your brain and your team.”

When you have a good set of framing questions, you can prioritize and cluster the most important questions, translating them into research objectives. Note that research objectives are not questions. Rather, they are simple statements, such as: “Understand how people in Western Europe who watch at least 20 hours of TV a week choose to share their favorite TV moments.” These research objectives will put up guardrails around your research and appear in your one-page research plan.

Don’t overreach in your objectives. The type of questions you want to answer, and how you phrase them as your research objective, will serve as the scope for your team’s research efforts. A tightly scoped research objective might focus on a specific set of tasks or goals for the users of a given product (“Determine how infrequent TV viewers in Germany decide which programs to record for later viewing”), while a more open-ended research objective might focus more on user attitudes and behaviors, independent of a particular product (“Discover how French students decide how to spend their free time”). You need to be able to reach that objective in the time frame you have allotted for the research.

2. Hypotheses: What We Believe We Already Know

You’ve established the objectives of your research, and your head is already swimming with potential design solutions, which your team has discussed. Can’t you just go execute those ideas and ship them?

If you feel this way, you’re not alone. All designers have early ideas and assumptions about their product. Some clients may have initial hypotheses that they would like “tested” as well.

“Your hypotheses often constitute how you think and feel about the problem you’ve been asked to solve, and they fuel the early stages of work,” says Jon Freach, a design research director at Frog. Don’t be afraid to address these hypotheses and, when appropriate, integrate them into your research process to help you prove or disprove their merit. Here’s why:

  • Externalizing your hypotheses is important to becoming aware of and minimizing the influence of your team’s and client’s biases.
  • Being aware of your hypotheses will help you select the right methods to fulfill your research objective.
  • You can use your early hypotheses to help communicate what you’ve discovered through the research process. (“We believed that [insert hypothesis], but we discovered that [insert finding from research].”)

Generating research hypotheses is easy. Take your framing questions from when you formulated the objective and, as a team, spend five to eight minutes individually sketching answers to them, whether by writing out your ideas on sticky notes, sketching designs and so forth. For example, when thinking about the clip-sharing feature for your next-generation TV program guide, your team members would put their heads together and generate hypotheses such as these:

  • Attitude-related hypothesis. “TV watchers who use social networks like to hear about their friends’ favorite TV shows.”
  • Behavior-related hypothesis. “TV watchers only want to share clips from shows they watch most frequently.”
  • Feature-related hypothesis. “TV watchers are more likely to share a highlight from a show if it’s popular with other viewers as well.”

3. Methods: How We Plan To Fill The Gaps In Our Knowledge

Once you have a defined research objective and a pile of design hypotheses, you’re ready to consider which research methods are most appropriate to achieving your objective. Usually, I’ll combine methods from more than one of the following categories to achieve my research objective. (People have written whole books about this subject. See the end of this article for further reading on user research methods and processes.)

Building A Foundation

Methods in this area could include surveys, observational or contextual interviews, and market and trend explorations. Use these methods when you don’t have a good understanding of the people you are designing for, whether they’re a niche community or a user segment whose behaviors shift rapidly. If you have unanswered questions about your user base — where they go, what they do and why — then you’ll probably have to draw upon methods from this area first.

Generating Inspiration And Ideas

Methods in this area could include diary studies, card sorting, paper prototyping and other participatory design activities. Once I understand my audience’s expertise and beliefs well, I’m ready to delve deeper into what content, functionality or products would best meet their needs. This can be done by generating potential design solutions in close collaboration with research participants, as well as by receiving their feedback on early design hypotheses.

Specifically, we can do this by generating or co-creating sketches, collages, rough interface examples, diagrams and other types of stimuli, as well as by sorting and prioritizing information. These activities will help us understand how our audience views the world and what solutions we can create to fit that view (i.e. “mental models”). This helps to answer our “What,” “Where,” “When” and “How” framing questions. Feedback at this point is not meant to refine any tight design concepts or code prototypes. Instead, it opens up new possibilities.

Evaluating And Informing Design

Methods in this area could include usability testing, heuristic evaluations, cognitive walkthroughs and paper prototyping. Once we’ve identified the functionality or content that’s appropriate for a user, how do we present it to them in a manner that’s useful and delightful? I use methods in this area to refine design comps, simulations and code prototypes. This helps us to answer questions about how users would want to use a product or to perform a key task. This feedback is critical and, as part of an iterative design process, enables us to refine and advance concepts to better meet user needs.

Let’s go back to our hypothetical example, so that you can see how your research objective and hypotheses determine which methods your team will select. Take all of your hypotheses — I like to start with at least 100 hypotheses — and arrange them on a continuum:

On the left, place hypotheses related to who your users are, where they live and work, their goals, their needs and so forth. On the right, place hypotheses that have to do with explicit functionality or design solutions you want to test with users. In the center, place hypotheses related to the types of content or functionality that you think might be relevant to users. The point of this activity is not to create an absolute scale or arrangement of the hypotheses you’ve created so far. The point is for your team to cluster the hypotheses, finding important themes or affinities that will help you to select particular methods. Serota says:

"Choosing and refining your methods and approach is a design project within itself. It takes iteration, practice and time. Test things out on your friends and coworkers to see what works and the best way to ask open-ended questions."

Back to our clip-sharing research effort. When your team looks at all of the hypotheses you’ve created to date, it will realize that using two research methods would be most valuable. The first method will be a participatory design activity, in which you’ll create with users a timeline of where and when they share their favorite TV moments with others. This will give your team foundational knowledge of situations in which clips might be shared, as well as generate opportunities for clip-sharing that you can discuss with users.

The second method will be an evaluative paper-prototyping activity, in which you will present higher-fidelity paper prototypes of ideas on how people can share TV clips. This method will help you address your hypotheses on what solutions make the most sense in sharing situations. (Using two methods is best because mixing and matching hypotheses across different categories within a research session could confuse research participants.)

4. Conduct: Gather Data Through The Methods We’ve Selected

The research plan is done, and you have laid out your early hypotheses on the table. Now you get to conduct the appropriate research methods. Your team will recruit eight users to meet with for one hour each over three evenings, which will allow you to speak with people when they’re most likely to be watching TV. Develop an interview guide and stimuli, and test draft versions of your activities on coworkers. Then, go into the field to conduct your research.

When you do this, it’s essential that you facilitate the research sessions properly, capturing and analyzing the notes, photos, videos and other materials that you collect as you go.

Serota also recommends thinking on your feet: “It’s all right to change course or switch something up in the field. You wouldn’t be learning if you didn’t have to shift at least a little bit.” Ask yourself, “Am I discovering what I need to learn in order to reach my objective? Or am I gathering information that I already know?” If you’re not gaining new knowledge, then one of the following is probably the reason why:

  • You’ve already answered your research questions but haven’t taken the time to formulate new questions and hypotheses in order to dig deeper (otherwise, you could stop conducting research and move immediately into synthesis).
  • The people who you believed were the target audience are, in fact, not. You’ll need to change the recruitment process (and the demographics or psychographics by which you selected them).
  • Your early design hypotheses are a poor fit. So, consider improving them or generating more.
  • The methods you’ve selected are not appropriate. So, adapt or change them.
  • You are spending all of your time in research sessions with users, rather than balancing research sessions with analysis of what you’ve discovered.

5. Synthesis: Answer Our Research Questions, And Prove Or Disprove Our Hypotheses

Now that you’ve gathered research data, it’s time to capture the knowledge required to answer your research questions and to advance your design goals. “In synthesis, you’re trying to find meaning in your data,” says Serota. “This is often a messy process — and can mean reading between the lines and not taking a quote or something observed at face value. The why behind a piece of data is always more important than the what .”

The more time you have for synthesis, the more meaning you can extract from the research data. In the synthesis stage, regularly ask yourself and your team the following questions:

  • “What am I learning?”
  • “Does what I’ve learned change how we should frame the original research objective?”
  • “Did we prove or disprove our hypotheses?”
  • “Is there a pattern in the data that suggests new design considerations?”
  • “What are the implications of what I’m designing?”
  • “What outputs are most important for communicating what we’ve discovered?”
  • “Do I need to change what design activities I plan to do next?”
  • “What gaps in knowledge have I uncovered and might need to research at a later date?”

So, what did your team discover from your research into sharing TV clips? TV watchers do want to share clips from their favorite programs, but they are also just as likely to share clips from programs they don’t watch frequently if they find the clips humorous. They do want to share TV clips with friends in their social networks, but they don’t want to continually spam everyone in their Facebook or Twitter feed. They want to target family, close friends or specific individuals with clips that they believe those people would find particularly interesting.

Your team should assemble concise, actionable findings and revise its wireframes to reflect the necessary changes, based on the answers you’ve gathered. Now your team will have more confidence in the solution, and when your designs for the feature have been coded, you’ll take another spin through the research learning spiral to evaluate whether you got it right.

Other Resources On User-Research Practices And Methods

The spiral makes it clear that user research is not simply about card sorting, paper prototyping, usability studies and contextual interviews, per se. Those are just methods that researchers use to find answers to critical questions — answers that fuel their design efforts. Still, understanding what methods are available to you, and mastering those methods, can take some time. Below are some books and websites that will help you dive deeper into the user-research process and methods as part of your professional practice.

  • Observing the User Experience, Second Edition: A Practitioner’s Guide to User Research, by Elizabeth Goodman, Mike Kuniavsky and Andrea Moed. A comprehensive guide to user research that goes deep into many of the methods mentioned in this article.
  • Universal Methods of Design, by Bruce Hanington and Bella Martin. A comprehensive overview of 100 methods that can be employed at various points in the user-research and design process.
  • 101 Design Methods: A Structured Approach for Driving Innovation in Your Organization, by Vijay Kumar. Places the user-research process in the context of product and service innovation.
  • Design Library, Austin Center for Design (AC4D). An in-depth series of PDFs and worksheets that cover processes related to user-research planning, methods and synthesis.

Further Reading

  • Facing Your Fears: Approaching People For Research
  • A Closer Look At Personas
  • The Rainbow Spreadsheet: A Collaborative Lean UX Research Tool
  • How Copywriting Can Benefit From User Research



How to Write a Strong Hypothesis | Steps & Examples

Published on May 6, 2022 by Shona McCombes . Revised on November 20, 2023.

A hypothesis is a statement that can be tested by scientific research. If you want to test a relationship between two or more variables, you need to write hypotheses before you start your experiment or data collection.

Example: Daily apple consumption leads to fewer doctor’s visits.

Table of contents

  • What is a hypothesis?
  • Developing a hypothesis (with example)
  • Hypothesis examples
  • Other interesting articles
  • Frequently asked questions about writing hypotheses

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess – it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Variables in hypotheses

Hypotheses propose a relationship between two or more types of variables.

  • An independent variable is something the researcher changes or controls.
  • A dependent variable is something the researcher observes and measures.

If there are any control variables, extraneous variables, or confounding variables, be sure to jot those down as you go to minimize the chances that research bias will affect your results.

For example, take the hypothesis “Daily exposure to the sun leads to increased levels of happiness.” In this example, the independent variable is exposure to the sun – the assumed cause. The dependent variable is the level of happiness – the assumed effect.

Developing a hypothesis (with example)

Step 1. Ask a question

Writing a hypothesis begins with a research question that you want to answer. The question should be focused, specific, and researchable within the constraints of your project.

Step 2. Do some preliminary research

Your initial answer to the question should be based on what is already known about the topic. Look for theories and previous studies to help you form educated assumptions about what your research will find.

At this stage, you might construct a conceptual framework to ensure that you’re embarking on a relevant topic. This can also help you identify which variables you will study and what you think the relationships are between them. Sometimes, you’ll have to operationalize more complex constructs.

Step 3. Formulate your hypothesis

Now you should have some idea of what you expect to find. Write your initial answer to the question in a clear, concise sentence.

Step 4. Refine your hypothesis

You need to make sure your hypothesis is specific and testable. There are various ways of phrasing a hypothesis, but all the terms you use should have clear definitions, and the hypothesis should contain:

  • The relevant variables
  • The specific group being studied
  • The predicted outcome of the experiment or analysis

Step 5. Phrase your hypothesis in three ways

To identify the variables, you can write a simple prediction in if…then form. The first part of the sentence states the independent variable and the second part states the dependent variable. For example: “If a first-year student starts attending more lectures, then their exam scores will improve.”

In academic research, hypotheses are more commonly phrased in terms of correlations or effects, where you directly state the predicted relationship between variables: “The number of lectures attended by first-year students has a positive effect on their exam scores.”

If you are comparing two groups, the hypothesis can state what difference you expect to find between them: “First-year students who attended most lectures will have better exam scores than those who attended few lectures.”

Step 6. Write a null hypothesis

If your research involves statistical hypothesis testing, you will also have to write a null hypothesis. The null hypothesis is the default position that there is no association between the variables. The null hypothesis is written as H₀, while the alternative hypothesis is H₁ or Hₐ. For example:

  • H₀: The number of lectures attended by first-year students has no effect on their final exam scores.
  • H₁: The number of lectures attended by first-year students has a positive effect on their final exam scores.
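To see how you might test this pair of hypotheses in practice, here is a minimal sketch in Python. The attendance and score figures are invented for illustration, and simple linear regression is just one reasonable choice of test:

```python
# Minimal sketch: testing whether lecture attendance predicts exam scores.
# The data below are invented for illustration only.
from scipy import stats

lectures_attended = [4, 8, 10, 12, 15, 18, 20, 22, 25, 28]   # per student
final_exam_scores = [52, 55, 61, 58, 66, 70, 68, 75, 80, 83]  # out of 100

result = stats.linregress(lectures_attended, final_exam_scores)
print(f"slope = {result.slope:.2f}, p-value = {result.pvalue:.4f}")

# Reject H0 at the 5% significance level only if p < 0.05 and the slope
# is positive, which is the direction H1 predicts.
if result.pvalue < 0.05 and result.slope > 0:
    print("Evidence supports H1: attendance has a positive effect.")
else:
    print("Fail to reject H0.")
```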
Hypothesis examples

  • Research question: What are the health benefits of eating an apple a day?
    Hypothesis: Increasing apple consumption in over-60s will result in decreasing frequency of doctor’s visits.
    Null hypothesis: Increasing apple consumption in over-60s will have no effect on frequency of doctor’s visits.
  • Research question: Which airlines have the most delays?
    Hypothesis: Low-cost airlines are more likely to have delays than premium airlines.
    Null hypothesis: Low-cost and premium airlines are equally likely to have delays.
  • Research question: Can flexible work arrangements improve job satisfaction?
    Hypothesis: Employees who have flexible working hours will report greater job satisfaction than employees who work fixed hours.
    Null hypothesis: There is no relationship between working hour flexibility and job satisfaction.
  • Research question: How effective is high school sex education at reducing teen pregnancies?
    Hypothesis: Teenagers who received sex education lessons throughout high school will have lower rates of unplanned pregnancy than teenagers who did not receive any sex education.
    Null hypothesis: High school sex education has no effect on teen pregnancy rates.
  • Research question: What effect does daily use of social media have on the attention span of under-16s?
    Hypothesis: There is a negative correlation between time spent on social media and attention span in under-16s.
    Null hypothesis: There is no relationship between social media use and attention span in under-16s.

Other interesting articles

If you want to know more about the research process, methodology, research bias, or statistics, make sure to check out some of our other articles with explanations and examples.

Methodology

  • Sampling methods
  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

Frequently asked questions about writing hypotheses

Null and alternative hypotheses are used in statistical hypothesis testing . The null hypothesis of a test always predicts no effect or no relationship between variables, while the alternative hypothesis states your research prediction of an effect or relationship.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.



The Craft of Writing a Strong Hypothesis

Deeptanshu D


Writing a hypothesis is one of the essential elements of a scientific research paper. It needs to be to the point, clearly communicating what your research is trying to accomplish. A blurry, drawn-out, or complexly-structured hypothesis can confuse your readers. Or worse, the editor and peer reviewers.

A captivating hypothesis is not too intricate. This blog will take you through the process so that, by the end of it, you have a better idea of how to convey your research paper's intent in just one sentence.

What is a Hypothesis?

The first step in your scientific endeavor, a hypothesis, is a strong, concise statement that forms the basis of your research. It is not the same as a thesis statement, which is a brief summary of your research paper.

The sole purpose of a hypothesis is to predict your paper’s findings, data, and conclusion. It comes from a place of curiosity and intuition. When you write a hypothesis, you’re essentially making an educated guess based on existing scientific knowledge and evidence, which is further proven or disproven through the scientific method.

The reason for undertaking research is to observe a specific phenomenon. A hypothesis, therefore, lays out what the said phenomenon is. And it does so through two variables, an independent and dependent variable.

The independent variable is the cause behind the observation, while the dependent variable is the effect of the cause. A good example of this is “mixing red and blue forms purple.” In this hypothesis, mixing red and blue is the independent variable, as you’re combining the two colors at will. The formation of purple is the dependent variable, as it is conditional on the independent variable.

Different Types of Hypotheses


Some would stand by the notion that there are only two types of hypotheses: a null hypothesis and an alternative hypothesis. While that has some truth to it, it is better to distinguish the most common forms fully, because these terms come up often and you’ll want the context.

Apart from null and alternative, there are complex, simple, directional, non-directional, statistical, and associative and causal hypotheses. They don’t necessarily have to be exclusive, as one hypothesis can tick many boxes, but knowing the distinctions between them will make it easier for you to construct your own.

1. Null hypothesis

A null hypothesis proposes no relationship between two variables. Denoted by H₀, it is a negative statement like “Attending physiotherapy sessions does not affect athletes’ on-field performance.” Here, the author claims physiotherapy sessions have no effect on on-field performances. Even if there appears to be one, it’s only a coincidence.

2. Alternative hypothesis

Considered to be the opposite of a null hypothesis, an alternative hypothesis is denoted as H₁ or Hₐ. It explicitly states that the independent variable affects the dependent variable. A good alternative hypothesis example is “Attending physiotherapy sessions improves athletes’ on-field performance.” or “Water evaporates at 100 °C.” The alternative hypothesis further branches into directional and non-directional.

  • Directional hypothesis: A hypothesis that states whether the result will be positive or negative is called a directional hypothesis. It accompanies H₁ with either a ‘<’ or ‘>’ sign.
  • Non-directional hypothesis: A non-directional hypothesis only claims an effect on the dependent variable. It does not clarify whether the result will be positive or negative. The sign for a non-directional hypothesis is ‘≠’ (see the notation example below).
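To make the two forms concrete in notation, suppose we compare a treatment-group mean μ₁ with a control-group mean μ₂ (the symbols are chosen here purely for illustration):

  • Directional: H₀: μ₁ = μ₂ versus H₁: μ₁ > μ₂ (or H₁: μ₁ < μ₂)
  • Non-directional: H₀: μ₁ = μ₂ versus H₁: μ₁ ≠ μ₂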

3. Simple hypothesis

A simple hypothesis is a statement made to reflect the relation between exactly two variables: one independent and one dependent. Consider the example “Smoking is a prominent cause of lung cancer.” The dependent variable, lung cancer, is dependent on the independent variable, smoking.

4. Complex hypothesis

In contrast to a simple hypothesis, a complex hypothesis implies the relationship between multiple independent and dependent variables. For instance, “Individuals who eat more fruits tend to have higher immunity, lower cholesterol, and a higher metabolism.” The independent variable is eating more fruits, while the dependent variables are higher immunity, lower cholesterol, and a higher metabolism.

5. Associative and causal hypothesis

Associative and causal hypotheses don’t specify how many variables there will be; they define the relationship between the variables. In an associative hypothesis, changing any one variable, dependent or independent, affects the others. In a causal hypothesis, the independent variable directly affects the dependent.

6. Empirical hypothesis

Also referred to as the working hypothesis, an empirical hypothesis claims a theory's validation via experiments and observation. This way, the statement appears justifiable and different from a wild guess.

Say the hypothesis is “Women who take iron tablets face a lesser risk of anemia than those who take vitamin B12.” This is an example of an empirical hypothesis where the researcher validates the statement after assessing a group of women who take iron tablets and charting the findings.

7. Statistical hypothesis

The point of a statistical hypothesis is to test an already existing hypothesis by studying a population sample. Hypotheses like “44% of the Indian population belongs to the age group of 22–27” leverage evidence to prove or disprove a particular statement.
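As an illustration of how such a claim might be checked against a sample, here is a minimal one-sample proportion z-test in Python; the sample size and counts are invented for illustration:

```python
# Minimal sketch: one-sample z-test of the claimed proportion p0 = 0.44.
# The sample numbers are invented for illustration only.
import math
from scipy import stats

p0 = 0.44        # claimed population proportion (the statistical hypothesis)
n = 1000         # hypothetical sample size
successes = 405  # hypothetical respondents aged 22-27

p_hat = successes / n
se = math.sqrt(p0 * (1 - p0) / n)    # standard error assuming H0 is true
z = (p_hat - p0) / se                # test statistic
p_value = 2 * stats.norm.sf(abs(z))  # two-sided p-value

print(f"sample proportion = {p_hat:.3f}, z = {z:.2f}, p = {p_value:.4f}")
# A small p-value (say, below 0.05) is evidence against the claimed 44%.
```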

Characteristics of a Good Hypothesis

Writing a hypothesis is essential as it can make or break your research for you. That includes your chances of getting published in a journal. So when you're designing one, keep an eye out for these pointers:

  • A research hypothesis has to be simple yet clear to look justifiable enough.
  • It has to be testable — your research would be rendered pointless if the hypothesis is too far-fetched or beyond the reach of current technology.
  • It has to be precise about the results — what you are trying to do and achieve through it should come out in your hypothesis.
  • A research hypothesis should be self-explanatory, leaving no doubt in the reader's mind.
  • If you are developing a relational hypothesis, you need to include the variables and establish an appropriate relationship among them.
  • A hypothesis must keep and reflect the scope for further investigations and experiments.

Separating a Hypothesis from a Prediction

Outside of academia, hypothesis and prediction are often used interchangeably. In research writing, this is not only confusing but also incorrect. And although a hypothesis and prediction are guesses at their core, there are many differences between them.

A hypothesis is an educated guess or even a testable prediction validated through research. It aims to analyze the gathered evidence and facts to define a relationship between variables and put forth a logical explanation behind the nature of events.

Predictions are assumptions or expected outcomes made without any backing evidence. They are more speculative, regardless of where they originate.

For this reason, a hypothesis holds much more weight than a prediction. It sticks to the scientific method rather than pure guesswork. “Planets revolve around the Sun” is an example of a hypothesis, as it is based on previous knowledge and observed trends. Additionally, we can test it through the scientific method.

Whereas "COVID-19 will be eradicated by 2030." is a prediction. Even though it results from past trends, we can't prove or disprove it. So, the only way this gets validated is to wait and watch if COVID-19 cases end by 2030.

Finally, How to Write a Hypothesis


1. Be clear about your research question

A hypothesis should instantly address the research question or the problem statement. To do so, you need to ask a question. Understand the constraints of your undertaken research topic and then formulate a simple and topic-centric problem. Only after that can you develop a hypothesis and further test for evidence.

2. Carry out a recce

Once you have your research's foundation laid out, it would be best to conduct preliminary research. Go through previous theories, academic papers, data, and experiments before you start curating your research hypothesis. It will give you an idea of your hypothesis's viability or originality.

Making use of references from relevant research papers helps draft a good research hypothesis. SciSpace Discover offers a repository of over 270 million research papers to browse through and gain a deeper understanding of related studies on a particular topic. Additionally, you can use SciSpace Copilot, your AI research assistant, to read any lengthy research paper and get a more summarized context of it. A hypothesis can be formed after evaluating many such summarized research papers. Copilot also offers explanations for theories and equations, explains papers in a simplified version, allows you to highlight any text in the paper or clip math equations and tables, and provides a deeper, clearer understanding of what is being said. This can improve the hypothesis by helping you identify potential research gaps.

3. Create a 3-dimensional hypothesis

Variables are an essential part of any reasonable hypothesis. So, identify your independent and dependent variable(s) and form a correlation between them. The ideal way to do this is to write the hypothetical assumption in the ‘if-then' form. If you use this form, make sure that you state the predefined relationship between the variables.

In another way, you can choose to present your hypothesis as a comparison between two variables. Here, you must specify the difference you expect to observe in the results.

4. Write the first draft

Now that everything is in place, it's time to write your hypothesis. For starters, create the first draft. In this version, write what you expect to find from your research.

Clearly separate your independent and dependent variables and the link between them. Don't fixate on syntax at this stage. The goal is to ensure your hypothesis addresses the issue.

5. Proof your hypothesis

After preparing the first draft of your hypothesis, you need to inspect it thoroughly. It should tick all the boxes, like being concise, straightforward, relevant, and accurate. Your final hypothesis has to be well-structured as well.

Research projects are an exciting and crucial part of being a scholar. And once you have your research question, you need a great hypothesis to begin conducting research. Thus, knowing how to write a hypothesis is very important.

Now that you have a firmer grasp on what a good hypothesis constitutes, the different kinds there are, and what process to follow, you will find it much easier to write your hypothesis, which ultimately helps your research.

Now it's easier than ever to streamline your research workflow with SciSpace Discover. Its integrated, comprehensive end-to-end platform for research allows scholars to easily discover, write and publish their research and fosters collaboration.

It includes everything you need, including a repository of over 270 million research papers across disciplines, SEO-optimized summaries and public profiles to show your expertise and experience.

If you found these tips on writing a research hypothesis useful, head over to our blog on Statistical Hypothesis Testing to learn about the top researchers, papers, and institutions in this domain.

Frequently Asked Questions (FAQs)

1. What is the definition of a hypothesis?

According to the Oxford dictionary, a hypothesis is defined as “An idea or explanation of something that is based on a few known facts, but that has not yet been proved to be true or correct”.

2. What is an example of a hypothesis?

The hypothesis is a statement that proposes a relationship between two or more variables. An example: "If we increase the number of new users who join our platform by 25%, then we will see an increase in revenue."

3. What is an example of a null hypothesis?

A null hypothesis is a statement that there is no relationship between two variables. The null hypothesis is written as H0. The null hypothesis states that there is no effect. For example, if you're studying whether or not a particular type of exercise increases strength, your null hypothesis will be "there is no difference in strength between people who exercise and people who don't."

4. What are the types of research?

• Fundamental research
• Applied research
• Qualitative research
• Quantitative research
• Mixed research
• Exploratory research
• Longitudinal research
• Cross-sectional research
• Field research
• Laboratory research
• Fixed research
• Flexible research
• Action research
• Policy research
• Classification research
• Comparative research
• Causal research
• Inductive research
• Deductive research

5. How to write a hypothesis?

• Your hypothesis should be able to predict the relationship and outcome.
• Avoid wordiness by keeping it simple and brief.
• Your hypothesis should contain observable and testable outcomes.
• Your hypothesis should be relevant to the research question.

6. What are the 2 types of hypotheses?

• Null hypotheses are used to test the claim that “there is no difference between two groups of data.”
• Alternative hypotheses test the claim that “there is a difference between two data groups.”

7. Difference between research question and research hypothesis?

A research question is a broad, open-ended question you will try to answer through your research. A hypothesis is a statement based on prior research or theory that you expect your study to support or refute. Example – Research question: What are the factors that influence the adoption of a new technology? Research hypothesis: There is a positive relationship between age, education, and income level and the adoption of the new technology.

8. What is plural for hypothesis?

The plural of hypothesis is hypotheses. Here's an example of how it would be used in a statement, "Numerous well-considered hypotheses are presented in this part, and they are supported by tables and figures that are well-illustrated."

9. What is the red queen hypothesis?

The Red Queen hypothesis in evolutionary biology states that species must constantly evolve to avoid extinction, because if they don’t, they will be outcompeted by other species that are evolving. Leigh Van Valen first proposed it in 1973; since then, it has been tested and substantiated many times.

10. Who is known as the father of null hypothesis?

The father of the null hypothesis is Sir Ronald Fisher. He introduced the concept of null hypothesis testing in his 1925 book Statistical Methods for Research Workers, and he was also the first to use the term itself.

11. When to reject null hypothesis?

You need to find a significant difference between your two populations to reject the null hypothesis. You can determine that by running statistical tests such as an independent sample t-test or a dependent sample t-test. You should reject the null hypothesis if the p-value is less than 0.05.
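For instance, here is a minimal sketch of an independent-samples t-test in Python, using invented strength scores for an exercise group and a control group:

```python
# Minimal sketch: independent-samples t-test on two groups.
# The strength scores are invented for illustration only.
from scipy import stats

exercisers     = [62, 70, 68, 75, 71, 66, 73, 69]
non_exercisers = [58, 61, 55, 64, 60, 57, 63, 59]

t_stat, p_value = stats.ttest_ind(exercisers, non_exercisers)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Reject the null hypothesis: the groups differ.")
else:
    print("Fail to reject the null hypothesis.")
```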



How to Synthesize User Research Data in 14 Steps

When it comes to gaining valuable insights from research, collecting user data is easy. You can create and send out a survey in minutes and schedule user interviews while you’re at it. The real challenge lies in organizing research data and drawing insights that achieve your research objectives.

As you collect data, themes and patterns emerge. The quick and easy road is to look at the emerging trends and conclude based on gut feelings. But personal feelings in research lead to biased recommendations that do not achieve your research goals. 

Data synthesis often happens alongside data analysis, where you break down individual parts of a problem to understand the situation better. The best analysis leads to high-level insight alongside a roadmap for implementation. Moreover, these key insights are reliable because they are based on objective evidence, not bias.

So, how do you turn raw, bulky data into the valuable insight you need?

In this guide, I’ll explain:

  • What is data synthesis and why it’s important
  • Why you should analyze your research data
  • What makes a good insight
  • The common issues with data synthesis
  • How to analyze and synthesize data for the best insights

What Is Data Synthesis, and Why Does It Matter?

Data synthesis combines multiple data sources into a single source of truth to understand the commonalities between individual data points. 

Think of synthesis like a puzzle. Each data point you collect is a separate piece of the puzzle. Individually, they don’t make sense until you put them together to form a complete picture.

UX research synthesis allows you to gain insights and make recommendations for product teams to act on. The ability to turn research into action makes data valuable. 

What’s in a Good Research Insight?

Michael Morgan, the Senior User Experience Researcher at Bloomberg, shares some characteristics of a good insight:

1. Grounded in Real Data

Just as lawyers gather the evidence before arguing their case before a jury, UX researchers must ground their conclusions in the data they collected. Conclude from what you see, not what you feel. However, intuition is not a bad thing when the interpretation of accurate data inspires it.

2. Features Simple Language

Not all stakeholders are researchers. If you want stakeholder buy-in, use language that’s easy for anyone to understand. Using a casual tone and simple language makes your research insight more effective. Also, you can’t present every insight at once, or you’ll overwhelm your audience. Instead, communicate each insight as a standalone point to have more impact.

3. Findings that Speak to the Audience

A compelling insight moves your audience to action. It answers pertinent questions from project stakeholders that shape your primary research goals. In addition, the insight changes the mindset of the product team about the research topic. Some stakeholders may even quote sentences from your research when they find it compelling. You know your research is memorable when stakeholders are quoting insights verbatim.

4. Actionable key points  

The purpose of your insight isn’t only to report findings but to inspire action. If the product and design teams can’t act on your recommendations, there’s still work to be done.

5. Action owners

As a user researcher, how do you communicate critical insights to UX designers, product teams, and other stakeholders in a way that leads to action on your recommendations?

Based on the endowment effect, humans prefer objects they own to those they don’t. Ownership and commitment are the main ingredients of great insight. When stakeholders own insights, there’s a greater chance they’ll follow through with solutions and action.

Common Issues With Synthesizing UX Research Data

Dense Information

Dense information makes it difficult to separate useful information from redundant information. Hence, you’re unable to make a decision and may fall into the trap of regurgitating research data as insight without analysis.

Analyzing dense data requires self-discipline, collaboration skills, and prioritizing the right information. 

Large Volume of Data

Reading a ton of transcripts and field notes from user research is time-consuming. In addition, a large volume of data makes it challenging to find patterns and remember what’s important.

I always advise user researchers to upload and organize research data as soon as you collect it. When you organize qualitative research after each session, it’s easier to focus on one data set at a time instead of analyzing everything at once.

Unclear Goals for UX Research

Without setting clear goals for research, it’s easy to get carried away as you start collecting data. Since there’s no structure, the resulting insight isn’t what stakeholders want to see. 

When you lose focus with UX research, find the overlap between your research and what you should be exploring. Otherwise, you may have to repeat the UX research process.

Contradictory Findings

Contradictory findings are a common occurrence in UX research. For example, people may say one thing but exhibit different behavior when you monitor them. That’s because humans may not always know what they want, even if they believe they do.

You may experience difficulty interpreting participant feedback because the feedback is contradictory. Contradictory findings give space for bias to influence decisions. As a result, you may unconsciously ignore feedback that doesn’t match your expectations, even when it’s correct.

The best way to approach contradictory findings is to analyze your data for complementary evidence rather than validatory points. That way, you are looking at data from a multi-dimensional view instead of expecting all participants to fit a single set of rules.

Should You Trust Your Gut When Conducting Research?

In the hours you spend asking questions, observing behavior, and listening to people, you unconsciously process and form impressions of what you see or hear. Gut feeling (or intuition) is the conclusion you draw from that exposure. 

If you’re able to connect a comment with a pattern in other research data, it’s worth investigating further, right? 

Well, yes, and no.

Your instinct is powerful. It points to problems and possible solutions even before you have conclusive data. But intuition is also prone to bias. 

The best way to approach gut feelings is to analyze first, then follow your instinct. Your intuition becomes more reliable as you explore the information and gain exposure to different contexts. Your intuition should flow from data interpretation, not an assumption. Trust your gut but only when you can validate it with factual evidence.

How to Analyze and Synthesize UX Research Data

1. Share Inspiring Stories

Use stories to show your ideal persona using the product or fulfilling the goal of your research. The story should answer who, what and why questions of the situation.

Examples of stories to share include stories that:

  • Surprised you
  • Made you curious
  • Verified or refuted your assumptions

Stories build empathy and inspire action. In addition, they give project stakeholders a better understanding of your users’ needs.  

According to the Harvard Business Review, stories cause the brain to release oxytocin, a feel-good chemical associated with empathy and the desire to cooperate.

A few tips to remember when telling user stories include:

  • Make your story relatable to your audience
  • Put the reader in the scene with descriptive details
  • Tell a story that applies to the research topic

2. Define Your Point of View

A point of view (also called a user need statement) is an actionable problem statement that explains the user’s genuine need.

For example, a point of view (POV) states that the user’s need is to digest information, not spend hours on a clunky dashboard. Articulating the need for easily digested information helps product designers build an intuitive dashboard that improves user experience.

Like user stories, a POV should follow the Who-What-Why rule. To create your point of view, aim to uncover the core needs of your audience as they relate to your research topic. However, the POV is not a solution nor an indication of how you’ll satisfy user needs.

3. Frame POV With How Might We Questions

Reframing insight statements as questions provides a framework to brainstorm solutions around your audience’s needs. In addition, how might we (HMW) questions lead to ideation sessions where you explore ideas to solve design challenges innovatively.

Start by framing your POV as several questions by adding “how might we” at the beginning. For example, using our previous POV example, you may come up with the following HMW questions:

  • How might we give users access to all the information they need?
  • How might we present that information in a way that’s easy to digest?

4. Use a UX Research Repository

A user research repository is a central storehouse for UX research data. Instead of keeping data in multiple places, you store and organize your research data in a way that is easily searchable and reusable in the future.

The ease of organization and findability is a crucial feature of a research repository. It means you can apply past insight to future research, which speeds up the UX research process.


Here are a few tips to make the best use of a UX research repository:

  • Appoint a “library” or repository owner – A specific person should be in charge of running your research repository. If you have an in-house research department, they should handle the research repository.
  • Create an organizational system for your research projects – Have a structure for organizing research data, so it’s easy to find.
  • Add labels to notes, observations, and feedback – Label the data as you collect it in real-time. As a result, you speed up your analysis and build your research library simultaneously.
  • Develop Insights, nuggets, and findings – Record what you’ve learned from the research. Explain the context. Use tags and supporting data to help your audience understand your research.
  • Group, search and share your Insights – Grouping insights provides a system to quickly search and share relevant user research insights with stakeholders.

5. Run Brainstorming Sessions

There’s a common misconception that UX researchers should do all the work alone. The truth is that analysis and data synthesis are a group effort. When you feel stuck, reach out to the product and design team. Run a brainstorming session to get a diverse perspective. Use how might we questions to inspire creativity.

The stakeholders in the brainstorming session should be experts in the research topic or have some background with the subject.

6. Document Interviews 

Documentation is the starting point of successful data synthesis. Document observations, body language interpretations, recurring patterns, and verbatim quotes that stand out during user interviews. It’s also good practice to take photos and record audio and video of interviews for future reference.

7. Collect and Organize Data

Qualitative research quickly becomes chaos when you have data all over the place. So, the first step in the analysis is to collect research data and organize them in an easily accessible way. Creating a system for managing files makes it easier to reference and analyze.

Digital organization is the best way to organize files. Use Aurelius to transcribe videos, clips, highlight reels, audio, and hand-written files into digital format. Then, store them under projects with file names like session dates, participant’s name, or any category that works for you. At the end of each usability or interview session, store all files relating to the research in the correct project file.

Without a research taxonomy, it’s easy to miss essential information during synthesis. Instead, organize data by tags, notes, and naming conventions so everything is easy to access.

Here are some steps you can take to organize your data:

  • Invest in a user research repository tool – Notepads and spreadsheets require lots of manual brainpower. It’s easy to miss critical points and difficult to find information when you need it. On the other hand, UX repository tools like Aurelius are specially designed for storing and analyzing user research data.
  • Take research notes – Jot down important observations during user research sessions to find common themes for organizing data.
  • Organize with tags – Tag common themes to sort data by trends and patterns. UX research tools like Aurelius use artificial intelligence to automatically find themes and tags in your notes without human help.
  • Use consistent file names – Consistent file names make it easier to find data (see the naming sketch below).
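As one way to make file names consistent, here is a small, hypothetical Python helper; the naming pattern and its fields are assumptions for illustration, not an Aurelius feature:

```python
# Minimal sketch: build predictable session file names of the form
# project_method_participant_date.ext. The pattern is an assumption.
from datetime import date

def slug(text: str) -> str:
    """Lowercase a label and replace spaces with hyphens."""
    return text.lower().replace(" ", "-")

def session_filename(project: str, method: str, participant: str,
                     session_date: date, extension: str = "txt") -> str:
    stamp = session_date.strftime("%Y%m%d")
    return f"{slug(project)}_{slug(method)}_{slug(participant)}_{stamp}.{extension}"

# e.g. "tv-guide_interview_participant-03_20240115.txt"
print(session_filename("TV Guide", "Interview", "Participant 03", date(2024, 1, 15)))
```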

8. Consider a UX Research Taxonomy

Data is helpful beyond the scope of a single research project. UX research taxonomies are a robust tag structure for naming and classifying information within a research repository.

Here are some rules to build an effective UX research taxonomy:

  • Tag your research correctly
  • Create taxonomies based on departments, teams, user challenges, goals, product features, user persona, and more
  • Add relevant tags to your observations to reference each observation with different research scenarios

9. Refer Back to Your Research Goals

Your research goals should guide the analysis process and help you choose useful information. For example, let’s assume you’re designing an illustration toolkit for professional illustrators. Your research goal is to understand the features they want most. By referring back to your goal, you’ll focus on data that explains how your users prefer to create illustrations and what design technique they use the most.

10. Create User Personas

A persona is a fictional representation of an audience group. User personas help you understand the goals, needs, and challenges of your target audience(s).


While personas are fictional, the core qualities are based on research participants you’ve interviewed. The details you add to the personas depend on essential product features. Generally, a user persona contains:

  • Demographic information
  • Behavior and scenario

11. Map User Journeys

User journeys describe the typical situations and tasks users encounter, and the paths users take to accomplish those tasks. A user journey illustrates a preferred approach for completing a task and can serve as a reference point during design. It can also help you justify decisions to stakeholders by showing the before and after of using your product or solution as part of the user journey.

To create a user journey:

  • Choose a project scope to focus on
  • Define the user persona for the user journey
  • Determine expectations and scenarios for your user
  • Create a list of touchpoints (points of product-user interaction) and the channels associated with them
  • Sketch the journey

12. Use an Empathy Map to Understand the User’s Needs

An empathy map helps you gain a deeper understanding of the user’s needs. It is how you synthesize your observations from research data to draw insight into your user’s needs.

There are four quadrants in an empathy map:

  • What the user said
  • What the user did
  • What the user thought
  • What the user felt during the research session

You create your map by recording each user interaction in its relevant quadrant.


13. Look for Patterns With Affinity Diagrams

Affinity diagrams help you organize large amounts of data into groups of similar items to find connections between them quickly.


Take the following steps to create an affinity diagram:

  • Record all notes on individual cards
  • Look for related patterns in your data
  • Create a group for each pattern and name them
  • Add a key insight describing what you learned from each group (a minimal sketch of this grouping step follows below)
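Here is a minimal sketch of that grouping step in Python, with invented notes and tags; in practice, a repository tool handles this for you:

```python
# Minimal sketch: group tagged research notes into affinity clusters.
# The notes and tags are invented for illustration only.
from collections import defaultdict

notes = [
    ("I never know which shows my friends watch", "discovery"),
    ("I share funny clips even from shows I don't follow", "sharing"),
    ("I don't want to spam my whole feed", "sharing"),
    ("Recommendations from friends matter most", "discovery"),
]

groups = defaultdict(list)
for text, tag in notes:
    groups[tag].append(text)

# Each named group becomes a pattern; the team then writes a key insight
# summarizing what was learned from the notes in that group.
for tag, items in groups.items():
    print(f"Group: {tag} ({len(items)} notes)")
    for item in items:
        print(f"  - {item}")
```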

14. Synthesize Data 

During synthesis, you combine results from all your research data to form a fundamental understanding of the topic. You’re looking for connections that highlight ideas and potential solutions to problems. These are patterns that help you develop gut feelings or draw conclusions. 

Use tags to highlight repeatable patterns. Then, describe the insight from themes with key insights. You can quickly achieve this goal with Aurelius.

Tags in Aurelius help you identify patterns across research data such as questions, pain points, and goals. First, you can visualize data as charts to view the most recurring tags in a project. Next, you can form key insights from your tags and notes. 


A few ways to use Aurelius for Key Insights include:

  • Highlight text from research notes
  • Draw insight from past and current projects
  • Use a tag to describe the insight
  • Add supporting evidence such as highlight reels, documents, notes, video and audio clips to the Key Insight
  • Group insight by various formats such as design, feature, product, or create a custom type in Aurelius
  • Create insights from the Notes, Tags, Reports, and Key Insights Pages

Further reading

Learn how Aurelius improves UX research synthesis for product and design teams

15. Share Your Research Findings

There are many options for sharing and presenting UX research findings. A few options include:

  • Use the automatic report builder in Aurelius to turn insights and recommendations into editable UX reports. You can share these reports as a live link or via PDF
  • Case studies
  • Whitepapers

However, it’s not enough to send emails, PDFs, or links. Decision-makers are busy, and their attention spans are short. Use your company’s internal knowledge base to reach stakeholders where they already spend time, send follow-up emails, and reach out personally to executives if you want them to act on your research.

Research Is Meaningless Without Meaningful Insight

Collecting data is only the first step in research. Without analyzing and synthesizing research data, there’s no way to validate hypotheses or uncover insights that achieve your research goals.

The most valuable insights come from a genuine desire to understand your user and provide value. That desire takes the focus away from product features and reporting on facts to understanding user goals and generating meaningful insights. 

Learn how Aurelius helps you turn research data into actionable insights


What is a Research Hypothesis: How to Write it, Types, and Examples


Any research begins with a research question and a research hypothesis. A research question alone may not suffice to design the experiment(s) needed to answer it. A hypothesis is central to the scientific method. But what is a hypothesis? A hypothesis is a testable statement that proposes a possible explanation for a phenomenon, and it may include a prediction. Next, you may ask what a research hypothesis is. Simply put, a research hypothesis is a prediction or educated guess about the relationship between the variables that you want to investigate.

It is important to be thorough when developing your research hypothesis. Shortcomings in the framing of a hypothesis can affect the study design and the results. A better understanding of the research hypothesis definition and the characteristics of a good hypothesis will make it easier for you to develop your own hypothesis for your research. Let’s dive in to learn more about the types of research hypothesis, how to write a research hypothesis, and some research hypothesis examples.


What is a hypothesis?

A hypothesis is based on the existing body of knowledge in a study area. Framed before the data are collected, a hypothesis states the tentative relationship between independent and dependent variables, along with a prediction of the outcome.  

What is a research hypothesis?

Young researchers starting out on their journey are usually brimming with questions like “What is a hypothesis?”, “What is a research hypothesis?”, and “How can I write a good research hypothesis?”

A research hypothesis is a statement that proposes a possible explanation for an observable phenomenon or pattern. It guides the direction of a study and predicts the outcome of the investigation. A research hypothesis is testable, i.e., it can be supported or disproven through experimentation or observation.     

hypotheses user research

Characteristics of a good hypothesis  

Here are the characteristics of a good hypothesis:

  • Clearly formulated and free of language errors and ambiguity  
  • Concise and not unnecessarily verbose  
  • Has clearly defined variables  
  • Testable and stated in a way that allows for it to be disproven  
  • Can be tested using a research design that is feasible, ethical, and practical   
  • Specific and relevant to the research problem  
  • Rooted in a thorough literature search  
  • Can generate new knowledge or understanding.  

How to create an effective research hypothesis  

A study begins with the formulation of a research question. A researcher then performs background research. This background information forms the basis for building a good research hypothesis. The researcher then performs experiments, collects and analyzes the data, interprets the findings, and ultimately determines whether the findings support or negate the original hypothesis.

Let’s look at each step for creating an effective, testable, and good research hypothesis:

  • Identify a research problem or question: Start by identifying a specific research problem.
  • Review the literature: Conduct an in-depth review of the existing literature related to the research problem to grasp the current knowledge and gaps in the field.
  • Formulate a clear and testable hypothesis: Based on the research question, use existing knowledge to form a clear and testable hypothesis. The hypothesis should state a predicted relationship between two or more variables that can be measured and manipulated. Improve the original draft until it is clear and meaningful.
  • State the null hypothesis: The null hypothesis is a statement that there is no relationship between the variables you are studying.
  • Define the population and sample: Clearly define the population you are studying and the sample you will be using for your research.
  • Select appropriate methods for testing the hypothesis: Select appropriate research methods, such as experiments, surveys, or observational studies, which will allow you to test your research hypothesis.

Remember that creating a research hypothesis is an iterative process, i.e., you might have to revise it based on the data you collect. You may need to test and reject several hypotheses before answering the research problem.  

How to write a research hypothesis  

When you start writing a research hypothesis, use an “if–then” statement format, which states the predicted relationship between two or more variables. Clearly identify the independent variables (the variables being changed) and the dependent variables (the variables being measured), as well as the population you are studying. Review and revise your hypothesis as needed.

An example of a research hypothesis in this format is as follows:  

“If [athletes] take [daily cold water showers], then their [endurance] increases.”

Population: athletes  

Independent variable: daily cold water showers  

Dependent variable: endurance  
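In statistical notation, the same prediction can be sketched as a directional alternative hypothesis with its complementary null. The symbols below are introduced here for illustration and are not part of the original example:

```latex
% \mu_c: mean endurance of athletes taking daily cold water showers
% \mu_0: mean endurance of athletes who do not
\begin{align*}
H_1 &: \mu_c > \mu_0 \\
H_0 &: \mu_c \le \mu_0
\end{align*}
```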

You may have understood the characteristics of a good hypothesis. But note that a research hypothesis is not always confirmed; a researcher should be prepared to accept or reject the hypothesis based on the study findings.


Research hypothesis checklist  

Following from the above, here is a 10-point checklist for a good research hypothesis:

  • Testable: A research hypothesis should be able to be tested via experimentation or observation.  
  • Specific: A research hypothesis should clearly state the relationship between the variables being studied.  
  • Based on prior research: A research hypothesis should be based on existing knowledge and previous research in the field.  
  • Falsifiable: A research hypothesis should be able to be disproven through testing.  
  • Clear and concise: A research hypothesis should be stated in a clear and concise manner.  
  • Logical: A research hypothesis should be logical and consistent with current understanding of the subject.  
  • Relevant: A research hypothesis should be relevant to the research question and objectives.  
  • Feasible: A research hypothesis should be feasible to test within the scope of the study.  
  • Reflects the population: A research hypothesis should consider the population or sample being studied.  
  • Uncomplicated: A good research hypothesis is written in a way that is easy for the target audience to understand.  

By following this research hypothesis checklist, you will be able to create a research hypothesis that is strong, well-constructed, and more likely to yield meaningful results.


Types of research hypothesis  

Different types of research hypothesis are used in scientific research:

1. Null hypothesis:

A null hypothesis states that there is no change in the dependent variable due to changes in the independent variable. This means that the results are due to chance and are not significant. A null hypothesis is denoted as H0 and is stated as the opposite of what the alternative hypothesis states.

Example: “The newly identified virus is not zoonotic.”

2. Alternative hypothesis:

This states that there is a significant difference or relationship between the variables being studied. It is denoted as H1 or Ha, and it is accepted when the null hypothesis is rejected.

Example: “The newly identified virus is zoonotic.”

3. Directional hypothesis:

This specifies the direction of the relationship or difference between variables; therefore, it tends to use terms like increase, decrease, positive, negative, more, or less.

Example: “The inclusion of intervention X decreases infant mortality compared to the original treatment.”

4. Non-directional hypothesis:

A non-directional hypothesis states that a relationship or difference between variables exists, but it does not predict the direction, nature, or magnitude of that relationship. A non-directional hypothesis may be used when there is no underlying theory or when findings contradict previous research.

Example: “Cats and dogs differ in the amount of affection they express.”

5. Simple hypothesis:

A simple hypothesis predicts the relationship between a single independent variable and a single dependent variable.

Example: “Applying sunscreen every day slows skin aging.”

6. Complex hypothesis:

A complex hypothesis states the relationship or difference between two or more independent and dependent variables.

Example: “Applying sunscreen every day slows skin aging, reduces sunburn, and reduces the chances of skin cancer.” (Here, the three dependent variables are slowing skin aging, reducing sunburn, and reducing the chances of skin cancer.)

7. Associative hypothesis:

An associative hypothesis states that a change in one variable is accompanied by a change in the other variable. The associative hypothesis defines interdependency between variables.

Example: “There is a positive association between physical activity levels and overall health.”

8. Causal hypothesis:

A causal hypothesis proposes a cause-and-effect interaction between variables.

Example: “Long-term alcohol use causes liver damage.”

Note that some of the types of research hypothesis mentioned above might overlap. The types of hypothesis chosen will depend on the research question and the objective of the study.


Research hypothesis examples  

Here are some good research hypothesis examples:

“The use of a specific type of therapy will lead to a reduction in symptoms of depression in individuals with a history of major depressive disorder.”  

“Providing educational interventions on healthy eating habits will result in weight loss in overweight individuals.”  

“Plants that are exposed to certain types of music will grow taller than those that are not exposed to music.”  

“The use of the plant growth regulator X will lead to an increase in the number of flowers produced by plants.”  

Characteristics that make a research hypothesis weak are unclear variables, unoriginality, being too general or too vague, and being untestable. A weak hypothesis leads to weak research and improper methods.   

Some bad research hypothesis examples (and the reasons why they are “bad”) are as follows:  

“This study will show that treatment X is better than any other treatment.” (This statement is not testable, too broad, and does not consider other treatments that may be effective.)

“This study will prove that this type of therapy is effective for all mental disorders.” (This statement is too broad and not testable, as mental disorders are complex and different disorders may respond differently to different types of therapy.)

“Plants can communicate with each other through telepathy.” (This statement is not testable and lacks a scientific basis.)

Importance of testable hypothesis  

If a research hypothesis is not testable, the results will not prove or disprove anything meaningful. The conclusions will be vague at best. A testable hypothesis helps a researcher focus on the study outcome and understand the implication of the question and the different variables involved. A testable hypothesis helps a researcher make precise predictions based on prior research.  

To be considered testable, there must be a way to prove that the hypothesis is true or false; further, the results of the hypothesis must be reproducible.  


Frequently Asked Questions (FAQs) on research hypothesis  

1. What is the difference between a research question and a research hypothesis?

A research question defines the problem and helps outline the study objective(s). It is an open-ended statement that is exploratory or probing in nature. Therefore, it does not make predictions or assumptions. It helps a researcher identify what information to collect. A research hypothesis, however, is a specific, testable prediction about the relationship between variables. Accordingly, it guides the study design and data analysis approach.

2. When should the null hypothesis be rejected?

A null hypothesis should be rejected when the evidence from a statistical test shows that it is unlikely to be true. This happens when the p-value obtained from the test is less than the predefined significance level (e.g., 0.05). Rejecting the null hypothesis does not necessarily mean that the alternative hypothesis is true; it simply means that the evidence found is not compatible with the null hypothesis.
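As a minimal illustration of this decision rule (with synthetic data, not from any real study), a two-sample t-test in Python could look like this:

```python
# Illustrative only (synthetic data): reject H0 when the p-value from
# a two-sample t-test falls below the chosen significance level alpha.
from scipy import stats

group_a = [12.1, 11.8, 12.6, 12.3, 11.9, 12.4]  # hypothetical treatment scores
group_b = [11.2, 11.5, 11.0, 11.4, 11.3, 11.6]  # hypothetical control scores

t_stat, p_value = stats.ttest_ind(group_a, group_b)

alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject the null hypothesis")
```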

3. How can I be sure my hypothesis is testable?  

A testable hypothesis should be specific and measurable, and it should state a clear relationship between variables that can be tested with data. To ensure that your hypothesis is testable, consider the following:  

  • Clearly define the key variables in your hypothesis. You should be able to measure and manipulate these variables in a way that allows you to test the hypothesis.  
  • The hypothesis should predict a specific outcome or relationship between variables that can be measured or quantified.   
  • You should be able to collect the necessary data within the constraints of your study.  
  • It should be possible for other researchers to replicate your study, using the same methods and variables.   
  • Your hypothesis should be testable by using appropriate statistical analysis techniques, so you can draw conclusions, and make inferences about the population from the sample data.  
  • The hypothesis should be able to be disproven or rejected through the collection of data.  

4. How do I revise my research hypothesis if my data does not support it?  

If your data does not support your research hypothesis, you will need to revise it or develop a new one. You should examine your data carefully and identify any patterns or anomalies, re-examine your research question, and/or revisit your theory to look for any alternative explanations for your results. Based on your review of the data, literature, and theories, modify your research hypothesis to better align it with the results you obtained. Use your revised hypothesis to guide your research design and data collection. It is important to remain objective throughout the process.

5. I am performing exploratory research. Do I need to formulate a research hypothesis?  

As opposed to “confirmatory” research, where a researcher has some idea about the relationship between the variables under investigation, exploratory research (or hypothesis-generating research) looks into a completely new topic about which limited information is available. Therefore, the researcher will not have any prior hypotheses. In such cases, a researcher will need to develop a post-hoc hypothesis, which is generated after the results are known.

6. How is a research hypothesis different from a research question?

A research question is an inquiry about a specific topic or phenomenon, typically expressed as a question. It seeks to explore and understand a particular aspect of the research subject. In contrast, a research hypothesis is a specific statement or prediction that suggests an expected relationship between variables. It is formulated based on existing knowledge or theories and guides the research design and data analysis.

7. Can a research hypothesis change during the research process?

Yes, research hypotheses can change during the research process. As researchers collect and analyze data, new insights and information may emerge that require modification or refinement of the initial hypotheses. This can be due to unexpected findings, limitations in the original hypotheses, or the need to explore additional dimensions of the research topic. Flexibility is crucial in research, allowing for adaptation and adjustment of hypotheses to align with the evolving understanding of the subject matter.

8. How many hypotheses should be included in a research study?

The number of research hypotheses in a research study varies depending on the nature and scope of the research. It is not necessary to have multiple hypotheses in every study. Some studies may have only one primary hypothesis, while others may have several related hypotheses. The number of hypotheses should be determined based on the research objectives, research questions, and the complexity of the research topic. It is important to ensure that the hypotheses are focused, testable, and directly related to the research aims.

9. Can research hypotheses be used in qualitative research?

Yes, research hypotheses can be used in qualitative research, although they are more commonly associated with quantitative research. In qualitative research, hypotheses may be formulated as tentative or exploratory statements that guide the investigation. Instead of testing hypotheses through statistical analysis, qualitative researchers may use the hypotheses to guide data collection and analysis, seeking to uncover patterns, themes, or relationships within the qualitative data. The emphasis in qualitative research is often on generating insights and understanding rather than confirming or rejecting specific research hypotheses through statistical testing.



How to Write a Strong Hypothesis | Guide & Examples

Published on 6 May 2022 by Shona McCombes.

A hypothesis is a statement that can be tested by scientific research. If you want to test a relationship between two or more variables, you need to write hypotheses before you start your experiment or data collection.

Table of contents

  • What is a hypothesis
  • Developing a hypothesis (with example)
  • Hypothesis examples
  • Frequently asked questions about writing hypotheses

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess – it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations, and statistical analysis of data).

Variables in hypotheses

Hypotheses propose a relationship between two or more variables . An independent variable is something the researcher changes or controls. A dependent variable is something the researcher observes and measures.

For example, in the hypothesis ‘daily exposure to the sun leads to increased levels of happiness’, the independent variable is exposure to the sun (the assumed cause) and the dependent variable is the level of happiness (the assumed effect).


Step 1: Ask a question

Writing a hypothesis begins with a research question that you want to answer. The question should be focused, specific, and researchable within the constraints of your project.

Step 2: Do some preliminary research

Your initial answer to the question should be based on what is already known about the topic. Look for theories and previous studies to help you form educated assumptions about what your research will find.

At this stage, you might construct a conceptual framework to identify which variables you will study and what you think the relationships are between them. Sometimes, you’ll have to operationalise more complex constructs.

Step 3: Formulate your hypothesis

Now you should have some idea of what you expect to find. Write your initial answer to the question in a clear, concise sentence.

Step 4: Refine your hypothesis

You need to make sure your hypothesis is specific and testable. There are various ways of phrasing a hypothesis, but all the terms you use should have clear definitions, and the hypothesis should contain:

  • The relevant variables
  • The specific group being studied
  • The predicted outcome of the experiment or analysis

Step 5: Phrase your hypothesis in three ways

To identify the variables, you can write a simple prediction in if … then form. The first part of the sentence states the independent variable and the second part states the dependent variable.

In academic research, hypotheses are more commonly phrased in terms of correlations or effects, where you directly state the predicted relationship between variables.

If you are comparing two groups, the hypothesis can state what difference you expect to find between them.

Step 6: Write a null hypothesis

If your research involves statistical hypothesis testing, you will also have to write a null hypothesis. The null hypothesis is the default position that there is no association between the variables. The null hypothesis is written as H0, while the alternative hypothesis is H1 or Ha.

Research question | Hypothesis | Null hypothesis
What are the health benefits of eating an apple a day? | Increasing apple consumption in over-60s will result in decreasing frequency of doctor’s visits. | Increasing apple consumption in over-60s will have no effect on frequency of doctor’s visits.
Which airlines have the most delays? | Low-cost airlines are more likely to have delays than premium airlines. | Low-cost and premium airlines are equally likely to have delays.
Can flexible work arrangements improve job satisfaction? | Employees who have flexible working hours will report greater job satisfaction than employees who work fixed hours. | There is no relationship between working hour flexibility and job satisfaction.
How effective is secondary school sex education at reducing teen pregnancies? | Teenagers who received sex education lessons throughout secondary school will have lower rates of unplanned pregnancy than teenagers who did not receive any sex education. | Secondary school sex education has no effect on teen pregnancy rates.
What effect does daily use of social media have on the attention span of under-16s? | There is a negative correlation between time spent on social media and attention span in under-16s. | There is no relationship between social media use and attention span in under-16s.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.

A hypothesis is not just a guess. It should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations, and statistical analysis of data).

A research hypothesis is your proposed answer to your research question. The research hypothesis usually includes an explanation (‘ x affects y because …’).

A statistical hypothesis, on the other hand, is a mathematical statement about a population parameter. Statistical hypotheses always come in pairs: the null and alternative hypotheses. In a well-designed study , the statistical hypotheses correspond logically to the research hypothesis.


J Korean Med Sci. v.34(45); 2019 Nov 25


Scientific Hypotheses: Writing, Promoting, and Predicting Implications

Armen Yuri Gasparyan

1 Departments of Rheumatology and Research and Development, Dudley Group NHS Foundation Trust (Teaching Trust of the University of Birmingham, UK), Russells Hall Hospital, Dudley, West Midlands, UK.

Lilit Ayvazyan

2 Department of Medical Chemistry, Yerevan State Medical University, Yerevan, Armenia.

Ulzhan Mukanova

3 Department of Surgical Disciplines, South Kazakhstan Medical Academy, Shymkent, Kazakhstan.

Marlen Yessirkepov

4 Department of Biology and Biochemistry, South Kazakhstan Medical Academy, Shymkent, Kazakhstan.

George D. Kitas

5 Arthritis Research UK Epidemiology Unit, University of Manchester, Manchester, UK.

Scientific hypotheses are essential for progress in rapidly developing academic disciplines. Proposing new ideas and hypotheses requires thorough analyses of evidence-based data and predictions of the implications. One of the main concerns relates to the ethical implications of the generated hypotheses. The authors may need to outline potential benefits and limitations of their suggestions and target widely visible publication outlets to ignite discussion by experts and start testing the hypotheses. Not many publication outlets currently welcome hypotheses and unconventional ideas that may open gates to criticism and conservative remarks. A few scholarly journals guide the authors on how to structure hypotheses. Reflecting on general and specific issues around the subject matter is often recommended for drafting a well-structured hypothesis article. An analysis of influential hypotheses, presented in this article, particularly Strachan's hygiene hypothesis with global implications in the field of immunology and allergy, points to the need for properly interpreting and testing new suggestions. Envisaging the ethical implications of the hypotheses should be considered both by authors and journal editors during the writing and publishing process.

INTRODUCTION

We live in times of digitization that radically changes scientific research, reporting, and publishing strategies. Researchers all over the world are overwhelmed with processing large volumes of information and searching through numerous online platforms, all of which make the whole process of scholarly analysis and synthesis complex and sophisticated.

Current research activities are diversifying to combine scientific observations with analysis of facts recorded by scholars from various professional backgrounds. 1 Citation analyses and networking on social media are also becoming essential for shaping research and publishing strategies globally. 2 Learning specifics of increasingly interdisciplinary research studies and acquiring information facilitation skills aid researchers in formulating innovative ideas and predicting developments in interrelated scientific fields.

Arguably, researchers are currently offered more opportunities than in the past for generating new ideas by performing their routine laboratory activities, observing individual cases and unusual developments, and critically analyzing published scientific facts. What they need at the start of their research is to formulate a scientific hypothesis that revisits conventional theories, real-world processes, and related evidence to propose new studies and test ideas in an ethical way. 3 Such a hypothesis can be of most benefit if published in an ethical journal with wide visibility and exposure to relevant online databases and promotion platforms.

Although hypotheses are crucially important for scientific progress, only a few highly skilled researchers formulate and eventually publish their innovative ideas per se. Understandably, in an increasingly competitive research environment, most authors prefer to prioritize their ideas by discussing and testing them in their own laboratories or clinical departments and publishing research reports afterwards. However, there are instances when simple observations and research studies in a single center are not capable of explaining and testing new groundbreaking ideas. Formulating hypothesis articles first and calling for multicenter and interdisciplinary research can be a solution in such instances, potentially launching influential scientific directions, if not academic disciplines.

The aim of this article is to overview the importance and implications of infrequently published scientific hypotheses that may open new avenues of thinking and research.

Despite the seemingly established views on innovative ideas and hypotheses as essential research tools, no structured definition exists to tag the term and systematically track related articles. In 1973, the Medical Subject Heading (MeSH) of the U.S. National Library of Medicine introduced “Research Design” as a structured keyword that referred to the importance of collecting data and properly testing hypotheses, and indirectly linked the term to ethics, methods and standards, among many other subheadings.

One of the experts in the field defines “hypothesis” as a well-argued analysis of available evidence to provide a realistic (scientific) explanation of existing facts, fill gaps in public understanding of sophisticated processes, and propose a new theory or a test. 4 A hypothesis can be proven wrong partially or entirely. However, even such an erroneous hypothesis may influence progress in science by initiating professional debates that help generate more realistic ideas. The main ethical requirement for hypothesis authors is to be honest about the limitations of their suggestions. 5

EXAMPLES OF INFLUENTIAL SCIENTIFIC HYPOTHESES

Daily routine in a research laboratory may lead to groundbreaking discoveries provided the daily accounts are comprehensively analyzed and reproduced by peers. The discovery of penicillin by Sir Alexander Fleming (1928) can be viewed as a prime example of such discoveries that introduced therapies to treat staphylococcal and streptococcal infections and modulate blood coagulation. 6 , 7 Penicillin got worldwide recognition due to the inventor's seminal works published by highly prestigious and widely visible British journals, effective ‘real-world’ antibiotic therapy of pneumonia and wounds during World War II, and euphoric media coverage. 8 In 1945, Fleming, Florey, and Chain received a much-deserved Nobel Prize in Physiology or Medicine for the discovery, which led to the mass production of the wonder drug in the U.S. and the ‘real-world practice’ that tested the use of penicillin. What remained globally unnoticed is that Zinaida Yermolyeva, the outstanding Soviet microbiologist, created the Soviet penicillin, which turned out to be more effective than the Anglo-American penicillin and entered mass production in 1943; that year marked the turning of the tide of the Great Patriotic War. 9 One reason Zinaida Yermolyeva's discovery went widely unnoticed is that her works were published exclusively in local Russian (Soviet) journals.

The past decades have been marked by an unprecedented growth of multicenter and global research studies involving hundreds and thousands of human subjects. This trend is shaped by an increasing number of reports on clinical trials and large cohort studies that create a strong evidence base for practice recommendations. Mega-studies may help generate and test large-scale hypotheses aiming to solve health issues globally. Properly designed epidemiological studies, for example, may introduce clarity to the hygiene hypothesis that was originally proposed by David Strachan in 1989. 10 David Strachan studied the epidemiology of hay fever in a cohort of 17,414 British children and concluded that declining family size and improved personal hygiene had reduced the chances of cross infections in families, resulting in epidemics of atopic disease in post-industrial Britain. Over the past four decades, several related hypotheses have been proposed to expand the potential role of symbiotic microorganisms and parasites in the development of human physiological immune responses early in life and protection from allergic and autoimmune diseases later on. 11 , 12 Given the popularity and the scientific importance of the hygiene hypothesis, it was introduced as a MeSH term in 2012. 13

Hypotheses can be proposed based on an analysis of recorded historic events that resulted in mass migrations and the spread of certain genetic diseases. As a prime example, familial Mediterranean fever (FMF), the prototype periodic fever syndrome, is believed to have spread from Mesopotamia to the Mediterranean region and all over Europe due to migrations and religious persecutions millennia ago. 14 Genetic mutations underlying mild clinical forms of FMF are hypothesized to have emerged and persisted in the Mediterranean region as protective factors against more serious infectious diseases, particularly tuberculosis, historically common in that part of the world. 15 The speculations over the advantages of carrying the MEditerranean FeVer (MEFV) gene are further strengthened by recorded low mortality rates from tuberculosis among FMF patients of different nationalities living in Tunisia in the first half of the 20th century. 16

Diagnostic hypotheses shedding light on peculiarities of diseases throughout the history of mankind can be formulated using artefacts, particularly historic paintings. 17 Such paintings may reveal joint deformities and disfigurements due to rheumatic diseases in individual subjects. A series of paintings with similar signs of pathological conditions, interpreted in a historic context, may uncover the mysteries of epidemics of certain diseases, as is the case with Rubens' paintings depicting signs of rheumatic hands, which lead some doctors to believe that rheumatoid arthritis was common in Europe in the 16th and 17th centuries. 18

WRITING SCIENTIFIC HYPOTHESES

A few journals provide author instructions that specifically guide authors on how to structure and format submissions categorized as hypotheses and make them attractive. One example is presented by Med Hypotheses, the flagship journal in its field, with more than four decades of publishing and influencing hypothesis authors globally. However, such guidance is not based on widely discussed, implemented, and approved reporting standards, which are becoming mandatory for all scholarly journals.

Generating new ideas and scientific hypotheses is a sophisticated task, since not all researchers and authors are skilled at planning, conducting, and interpreting various research studies. Some experience with formulating focused research questions and strong working hypotheses for original research studies is definitely helpful for advancing critical appraisal skills. However, aspiring authors of scientific hypotheses may need something different, which is more related to discerning scientific facts, pooling homogeneous data from primary research works, and synthesizing new information in a systematic way by analyzing similar sets of articles. To some extent, this activity is reminiscent of writing narrative and systematic reviews. As in the case of reviews, scientific hypotheses need to be formulated on the basis of comprehensive search strategies to retrieve all available studies on the topics of interest and then synthesize new information by selectively referring to the most relevant items. One of the main differences between scientific hypothesis and review articles relates to the volume of supportive literature sources (Table 1). In fact, a hypothesis is usually formulated by referring to a few scientific facts or compelling evidence derived from a handful of literature sources. 19 By contrast, reviews require analyses of a large number of published documents retrieved from several well-organized and evidence-based databases in accordance with predefined search strategies. 20 , 21 , 22

Table 1

Characteristics | Hypothesis | Narrative review | Systematic review
Authors and contributors | Any researcher with interest in the topic | Usually seasoned authors with vast experience in the subject | Any researcher with interest in the topic; information facilitators as contributors
Registration | Not required | Not required | Registration of the protocol with the PROSPERO registry is required to avoid redundancies
Reporting standards | Not available | Not available | Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) standard
Search strategy | Searches through credible databases to retrieve items supporting and opposing the innovative ideas | Searches through multidisciplinary and specialist databases to comprehensively cover the subject | Strict search strategy through evidence-based databases to retrieve certain types of articles (e.g., reports on trials and cohort studies), with inclusion and exclusion criteria and flowcharts of searches and selection of the required articles
Structure | Sections to cover general and specific knowledge on the topic, research design to test the hypothesis, and its ethical implications | Sections are chosen by the authors, depending on the topic | Introduction, Methods, Results and Discussion (IMRAD)
Search tools for analyses | Not available | Not available | Population, Intervention, Comparison, Outcome (Study Design) (PICO, PICOS)
References | Limited number | Extensive list | Limited number
Target journals | Handful of hypothesis journals | Numerous | Numerous
Publication ethics issues | Unethical statements and ideas in substandard journals | ‘Copy-and-paste’ writing in some reviews | Redundancy of some nonregistered systematic reviews
Citation impact | Low (with some exceptions) | High | Moderate

The format of hypotheses, especially the implications part, may vary widely across disciplines. Clinicians may limit their suggestions to the clinical manifestations of diseases, outcomes, and management strategies. Basic and laboratory scientists analysing genetic, molecular, and biochemical mechanisms may need to view beyond the frames of their narrow fields and predict social and population-based implications of the proposed ideas. 23

Advanced writing skills are essential for presenting an interesting theoretical article which appeals to the global readership. Merely listing opposing facts and ideas, without proper interpretation and analysis, may distract the experienced readers. The essence of a great hypothesis is a story behind the scientific facts and evidence-based data.

ETHICAL IMPLICATIONS

The authors of hypotheses substantiate their arguments by referring to and discerning rational points from published articles that might be overlooked by others. Their arguments may contradict the established theories and practices, and pose global ethical issues, particularly when more or less efficient medical technologies and public health interventions are devalued. The ethical issues may arise primarily because of the careless references to articles with low priorities, inadequate and apparently unethical methodologies, and concealed reporting of negative results. 24 , 25

Misinterpretation and misunderstanding of the published ideas and scientific hypotheses may complicate the issue further. For example, Alexander Fleming, whose innovative ideas of penicillin use to kill susceptible bacteria saved millions of lives, warned of the consequences of uncontrolled prescription of the drug. The issue of antibiotic resistance had emerged within the first ten years of penicillin use on a global scale due to the overprescription that affected the efficacy of antibiotic therapies, with undesirable consequences for millions. 26

The misunderstanding of the hygiene hypothesis that primarily aimed to shed light on the role of the microbiome in allergic and autoimmune diseases resulted in decline of public confidence in hygiene with dire societal implications, forcing some experts to abandon the original idea. 27 , 28 Although that hypothesis is unrelated to the issue of vaccinations, the public misunderstanding has resulted in decline of vaccinations at a time of upsurge of old and new infections.

A number of ethical issues are posed by the denial of the viral (human immunodeficiency virus; HIV) hypothesis of acquired immune deficiency syndrome (AIDS) by Peter Duesberg, who overviewed the links between illicit recreational drugs and antiretroviral therapies with AIDS and refuted the etiological role of HIV. 29 That controversial hypothesis was rejected by several journals but was eventually published without external peer review in Med Hypotheses in 2010. The publication itself raised concerns about the unconventional editorial policy of the journal, causing major perturbations and prompting more scrutinized publishing policies by journals processing hypotheses.

WHERE TO PUBLISH HYPOTHESES

Although scientific authors are currently well informed and equipped with search tools to draft evidence-based hypotheses, there are still limited quality publication outlets calling for related articles. The journal editors may be hesitant to publish articles that do not adhere to any research reporting guidelines and open gates for harsh criticism of unconventional and untested ideas. Occasionally, the editors opting for open-access publishing and upgrading their ethics regulations launch a section to selectively publish scientific hypotheses attractive to the experienced readers. 30 However, the absence of approved standards for this article type, particularly no mandate for outlining potential ethical implications, may lead to publication of potentially harmful ideas in an attractive format.

A suggestion of simultaneously publishing multiple or alternative hypotheses to balance the reader views and feedback is a potential solution for the mainstream scholarly journals. 31 However, that option alone is hardly applicable to emerging journals with unconventional quality checks and peer review, accumulating papers with multiple rejections by established journals.

A large group of experts view hypotheses with improbable and controversial ideas as publishable after formal editorial (in-house) checks to preserve the authors' genuine ideas and avoid conservative amendments imposed by external peer reviewers. 32 That approach may be acceptable for established publishers with large teams of experienced editors. However, the same approach can lead to dire consequences if employed by nonselective start-up, open-access journals processing all types of articles and primarily accepting those with charged publication fees. 33 In fact, pseudoscientific ideas disputing Newton's and Einstein's seminal works, or those denying climate change, that are hardly testable have already found their niche in substandard electronic journals with soft or nonexistent peer review. 34

CITATIONS AND SOCIAL MEDIA ATTENTION

The available preliminary evidence points to the attractiveness of hypothesis articles for readers, particularly those from research-intensive countries who actively download related documents. 35 However, citations of such articles are disproportionately low. Only a small proportion of top-downloaded hypotheses (13%) in the highly prestigious Med Hypotheses receive on average 5 citations per article within a two-year window. 36

With the exception of a few historic papers, the vast majority of hypotheses attract a relatively small number of citations in the long term. 36 Plausible explanations are that these articles often contain a single or only a few citable points and that the research studies suggested to test hypotheses are rarely conducted and reported, limiting the chances of citing and crediting the authors of genuine research ideas.

A snapshot analysis of the citation activity of hypothesis articles may reveal the interest of the global scientific community in their implications across various disciplines and countries. As a prime example, Strachan's hygiene hypothesis, published in 1989, 10 is still attracting numerous citations on Scopus, the largest bibliographic database. As of August 28, 2019, the number of the linked citations in the database is 3,201. Of the citing articles, 160 are cited at least 160 times (h-index of this research topic = 160). The first three citations were recorded in 1992 and were followed by a rapid annual increase in citation activity, with a peak of 212 in 2015 (Fig. 1). The top 5 sources of the citations are Clin Exp Allergy (n = 136), J Allergy Clin Immunol (n = 119), Allergy (n = 81), Pediatr Allergy Immunol (n = 69), and PLOS One (n = 44). The top 5 citing authors are leading experts in pediatrics and allergology Erika von Mutius (Munich, Germany, number of publications with the index citation = 30), Erika Isolauri (Turku, Finland, n = 27), Patrick G Holt (Subiaco, Australia, n = 25), David P. Strachan (London, UK, n = 23), and Bengt Björksten (Stockholm, Sweden, n = 22). The U.S. is the leading country in terms of citation activity with 809 related documents, followed by the UK (n = 494), Germany (n = 314), Australia (n = 211), and the Netherlands (n = 177). The largest proportion of citing documents are articles (n = 1,726, 54%), followed by reviews (n = 950, 29.7%) and book chapters (n = 213, 6.7%).
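For readers unfamiliar with the metric, the h-index used above is straightforward to compute: it is the largest h such that h items have at least h citations each. Here is a minimal Python sketch with made-up citation counts:

```python
# Minimal sketch of the h-index: the largest h such that h items
# have at least h citations each. The counts below are made up.
def h_index(citations: list[int]) -> int:
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([160, 42, 10, 3, 1]))  # -> 3 (three items cited at least 3 times)
```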

[Fig. 1: Annual citation activity for Strachan's hygiene hypothesis on Scopus.]

Interestingly, a recent analysis of 111 publications related to Strachan's hygiene hypothesis, stating that the lack of exposure to infections in early life increases the risk of rhinitis, revealed a selection bias among 5,551 citations on Web of Science. 37 The articles supportive of the hypothesis were cited more than nonsupportive ones (odds ratio adjusted for study design, 2.2; 95% confidence interval, 1.6–3.1). A similar conclusion pointing to a citation bias distorting the bibliometrics of hypotheses was reached by an earlier analysis of a citation network linked to the idea that β-amyloid, which is involved in the pathogenesis of Alzheimer disease, is produced by skeletal muscle of patients with inclusion body myositis. 38 The results of both studies are in line with the notion that ‘positive’ citations are more frequent in the field of biomedicine than ‘negative’ ones, and that citations to articles with proven hypotheses are too common. 39
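To make the reported statistic easier to read, here is a sketch of how an unadjusted odds ratio and a Wald 95% confidence interval are computed from a 2×2 table. The counts are hypothetical and chosen only so that the odds ratio lands near the reported 2.2; the study's own estimate was additionally adjusted for study design:

```python
# Hypothetical 2x2 table: supportive vs nonsupportive articles,
# cited vs not cited. Unadjusted odds ratio with a Wald 95% CI.
import math

a, b = 80, 40  # supportive articles: cited / not cited
c, d = 50, 55  # nonsupportive articles: cited / not cited

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")
```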

Social media channels are playing an increasingly active role in the generation and evaluation of scientific hypotheses. In fact, publicly discussing research questions on platforms such as Reddit may shape hypotheses on health-related issues of global importance, such as obesity. 40 By analyzing Twitter comments, researchers may reveal both potentially valuable ideas and unfounded claims that surround groundbreaking research ideas. 41 Social media activities, however, are unevenly distributed across different research topics, journals, and countries, and they are not always objective professional reflections of the breakthroughs in science. 2 , 42

Scientific hypotheses are essential for progress in science and advances in healthcare. Innovative ideas should be based on a critical overview of related scientific facts and evidence-based data, often overlooked by others. To generate realistic hypothetical theories, the authors should comprehensively analyze the literature and suggest relevant and ethically sound designs for future studies. They should also consider their hypotheses in the context of the research and publication ethics norms acceptable to their target journals. Journal editors aiming to diversify their portfolios by maintaining or introducing a hypotheses section are in a position to upgrade the guidelines for related articles by pointing to general and specific analyses of the subject, preferred study designs to test hypotheses, and ethical implications. The latter is closely related to the specifics of the hypotheses. For example, editorial recommendations to outline the benefits and risks of a new laboratory test or therapy may result in a more balanced article and minimize associated risks afterwards.

Not all scientific hypotheses have immediate positive effects. Some, if not most, are never tested in properly designed research studies and never cited in credible and indexed publication outlets. Hypotheses in specialized scientific fields, particularly those hardly understandable for nonexperts, lose their attractiveness for increasingly interdisciplinary audience. The authors' honest analysis of the benefits and limitations of their hypotheses and concerted efforts of all stakeholders in science communication to initiate public discussion on widely visible platforms and social media may reveal rational points and caveats of the new ideas.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Gasparyan AY, Yessirkepov M, Kitas GD.
  • Methodology: Gasparyan AY, Mukanova U, Ayvazyan L.
  • Writing - original draft: Gasparyan AY, Ayvazyan L, Yessirkepov M.
  • Writing - review & editing: Gasparyan AY, Yessirkepov M, Mukanova U, Kitas GD.

  • Open access
  • Published: 04 September 2024

How to avoid sinking in swamp: exploring the intentions of digitally disadvantaged groups to use a new public infrastructure that combines physical and virtual spaces

Chengxiang Chu, Zhenyang Shen, Hanyi Xu, Qizhi Wei & Cong Cao (ORCID: orcid.org/0000-0003-4163-2218)

Humanities and Social Sciences Communications, volume 11, Article number: 1135 (2024)


  • Science, technology and society

With advances in digital technology, physical and virtual spaces have gradually merged. For digitally disadvantaged groups, this transformation is both convenient and potentially supportive. Previous research on public infrastructure has been limited to improvements in physical facilities, and few researchers have investigated the use of mixed physical and virtual spaces. In this study, we focused on integrated virtual and physical spaces and investigated the factors affecting digitally disadvantaged groups’ intentions to use this new infrastructure. Building on the unified theory of acceptance and use of technology, we focused on social interaction anxiety, identified the characteristics of digitally disadvantaged groups, and constructed a research model to examine intentions to use the new infrastructure. We obtained 337 valid questionnaire responses and analysed them using partial least squares structural equation modelling. The results showed that performance expectancy, perceived institutional support, perceived marketplace influence, effort expectancy, and facilitating conditions each had a significant positive relationship with usage intention, whereas the influence of psychological reactance was significantly negative. Finally, social interaction anxiety moderated the effects of performance expectancy, psychological reactance, perceived marketplace influence, and effort expectancy; its effects on perceived institutional support and facilitating conditions were not significant. The results support the creation of inclusive smart cities by helping ensure that the new public infrastructure is suitable for digitally disadvantaged groups. This study also introduces the new theoretical concept of a public infrastructure that combines physical and virtual spaces, providing a forward-looking approach to studying digitally disadvantaged groups in this field and paving the way for subsequent theoretical and empirical work.


Introduction

Intelligent systems and modernisation have influenced the direction of people’s lives. With the help of continuously updated and iteratively advancing technology, modern urban construction has taken a ‘big step’ in its development. As China continues to construct smart cities, national investment in public infrastructure has steadily increased. Convenient and efficient public infrastructure has spread throughout the country, covering almost all aspects of residents’ lives and work (Guo et al. 2016 ). Previously, public infrastructure was primarily physical and located in physical spaces, but today, much of it is virtual. To achieve the goal of inclusive urban construction, the government has issued numerous relevant laws and regulations regarding public infrastructure. For example, the Chinese legislature solicited opinions from the community on the ‘Barrier-free environmental construction law of the People’s Republic of China (Draft)’.

Virtual space, based on internet technology, is a major factor in the construction of smart cities. Virtual space can be described as an interactive world built primarily on the internet (Shibusawa, 2000 ), and it has underpinned the development of national public infrastructure. In 2015, China announced its first national pilot list of smart cities, and the government began the process of building smart cities (Liu et al. 2017 ). With the continuous updating and popularisation of technologies such as the internet of things and artificial intelligence (AI) (Gu and Iop, 2020 ), virtual space is becoming widely accessible to the public. For example, in the field of government affairs, public infrastructure is now regularly developed in virtual spaces, such as on e-government platforms.

The construction of smart cities is heavily influenced by technological infrastructure (Nicolas et al. 2020). As smart cities develop, the integration of physical and virtual spaces has entered a significant stage. For example, when customers go to an offline bank to transact business, bank employees often ask them to use mobile banking software to join the queue or verify their identity. Situations such as these are neither purely virtual nor entirely physical, and in fields like banking, both dimensions need to be considered. Therefore, we propose the new concept of mixed physical and virtual spaces, in which individuals can interact, share, collaborate, coordinate with each other, and act.

Currently, new public infrastructure has emerged in mixed physical and virtual spaces, such as ‘Zheli Office’ and Alipay in Zhejiang Province, China (as shown in Fig. 1). ‘Zheli Office’ is a comprehensive government application that integrates government services through digital technology, transferring some processes from offline to online and greatly improving the convenience, efficiency, and personalisation of government services. Due to its convenient payment facilities, Alipay continues to integrate various local services, such as living payments (e.g., utility bills) and other convenience services, and has gradually become Zhejiang’s largest living-service platform. Zhejiang residents can handle almost all government and life affairs using these two applications. ‘Zheli Office’ and Alipay are key examples of the new public infrastructure in China, which is already leading the world in combining physical and virtual spaces in public infrastructure; thus, China provided a valuable research context for this study.

Figure 1. The new public infrastructure that has emerged in mixed physical and virtual spaces.

There is no doubt that the mixing of physical and virtual spaces is a helpful trend that makes life easier for most people. However, mixed physical and virtual spaces still have a threshold for their use, which makes it difficult for some groups to use the new public infrastructure effectively. Within society, there are people whose living conditions are restricted for physiological reasons. They may be elderly people, people with disabilities, or people who lack certain abilities. According to the results of China’s seventh (2021) national population census, there are 264.02 million elderly people aged 60 years and over in China, accounting for 18.7 per cent of the total population. China is expected to have a predominantly ageing population by around 2035. In addition, according to data released by the China Disabled Persons’ Federation, the total number of people with disabilities in China is more than 85 million, which is equivalent to one person with a disability for every 16 Chinese people. In this study, we downplay the differences between these groups, focusing only on common characteristics that hinder their use of the new public infrastructure. We collectively refer to these groups as digitally disadvantaged groups who may have difficulty adapting to the new public infrastructure integrating mixed physical and virtual spaces. This gap not only makes the new public infrastructure inconvenient for these digitally disadvantaged groups, but also leads to their exclusion and isolation from the advancing digital trend.

In the current context, in which the virtual and the real mix, digitally disadvantaged groups resemble stones in a turbulent flowing river. Although they can move forward, they do so with difficulty and will eventually be left behind. Besides facing the inherent inconveniences of new public infrastructure that integrates mixed physical and virtual spaces, digitally disadvantaged groups encounter additional obstacles. Unlike the traditional public infrastructure, the new public infrastructure requires users to log on to terminals, such as mobile phones, to engage with mixed physical and virtual spaces. However, a significant proportion of digitally disadvantaged groups cannot use the new public infrastructure effectively due to economic costs or a lack of familiarity with the technology. In addition, the use of facilities in physical and virtual mixed spaces requires engagement with numerous interactive elements, which further hinders digitally disadvantaged groups with weak social or technical skills.

The United Nations (UN) has identified the creation of ‘sustainable cities and communities’ as one of its Sustainable Development Goals, and the construction of smart cities can help achieve this goal (Blasi et al. 2022). Recent studies have pointed out that the spread of COVID-19 exacerbated the marginalisation of vulnerable groups, while the lack of universal service processes and virtual facilities has created significant obstacles for digitally disadvantaged groups (Narzt et al. 2016; C. H. J. Wang et al. 2021). It should be noted that smart cities result from coordinated progress between technology and society (Al-Masri et al. 2019). The development of society should not come at the expense of certain people, and improving inclusiveness is key to the construction of smart cities, which should rest on people-oriented development (Ji et al. 2021). This paper focuses on the new public infrastructure that integrates mixed physical and virtual spaces. In it, we aim to explore how improved inclusiveness for digitally disadvantaged groups can be achieved during the construction of smart cities, and we propose the following research questions:

RQ1 . In a situation where there is a mix of physical and virtual spaces, what factors affect digitally disadvantaged groups’ use of the new public infrastructure?
RQ2 . What requirements will enable digitally disadvantaged groups to participate fully in the new public infrastructure integrating mixed physical and virtual spaces?

To answer these questions, we built a research model based on the unified theory of acceptance and use of technology (UTAUT) to explore the construction of a new public infrastructure that integrates mixed physical and virtual spaces (Venkatesh et al. 2003 ). During the research process, we focused on the attitudes, willingness, and other behavioural characteristics of digitally disadvantaged groups in relation to mixed physical and virtual spaces, aiming to ultimately provide research support for the construction of highly inclusive smart cities. Compared to existing research, this study goes further in exploring the integration and interconnection of urban public infrastructure in the process of smart city construction. We conducted empirical research to delve more deeply into the factors that influence digitally disadvantaged groups’ use of the new public infrastructure integrating mixed physical and virtual spaces. The results of this study can provide valuable guidelines and a theoretical framework for the construction of new public infrastructure and the improvement of relevant systems in mixed physical and virtual spaces. We also considered the psychological characteristics of digitally disadvantaged groups, introduced psychological reactance into the model, and used social interaction anxiety as a moderator for the model, thereby further enriching the research results regarding mixed physical and virtual spaces. This study directs social and government attention towards the issues affecting digitally disadvantaged groups in the construction of inclusive smart cities, and it has practical implications for the future digitally inclusive development of cities in China and across the world.

Theoretical background and literature review

Theoretical background of UTAUT

Currently, the theories used to explore user acceptance behaviour are mainly applied separately in the online and offline fields. Theories relating to people’s offline use behaviour include the theory of planned behaviour (TPB) and the theory of reasoned action (TRA). Theories used to explore users’ online use behaviour include the technology acceptance model (TAM). Unlike previous researchers, who focused on either physical or virtual space, we focused on both. This required a theoretical lens spanning the two, and we therefore adopted the unified theory of acceptance and use of technology (UTAUT), proposed by Venkatesh et al. (2003). These theories have mainly been used to study the factors affecting user acceptance and the application of information technology. UTAUT integrates eight earlier user acceptance models, including TRA, TAM, and TPB, covering both online and offline scenarios, and it therefore met our need for a theoretical model that could encompass both physical and virtual spaces. UTAUT includes four key factors that directly affect users’ acceptance and usage behaviours: performance expectancy, facilitating conditions, social influence, and effort expectancy. Compared to other models, UTAUT has better interpretation and prediction capabilities for user acceptance behaviour (Venkatesh et al. 2003). A review of previous research showed that UTAUT has mainly been used to explore usage behaviours in online environments (Hoque and Sorwar, 2017) and technology acceptance (Heerink et al. 2010). Thus, UTAUT is effective for exploring acceptance and usage behaviours, and we proceeded on the basis that it could be applied to people’s intentions to use the new public infrastructure that integrates mixed physical and virtual spaces.

In this paper, we refine and extend UTAUT based on the characteristics of digitally disadvantaged groups, and we propose a model to explore the willingness of digitally disadvantaged groups to use the new public infrastructure integrating mixed physical and virtual spaces. We categorised the possible influences on digitally disadvantaged groups’ use of the new public infrastructure into three areas: user factors, social factors, and technical factors. Among the user factors, we explored the willingness of digitally disadvantaged groups to use the new public infrastructure based on their performance expectancy and psychological reactance, as performance expectancy is one of the UTAUT variables. To account for situations in which some users resist new technologies due to cognitive bias, we combined the finding of Hoque and Sorwar (2017) that resistance among elderly people is a key factor affecting their adoption of mobile medical services with psychological reactance theory (Miron and Brehm, 2006) and introduced psychological reactance as an independent variable. Among the social factors, we expanded the UTAUT social influence variable to include perceived institutional support and perceived marketplace influence. The new public infrastructure cannot be separated from the relevant government policies and the economic development status of the society in which it is constructed; therefore, we aimed to explore the willingness of digitally disadvantaged people to use the new public infrastructure in terms of perceived institutional support and perceived marketplace influence. Among the technical factors, we explored the intentions of digitally disadvantaged groups to use the new public infrastructure based on effort expectancy and facilitating conditions, both variables taken from UTAUT. In addition, considering that users with different levels of social interaction anxiety may differ in their intentions to use the new public infrastructure, we drew on research regarding the moderating role of consumer technological anxiety in the adoption of mobile shopping and introduced social interaction anxiety as a moderating variable (Yang and Forney, 2013). Believing that these modifications would further improve the interpretive ability of UTAUT, we considered the resulting model helpful for studying the intentions of digitally disadvantaged groups to use the new public infrastructure.

Intentions to use mixed physical and virtual spaces

Many scholars have researched the factors that affect users’ willingness to use intelligent facilities, which can be broadly divided into two categories: for-profit facilities and public welfare facilities. In the traditional business field, modern information technologies, such as the internet of things and AI, have become important means by which businesses can reduce costs and expand production. Even in traditional industries, such as agriculture (Kadylak and Cotten, 2020) and aquaculture (Cai et al. 2023), virtual technology now plays a significant role. Operators hope to use advanced technology to change traditional production and marketing models and to keep pace with new developments. However, mixed physical and virtual spaces should be inclusive of all people. Already, technological development is making it clear that no one will be able to entirely avoid mixed physical and virtual spaces. The virtualisation of public welfare facilities has gradually emerged in many areas of daily life, such as electronic health (D. D. Lee et al. 2019) and telemedicine (Werner and Karnieli, 2003). Government affairs are increasingly managed jointly in both physical and virtual spaces, resulting in an increase in e-government research (Ahn and Chen, 2022).

A review of the literature over the past decade showed that users’ willingness to use both for-profit and public welfare facilities is influenced by three sets of factors: user factors, social factors, and technical factors. First, regarding user factors, Bélanger and Carter (2008) pointed out that consumer trust in the government and in technology is a key factor affecting people’s intentions to use technology. Research on older people has shown that self-perceived ageing can have a significant impact on emotional attachment and willingness to use technology (B. A. Wang et al. 2021). Second, regarding social factors, consumers’ usage intentions may vary significantly across different market contexts (Chiu and Hofer, 2015). For example, research has shown that people’s willingness to use digital healthcare tools is influenced by the attitudes of the healthcare professionals they encounter (Thapa et al. 2021). Third, technical factors include appropriate technical designs that help consumers use facilities more easily. Yadav et al. (2019) considered technical factors, such as ease of use, quality of service provided, and efficiency parameters, in their experiments.

The rapid development of virtual technology has inevitably drawn attention away from the physical world, and most previous researchers have focused on either virtual or physical spaces. However, scholars have noted the increasing mixing of these two spaces and have begun to study the relationships between them (Aslesen et al. 2019; Cocciolo, 2010). Wang (2007) proposed enhancing virtual environments by inserting real entities. Existing research has shown that physical and virtual spaces have begun to permeate each other in both the economic and public spheres, blurring the boundaries between them (K. F. Chen et al. 2024; Paköz et al. 2022). Jakonen (2024) pointed out that, with the integration of digital technologies into city building, the role of urban space in various stakeholders’ lives needs to be fully considered. The intermingling of physical and virtual spaces had already begun to appear in people’s daily work (J. Chen et al. 2024), and the COVID-19 pandemic strengthened this integration trend (Yeung and Hao, 2024). The intermingling of virtual and physical spaces is a sign of social progress, but it poses a considerable challenge for digitally disadvantaged people. For example, people with disabilities experience infrastructure, access, regulatory, communication, and legislative barriers when using telehealth services (Annaswamy et al. 2020). Overall, however, few studies have considered the mixing of virtual and physical spaces.

People who are familiar with information technology, especially Generation Z, generally consider the integration of physical and virtual spaces convenient. However, for digitally disadvantaged groups, such ‘science fiction’-type changes can be disorientating and may undermine their quality of life. The elderly are an important group among the digitally disadvantaged groups referred to in this paper, and they have been the primary target of previous research on issues of inclusivity. Many researchers have considered the factors influencing older people’s willingness to use emerging technologies. For example, for the elderly, ease of use is often a prerequisite for enjoyment (Dogruel et al. 2015). Iancu and Iancu (2020) explored the interaction of elderly people with technology, with a particular focus on mobile device design, and emphasised that elderly people’s difficulties with technology stem from usability issues that can be addressed through improved design and appropriate training. Moreover, people with disabilities are an important group among digitally disadvantaged groups and an essential concern for the inclusive construction of cities. The rapid development of emerging technologies offers convenience to people with disabilities and has spawned many physical accessibility facilities and electronic accessibility systems (Botelho, 2021; Perez et al. 2023). Ease of use, convenience, and affordability are also key elements in enabling disadvantaged groups to use these facilities (Mogaji et al. 2023; Mogaji and Nguyen, 2021). Zander et al. (2023) explored the facilitators of and barriers to the implementation of welfare technologies for elderly people and people with disabilities, concluding that factors such as abilities, attitudes, values, and lifestyles must be considered when planning such implementations.

In summary, scholars have conducted extensive research on the factors influencing intentions to use virtual facilities. These studies have revealed the underlying logic behind people’s adoption of virtual technology and have laid the foundations for the construction of inclusive new public infrastructure. Moreover, scholars have proposed solutions to the problems experienced by digitally disadvantaged groups in adapting to virtual facilities, but most of these scholars have focused on the elderly. Furthermore, scholars have recently conducted preliminary explorations of the mixing of physical and virtual spaces. These studies provided insights for this study, enabling us to identify both relevant background factors and current developments in the integration of virtual spaces with reality. However, most researchers have viewed the development of technology from the perspective of either virtual space or physical space, and they have rarely explored technology from the perspective of mixed physical and virtual spaces. In addition, when focusing on designs for the inclusion of digitally disadvantaged groups, scholars have mainly provided suggestions for specific practices, such as improvements in technology, hardware facilities, or device interaction interfaces, while little consideration has been given to the psychological characteristics of digitally disadvantaged groups or to the overall impact of society on these groups. Finally, in studying inclusive modernisation, researchers have generally focused on the elderly or people with disabilities, with less exploration of behavioural differences caused by factors such as social anxiety. Therefore, based on UTAUT, we explored the willingness of digitally disadvantaged groups to use the new public infrastructure integrating mixed physical and virtual spaces in a Chinese context (as shown in Fig. 2 ).

Figure 2. The research model for digitally disadvantaged groups’ willingness to use the new public infrastructure integrating mixed physical and virtual spaces in the Chinese context.

Research hypotheses

User factors

Performance expectancy is defined as the degree to which an individual believes that using a system will help him or her achieve gains in job performance (Chao, 2019 ; Venkatesh et al. 2003 ). In this paper, performance expectancy refers to the extent to which digitally disadvantaged groups obtain tangible results from the use of the new public infrastructure. Since individuals have a strong desire to improve their work performance, they have strong intentions to use systems that can improve that performance. Previous studies in various fields have confirmed the view that high performance expectancy can effectively promote individuals’ sustained intentions to use technology (Abbad, 2021 ; Chou et al. 2010 ; S. W. Lee et al. 2019 ). For example, the role of performance expectancy was verified in a study on intentions to use e-government (Zeebaree et al. 2022 ). We believe that if digitally disadvantaged groups have confidence that the new public infrastructure will help them improve their lives or work performance, even in complex environments, such as mixed physical and virtual spaces, they will have a greater willingness to use it. Therefore, we developed the following hypothesis:

H1: Performance expectancy has a positive impact on digitally disadvantaged groups’ intentions to use the new public infrastructure integrating mixed physical and virtual spaces.

Brehm (1966) proposed psychological reactance theory. According to this theory, when individuals perceive that their freedom to make their own choices is under threat, a motivational state to restore that freedom is awakened (Miron and Brehm, 2006). Psychological reactance manifests in an individual’s intentional or unintentional resistance to external factors. Previous studies have shown that when individuals are in the process of using systems or receiving information, they may have cognitive biases that lead to erroneous interpretations of the external environment, resulting in psychological reactance (Roubroeks et al. 2010). Surprisingly, cognitive biases may prompt individuals to experience psychological reactance even when they are offered support with helpful intentions (Tian et al. 2020). In this paper, we define psychological reactance as the cognitive-level or psychological-level obstacles or resistance of digitally disadvantaged groups to the new public infrastructure. This resistance may be due to digitally disadvantaged groups misunderstanding the purpose or use of the new public infrastructure; for example, they may think that it will harm their self-respect or personal interests. When digitally disadvantaged groups view the new public infrastructure as a threat to their status or freedom to make their own decisions, they may develop resistance to its use. Therefore, psychological reactance cannot be ignored as an important factor potentially affecting digitally disadvantaged groups’ intentions to use the new public infrastructure. Hence, we developed the following hypothesis:

H2: Psychological reactance has a negative impact on digitally disadvantaged groups’ intentions to use the new public infrastructure integrating mixed physical and virtual spaces.

Social factors

In many countries, the main providers of public infrastructure are government and public institutions (Susilawati et al. 2010). Government decision-making is generally based on laws or government regulations (Acharya et al. 2022). Government decision-making procedures affect not only the builders of infrastructure but also the intentions of users. In daily life, individuals and social organisations tend to abide by and maintain social norms to ensure that their behaviours are socially attractive and acceptable (Bygrave and Minniti, 2000; Martins et al. 2019). For example, national financial policies influence the marketing effectiveness of enterprises (Chen et al. 2021). Therefore, we believe that perceived institutional support is a key element influencing the intentions of digitally disadvantaged groups to use the new public infrastructure. In this paper, perceived institutional support refers to digitally disadvantaged groups’ perception of state or government policy support for using the new public infrastructure, including institutional norms, laws, and regulations. Existing institutions have mainly been designed around public infrastructure that exists in physical space. We hope to explore whether perceived institutional support affects digitally disadvantaged groups’ intentions to use the new public infrastructure that integrates mixed physical and virtual spaces. Thus, we formulated the following hypothesis:

H3: Perceived institutional support has a positive impact on digitally disadvantaged groups’ intentions to use the new public infrastructure integrating mixed physical and virtual spaces.

Perceived marketplace influence is defined as actions or decisions that affect the market behaviour of consumers and organisations (Joshi et al. 2021; Leary et al. 2014). In this paper, perceived marketplace influence is defined as the effect of others’ use of the new public infrastructure on the intentions of digitally disadvantaged groups to use it. Perceived marketplace influence increases consumers’ perceptions of market dynamics and their sense of control through the influence of other participants in the marketplace (Leary et al. 2019). Scholars have explored the impact of perceived marketplace influence on consumers’ purchase and use intentions in relation to fair trade and charity (Leary et al. 2019; Schneider and Leonard, 2022). Schneider and Leonard (2022) claimed that if consumers believe that their mask-wearing behaviour will motivate others around them to follow suit, then this belief will in turn motivate them to wear masks. Similarly, when digitally disadvantaged people see the people around them using the new public infrastructure, this creates an invisible market influence that affects their ability and motivation to try using the infrastructure themselves. Therefore, we developed the following hypothesis:

H4: Perceived marketplace influence has a positive impact on digitally disadvantaged groups’ intentions to use the new public infrastructure integrating mixed physical and virtual spaces.

Technical factors

Venkatesh et al. ( 2003 ) defined effort expectancy as the ease with which individuals can use a system. According to Tam et al. ( 2020 ), effort expectancy positively affects individuals’ performance expectancy and their sustained intentions to use mobile applications. In this paper, effort expectancy refers to the ease of use of the new public infrastructure for digitally disadvantaged groups: the higher the level of innovation and the more steps involved in using a facility, the poorer the user experience and the lower the utilisation rate (Venkatesh and Brown, 2001 ). A study on the use of AI devices for service delivery noted that the higher the level of anthropomorphism, the higher the cost of effort required by the customer to use a humanoid AI device (Gursoy et al. 2019 ). In mixed physical and virtual spaces, the design and use of new public infrastructure may become increasingly complex, negatively affecting the lives of digitally disadvantaged groups. We believe that the simpler the new public infrastructure, the more it will attract digitally disadvantaged groups to use it, while also enhancing their intentions to use it. Therefore, we formulated the following hypothesis:

H5: Effort expectancy has a positive impact on digitally disadvantaged groups’ intentions to use the new public infrastructure integrating mixed physical and virtual spaces.

Venkatesh et al. (2003) defined facilitating conditions as the degree to which an individual believes that an organisation and its technical infrastructure exist to support the use of a system. In this paper, facilitating conditions refer to the external conditions that support digitally disadvantaged groups in using the new public infrastructure, including resources, knowledge bases, skills, etc. According to Zhong et al. (2021), facilitating conditions can affect users’ attitudes towards the use of face recognition payment systems and, further, affect their intentions to use them. Moreover, scholars have shown that facilitating conditions significantly promote people’s intentions to use e-learning systems and e-government (Abbad, 2021; Purohit et al. 2022). Currently, the new public infrastructure involves mixed physical and virtual spaces, and external facilitating conditions, such as a ‘knowledge salon’ or a training session, can significantly promote digitally disadvantaged groups’ willingness and intentions to use the infrastructure. Therefore, we developed the following hypothesis:

H6: Facilitating conditions have a positive impact on digitally disadvantaged groups’ intentions to use the new public infrastructure integrating mixed physical and virtual spaces.

Moderator variable

Magee et al. ( 1996 ) claimed that social interaction anxiety is an uncomfortable emotion that some people experience in social situations, leading to avoidance, a desire for solitude, and a fear of criticism. In this paper, social interaction anxiety refers to the worries and fears of digitally disadvantaged groups about the social interactions they will be exposed to when using the new public infrastructure. Research has confirmed that people with high levels of dissatisfaction with their own bodies are more anxious in social situations (Li Mo and Bai, 2023 ). Moreover, people with high degrees of social interaction anxiety may feel uncomfortable in front of strangers or when observed by others (Zhu and Deng, 2021 ). Digitally disadvantaged groups usually have some physiological inadequacies and may be rejected by ‘normal’ groups. Previous studies have shown that the pain caused by social exclusion is positively correlated with anxiety (Davidson et al. 2019 ). Digitally disadvantaged groups may have higher degrees of dissatisfaction with their own physical abilities, which may exacerbate any social interaction anxiety they already have. We believe that high social interaction anxiety is a common characteristic of digitally disadvantaged groups, defining them as ‘different’ from other groups.

In mixed physical and virtual spaces, if the design of the new public infrastructure is not friendly and does not help digitally disadvantaged groups use it easily, their perceived social exclusion is likely to increase, resulting in a heightened sense of anxiety. However, compared with face-to-face and offline social communication, online platforms offer convenience in terms of both communication method and duration (Ali et al. 2020). Therefore, people with a high degree of social interaction anxiety frequently prefer and are likely to choose online social communication (Hutchins et al. 2021). Digitally disadvantaged groups, however, may be unable to avoid social interaction by using the facilities offered in virtual spaces. We therefore believe that the influencing factors may have different effects on intentions to use the new public infrastructure, depending on the level of social interaction anxiety experienced. Accordingly, we predicted the following:

H7: Social interaction anxiety has a moderating effect on each path.

Research methodology

Research background and cases

To better demonstrate the phenomenon of the new public infrastructure integrating mixed physical and virtual spaces, we considered the cases of ‘Zheli Office’ (as shown in Fig. 3) and Alipay (as shown in Fig. 4), which illustrate the two domains of government affairs and daily life affairs that greatly affect residents’ daily lives. Examining the functions of ‘Zheli Office’ and Alipay in mixed physical and virtual spaces allowed us to provide concrete examples of the new public infrastructure integrating mixed physical and virtual spaces.

Figure 3. ‘Zheli Office’, a comprehensive government application that integrates government services through digital technology, transferring some processes from offline to online.

Figure 4. Alipay, which supports the integration of various local services, such as living payments and other convenience services, and has gradually become Zhejiang’s largest living-service platform.

‘Zheli Office’ provides Zhejiang residents with a channel to handle their tax affairs. Residents who need to manage their tax affairs can choose the corresponding tax department through ‘Zheli Office’ and schedule the date and time for offline processing. Residents can also upload tax-related materials directly to ‘Zheli Office’ to submit them to the tax department for preapproval. Residents only need to present the vouchers generated by ‘Zheli Office’ to the tax department at the scheduled time to manage tax affairs and undergo final review. By mitigating long waiting times and tedious tax material review steps through the transfer of processes from physical spaces to virtual spaces, ‘Zheli Office’ greatly optimises the tax declaration process and saves residents time and effort in tax declaration.

Alipay provides residents with a channel to rent shared bicycles. Residents who want to rent bicycles can enter their personal information on Alipay in advance and provide a guarantee (an Alipay credit score or deposit payment). When renting a shared bicycle offline, residents only need to scan the QR code on the bike through Alipay to unlock and use it. When returning the bike, residents can also click the return button to automatically lock the bike and pay the fee anytime and anywhere. By automating leasing procedures and fee settlement in virtual spaces, Alipay avoids the tedious operations that residents experience when renting bicycles in physical stores.

Through the preceding two examples, we demonstrate the concrete form that the integration of virtual and physical spaces takes. Residents’ government and life affairs, such as tax declarations, certificate processing, transportation, shopping, and various other matters, all require public infrastructure support. With the emergence of new digital trends in residents’ daily lives, mixed physical and virtual spaces have produced a public infrastructure that can support residents’ daily activities in those spaces. Due to the essential differences between public infrastructure involving mixed physical and virtual spaces and traditional physical or virtual public infrastructure, we propose a new concept, the new public infrastructure, defined as ‘a public infrastructure that supports residents in conducting daily activities in mixed physical and virtual spaces’. It is worth noting that the new public infrastructure may encompass not only the virtual spaces provided by digital applications but also the physical spaces provided by machines capable of receiving digital messages, such as smart screens, scanners, and so forth.

The UN Sustainable Development Goal Report highlights that human society needs to build sustainable cities and communities that do not sacrifice the equality of some people. Digitally disadvantaged groups should not be excluded from the sustainable development of cities due to the increasing digitalisation trend because everyone should enjoy the convenience of the new public infrastructure provided by cities. Hence, ensuring that digitally disadvantaged groups can easily and comfortably use the new public infrastructure will help promote the construction of smart cities, making them more inclusive and universal. It will also promote the development of smart cities in a more equal and sustainable direction, ensuring that everyone can enjoy the benefits of urban development. Therefore, in this article, we emphasise the importance of digitally disadvantaged groups in the construction of sustainable smart cities. Through their participation and feedback, we can build more inclusive and sustainable smart cities in the future.

Research design

The aim of this paper was to explore the specific factors that influence the intentions of digitally disadvantaged groups to use the new public infrastructure integrating mixed physical and virtual spaces, and to provide a rational explanation for the role of each factor. To achieve this goal, we first reviewed numerous relevant academic papers. This formed the basis of our research assumptions and helped determine the measurement items we included. Second, we collected data through a questionnaire survey and then analysed the data using partial least squares structural equation modelling (PLS-SEM) to explore the influence of the different factors on digitally disadvantaged groups’ intentions to use the new public infrastructure. Finally, we considered in depth the mechanisms by which the various factors influenced digitally disadvantaged groups’ intentions to use mixed physical and virtual spaces.

We distributed a structured questionnaire to collect data for the study. To ensure the reliability and validity of the questionnaire, we based the item development on scales used in previous studies (as shown in Appendix A). The first part of the questionnaire concerned the participants’ intentions to use the new public infrastructure. Responses to this part were given on a seven-point Likert scale measuring agreement or disagreement with various statements, with 1 indicating ‘strong disagreement’ and 7 indicating ‘strong agreement’. In addition, we designed cumulative scoring questions to measure the participants’ social interaction anxiety according to Fergus’s Social Interaction Anxiety Scale (Fergus et al. 2012). The second part of the questionnaire concerned the demographic characteristics of the participants, including but not limited to gender, age, and education level. Participants were informed that completing the survey was voluntary and that they had the right to refuse or withdraw at any time. They were also informed that the researchers would not collect any personal information that would make it possible to identify them. Only after we had obtained the participants’ consent did we commence the questionnaire survey and data collection. Since the concept of the new public infrastructure referred to in this study is quite abstract and therefore difficult for digitally disadvantaged groups to understand and perceive, we simplified it to ‘an accessible infrastructure’ and informed the respondents about typical cases and the relevant context of this study before they began to complete the questionnaire.

Once the questionnaire design was finalised, we conducted a pretest to ensure that the questions met the basic requirements of reliability and validity and that the participants could accurately understand them. In the formal survey stage, we distributed the online questionnaire to digitally disadvantaged groups based on the principle of simple random sampling and collected data through the Questionnaire Star platform. Our sampling principles were as follows: first, the respondents had to belong to digitally disadvantaged groups and have experienced digital divide problems; second, they had to own at least one smart device and have access to the new public infrastructure, such as via ‘Zheli Office’ or Alipay; and third, they had to have used government or daily life services on ‘Zheli Office’ or Alipay at least once in the past three months. After eliminating invalid questionnaires, 337 valid completed questionnaires remained. The demographic characteristics of the participants are shown in Table 1. In terms of gender, 54.30% of the participants were male and 45.70% were female. In terms of age, 64.09% of the participants were aged 18–45 years. In terms of social interaction anxiety, the data showed that 46.59% of the participants had low social interaction anxiety and 53.41% had high social interaction anxiety.

Data analysis

PLS-SEM imposes few restrictions on the measurement scale, sample size, and residual distribution (Ringle et al. 2012). Moreover, the environment in which the research object was located was relatively new, and we added two context-specific variables, psychological reactance and perceived institutional support, to the model; PLS-SEM is therefore well suited to exploratory research on this newly constructed theory and research framework. Following established practice, the data analysis was divided into two stages: 1) the measurement model was used to evaluate the reliability and validity of the measures, and 2) the structural model was used to test the study hypotheses by examining the relationships between the variables.

Measurement model

First, we tested the reliability of the model by evaluating the reliability of the constructs. As shown in Table 2, the Cronbach’s alpha (CA) values for this study ranged from 0.858 to 0.901, so even the lowest value exceeded the commonly accepted threshold of 0.7 (Jöreskog, 1971). The composite reliability (CR) scores ranged from 0.904 to 0.931 and were therefore also above the 0.7 threshold (Bagozzi and Phillips, 1982) (see Table 2).
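For readers who want the computational detail, the conventional formulas behind these two statistics are given below (this is the standard formulation, with standardised indicators assumed for the CR expression; k denotes the number of indicators for a construct, \(\sigma^2_{Y_i}\) the variance of indicator i, \(\sigma^2_X\) the variance of the summed scale, and \(\lambda_i\) the outer loading of indicator i):

$$\alpha=\frac{k}{k-1}\left(1-\frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),\qquad \mathrm{CR}=\frac{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}+\sum_{i=1}^{k}\left(1-\lambda_i^{2}\right)}$$

Both statistics approach 1 as the indicators of a construct become more internally consistent, which is why 0.7 serves as the usual lower bound for acceptable reliability.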

We then assessed validity. The test for construct validity included convergent validity and discriminant validity. Convergent validity was mainly verified by the average variance extracted (AVE) value, for which the recommended minimum is 0.5 (Kim and Park, 2013). In this study, the AVE values for all constructs far exceeded this value (the minimum AVE was 0.702; see Table 2), showing that the constructs of the model were reliable. The Fornell–Larcker criterion is commonly used to evaluate discriminant validity; that is, the square root of each construct’s AVE should exceed its correlations with the other constructs, meaning that each construct explains the variance of its own indicators best (Hair et al. 2014), as shown in Table 3. The validity of the measurement model was further evaluated by calculating the cross-loadings of the reflective constructs. Table 4 shows that, compared with the other constructs in the structural model, each indicator of the reflective measurement model loaded highest on its own latent construct (Hair et al. 2022), indicating that all results met the cross-loading criterion.
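In the same notation (again the standard formulation rather than one taken from the study’s materials), the AVE of construct j with \(k_j\) standardised indicators, and the Fornell–Larcker condition checked in Table 3, can be written as:

$$\mathrm{AVE}_j=\frac{1}{k_j}\sum_{i=1}^{k_j}\lambda_{ij}^{2},\qquad \sqrt{\mathrm{AVE}_j}>|r_{jk}|\ \ \text{for all}\ k\neq j,$$

where \(\lambda_{ij}\) is the loading of indicator i on construct j and \(r_{jk}\) is the correlation between constructs j and k. Intuitively, a construct should share more variance with its own indicators than with any other construct.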

In addition, we used the heterotrait-monotrait (HTMT) ratio of correlations to analyse discriminant validity (Henseler et al. 2015 ). Generally, an HTMT value greater than 0.85 indicates that there are potential discriminant validity risks (Hair et al. 2022 ), but Table 5 shows that the HTMT ratios of the correlations in this study were all lower than this value (the maximum value was 0.844).
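The HTMT statistic compares the correlations between items of different constructs with the correlations between items of the same construct. In the usual formulation of Henseler et al. (2015), which we assume underlies Table 5, for constructs i and j with \(K_i\) and \(K_j\) indicators:

$$\mathrm{HTMT}_{ij}=\frac{\frac{1}{K_iK_j}\sum_{g=1}^{K_i}\sum_{h=1}^{K_j}r_{i_g,j_h}}{\left(\frac{2}{K_i(K_i-1)}\sum_{g<h}r_{i_g,i_h}\cdot\frac{2}{K_j(K_j-1)}\sum_{g<h}r_{j_g,j_h}\right)^{1/2}}$$

That is, the average heterotrait-heteromethod correlation is divided by the geometric mean of the two average monotrait-heteromethod correlations, so values well below 0.85 indicate that the two constructs are empirically distinguishable.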

Structural model

Figure 5 presents the evaluation results for the structural model for the whole sample. The R² value for the structural model in this study was 0.740; that is, the model explained 74.00% of the variance in intention to use. The first step was to ensure that there was no significant collinearity between the predictor constructs; otherwise, the analysis would contain redundancy (Hair et al. 2019). All VIF values in this study were between 1.743 and 2.869 and therefore below the threshold of 3.3 for the collinearity test (Hair et al. 2022), indicating that the path coefficients were not distorted by collinearity. This also suggests that the model had a low probability of common method bias.
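As a reminder of what the collinearity diagnostic measures (a standard definition, not specific to this study), the variance inflation factor for predictor construct j is

$$\mathrm{VIF}_j=\frac{1}{1-R_j^{2}},$$

where \(R_j^2\) is the coefficient of determination obtained by regressing the scores of construct j on the scores of all other predictor constructs. A value of 1 indicates no collinearity, and values above the chosen threshold (3.3 here) signal problematic overlap between predictors.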

Figure 5. Evaluation results for the structural model.

As shown in Fig. 5, performance expectancy (β = 0.505, p < 0.001), perceived institutional support (β = 0.338, p < 0.001), perceived marketplace influence (β = 0.190, p < 0.001), effort expectancy (β = 0.176, p < 0.001) and facilitating conditions (β = 0.108, p < 0.001) all had significant positive effects on intention to use. Moreover, the relationship between psychological reactance (β = −0.271, p < 0.001) and intention to use was negative and significant. Therefore, all the hypothesised paths in this paper, except those involving the moderator variable, were supported.

Multi-group analysis

To study the moderating effect of social interaction anxiety on the relationships between the independent variables and the dependent variable, we followed Henseler et al. (2009), who recommended using a multigroup analysis (MGA). In this study, we used MGA to analyse the moderating effect of different levels of social interaction anxiety. We designed six items for social interaction anxiety (as shown in Appendix A). The participants’ responses to these six items were summed: total scores of 6–20 indicated low social interaction anxiety, and scores of 28–42 indicated high social interaction anxiety. Questionnaires with scores of 21–27 were considered neutral and excluded from the analysis involving social interaction anxiety. Based on a multigroup confirmatory factor analysis, we established configural invariance, compositional invariance, and the equality of composite variances and means (Hair et al. 2019). As shown in Formula 1 (see below), we used an independent-samples t-test as the significance test, with a p-value below 0.05 indicating the significance of the parameters.
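The extracted text does not reproduce Formula 1 itself. A widely used parametric form for comparing PLS path coefficients between two groups (Keil et al. 2000) is shown below, and we assume Formula 1 corresponds to it; \(b^{(1)}\) and \(b^{(2)}\) are the group-specific path coefficients, \(SE_1\) and \(SE_2\) their bootstrap standard errors, and \(n_1\) and \(n_2\) the group sizes:

$$t=\frac{b^{(1)}-b^{(2)}}{\sqrt{\frac{(n_1-1)^{2}}{n_1+n_2-2}SE_1^{2}+\frac{(n_2-1)^{2}}{n_1+n_2-2}SE_2^{2}}\cdot\sqrt{\frac{1}{n_1}+\frac{1}{n_2}}},\qquad df=n_1+n_2-2$$

The grouping rule and the test statistic are simple enough to sketch in a few lines of Python; the function and variable names below are illustrative rather than taken from the study:

```python
import numpy as np
from scipy import stats

def anxiety_group(item_scores):
    """Assign a respondent to an anxiety group from six 1-7 Likert items
    (totals range from 6 to 42), using the paper's cut-offs."""
    total = sum(item_scores)
    if 6 <= total <= 20:
        return "low"
    if 28 <= total <= 42:
        return "high"
    return None  # totals of 21-27 are neutral and excluded from the MGA

def keil_t_test(b1, se1, n1, b2, se2, n2):
    """Parametric comparison of two PLS path coefficients (Keil et al. 2000).

    b1, b2   -- path coefficients estimated separately in each group
    se1, se2 -- bootstrap standard errors of those coefficients
    n1, n2   -- group sample sizes
    Returns the t statistic and the two-sided p-value."""
    pooled = np.sqrt((n1 - 1) ** 2 / (n1 + n2 - 2) * se1 ** 2
                     + (n2 - 1) ** 2 / (n1 + n2 - 2) * se2 ** 2)
    t = (b1 - b2) / (pooled * np.sqrt(1.0 / n1 + 1.0 / n2))
    df = n1 + n2 - 2
    p = 2 * stats.t.sf(abs(t), df)  # two-sided p-value
    return t, p
```

For example, keil_t_test(0.202, 0.07, 180, -0.129, 0.15, 157) would compare the performance expectancy path between hypothetical high- and low-anxiety subsamples; the standard errors in this call are invented for illustration.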

As shown in Table 6, among the social factors, the p-value for perceived institutional support in relation to intention to use was 0.335, which did not reach significance. This showed that there were no differences between the different degrees of social interaction anxiety. Among the technical factors, the p-value for facilitating conditions in relation to intention to use was 0.054, which again did not reach significance, showing no differences between the different levels of social interaction anxiety. However, the p-values for performance expectancy, psychological reactance, perceived marketplace influence, and effort expectancy in relation to intention to use were all less than 0.05 and therefore significant. This revealed that the degree of social interaction anxiety significantly affected these relationships and that social interaction anxiety moderated the effects of some of the independent variables.

Next, we considered the path coefficients and p-values for the high and low social anxiety groups, as shown in Table 6. First, performance expectancy had significantly different effects on intention to use at different levels of social anxiety: the path was insignificant for low social anxiety (β = −0.129, p = 0.394) but significant for high social anxiety (β = 0.202, p = 0.004). This shows that performance expectancy had a greater influence on intention to use among participants with high social anxiety than among those with low social anxiety. Second, psychological reactance showed significant differences in its effect on intention to use at different degrees of social anxiety, being insignificant for low social anxiety (β = 0.184, p = 0.065) and significant for high social anxiety (β = −0.466, p = 0.000). Third, perceived marketplace influence had significantly different effects on intention to use at different levels of social anxiety: it had a significant effect at low social anxiety levels (β = 0.312, p = 0.001) but not at high social anxiety levels (β = 0.085, p = 0.189). Finally, effort expectancy had significantly different effects on intention to use, being insignificant at a low social anxiety level (β = −0.058, p = 0.488) but significant at a high social anxiety level (β = 0.326, p = 0.000). Therefore, different degrees of social interaction anxiety had significantly different effects on the paths from performance expectancy, psychological reactance, perceived marketplace influence, and effort expectancy.

Discussion

Compared with previous studies, this study constituted a preliminary but groundbreaking exploration of mixed physical and virtual spaces, focusing on the inclusivity problems encountered by digitally disadvantaged groups in these spaces. We examined performance expectancy, psychological reactance, perceived institutional support, perceived marketplace influence, effort expectancy, and facilitating conditions as the six factors, with intention to use as the measure of the perceived value of the new public infrastructure. However, digitally disadvantaged groups, depending on their own characteristics and social influences, may respond differently from the general population in social interactions. Therefore, we added social interaction anxiety to the model as a moderating variable, in line with the assumed psychological characteristics of digitally disadvantaged groups. The empirical results revealed strong correlations between the influencing factors and intention to use, showing that the model has good applicability to mixed physical and virtual spaces.

According to the empirical results, performance expectancy has a significant positive impact on intention to use, suggesting that the mixing of the virtual and the real creates usage issues and cognitive difficulties for digitally disadvantaged groups. However, if the new public infrastructure can capitalise on the advantages of blended virtual and physical spaces, it could help users build confidence in its use, which would improve their intentions to use it. Furthermore, the promoting effect of performance expectancy on intention to use is stronger among users with high social interaction anxiety. In most cases, social interaction anxiety stems from self-generated avoidance, isolation, and fear of criticism (Schultz and Heimberg, 2008). This may result in highly anxious digitally disadvantaged groups being reluctant to engage with others when using public facilities (Mulvale et al. 2019; Schou and Pors, 2019). However, the new public infrastructure is often unattended, which could be an advantage for users with high social anxiety. Therefore, the effect of performance expectancy in promoting intentions to use is more significant in this group.

We also found that the psychological reactance of digitally disadvantaged groups had a negative impact on their intentions to use technology in mixed physical and virtual spaces. Social interaction anxiety moderated this effect, such that the negative effect of psychological reactance on intention to use the new public infrastructure was more pronounced in the group with high social interaction anxiety. Facilities involving social or interactive factors may make users with high social interaction anxiety feel that their autonomy is, to some extent, being violated, thus triggering subconscious resistance. The communication anxiety of digitally disadvantaged groups stems not only from the new public infrastructure itself but also from the environment in which it is used (Fang et al. 2019). Complex, mixed physical and virtual spaces can disrupt the habits that digitally disadvantaged groups have developed in purely physical spaces, resulting in greater anxiety (Hu et al. 2022), and groups with high levels of social anxiety prefer to act alone in order to maintain their independence. Therefore, a high degree of social interaction anxiety will strengthen psychological reactance towards using the new public infrastructure.

The results of this paper shed further light on the role of social factors. In particular, the relationship between perceived institutional support and intention to use reflects the fact that perceived institutional support plays a role in promoting digitally disadvantaged groups’ intentions to use the new public infrastructure. This indicates that promotion measures need to be introduced by the government and public institutions if digitally disadvantaged groups are to accept the new public infrastructure. The development of a new public infrastructure integrating mixed physical and virtual spaces requires a high level of involvement from government institutions to facilitate the inclusive development of sustainable smart cities (Khan et al. 2020). An interesting finding of this study was that there were no significant differences between the effects of high and low levels of social interaction anxiety on the relationship between perceived institutional support and intention to use. This may be because social interaction anxiety mainly operates within individuals’ close microenvironments, whereas the policies and institutional norms underlying perceived institutional support tend to act at the macro level (Chen and Zhang, 2021; Mora et al. 2023); consequently, the effect of perceived institutional support on intention to use does not differ significantly across levels of social interaction anxiety.

We also found that digitally disadvantaged groups with low social interaction anxiety were more influenced by perceived marketplace influence. Consequently, they were more willing to use the new public infrastructure. When the market trend is to aggressively build a new public infrastructure, companies will accelerate their infrastructure upgrades to keep up with the trend (Hu et al. 2023 ; Liu and Zhao, 2022 ). Companies are increasingly incorporating virtual objects into familiar areas, forcing users to embrace mixed physical and virtual spaces. In addition, it is inevitable that digitally disadvantaged groups will have to use the new public infrastructure due to the market influence of people around them using this infrastructure to manage their government or life issues. When digitally disadvantaged groups with low levels of social interaction anxiety use the new public infrastructure, they are less likely to feel fearful and excluded (Kaihlanen et al. 2022 ) and will tend to be positively influenced by the use behaviours of others to use the new public infrastructure themselves (Troisi et al. 2022 ). The opposite is true for groups with high social interaction anxiety, which leads to significant differences in perceived marketplace influence and intentions to use among digitally disadvantaged groups with different levels of social interaction anxiety.

Existing mixed physical and virtual spaces exhibit exceptional technical complexity, and the results of this study affirm the importance of technical factors in affecting intentions to use. In this paper, we emphasised effort expectancy as the ease of use of the new public infrastructure (Venkatesh et al. 2003 ), which had a significant effect on digitally disadvantaged groups with high levels of social interaction anxiety but no significant effect on those with low levels of social interaction anxiety. Digitally disadvantaged groups with high levels of social interaction anxiety are likely to have a stronger sense of rejection due to environmental pressures if the new public infrastructure is too cumbersome to run or operate; they may therefore prefer using simple facilities and services. Numerous scholars have proven in educational (Hu et al. 2022 ), medical (Bai and Guo, 2022 ), business (Susanto et al. 2018 ), and other fields that good product design promotes users’ intentions to use technology (Chen et al. 2023 ). For digitally disadvantaged groups, accessible and inclusive product designs can more effectively incentivise their intentions to use the new public infrastructure (Hsu and Peng, 2022 ).

Facilitating conditions are technical factors that represent facility-related support services. The study results showed a significant positive effect of facilitating conditions on intention to use. This result is consistent with the results of previous studies regarding physical space. Professional consultation (Vinnikova et al. 2020 ) and training (Yang et al. 2023 ) on products in conventional fields can enhance users’ confidence, which can then be translated into intentions to use (Saparudin et al. 2020 ). Although the form of the new public infrastructure has changed in the direction of integration, its target object is still the user in physical space. Therefore, better facilitating conditions can enhance users’ sense of trust and promote their intentions to use (Alalwan et al. 2017 ; Mogaji et al. 2021 ). Concerning integration, because the new public infrastructure can assume multiple forms, it is difficult for digitally disadvantaged groups to know whether a particular infrastructure has good facilitating conditions. It is precisely such uncertainties that cause users with high social interaction anxiety to worry that they will be unable to use the facilities effectively. They may then worry that they will be burdened by scrutiny from strangers, causing resistance. Even when good facilitating conditions exist, groups with high social interaction anxiety do not necessarily intend to use them. Therefore, there were no significant differences between the different levels of social interaction anxiety in terms of facilitating conditions and intention to use them.

Theoretical value

In this study, we mainly examined the factors influencing digitally disadvantaged groups’ intentions to use the new public infrastructure consisting of mixed physical and virtual spaces. The empirical results of this paper make theoretical contributions to the inclusive construction of mixed spaces in several areas.

First, based on an understanding of urban development involving a deep integration of physical space with virtual space, we contextualise virtual space within the parameters of public infrastructure to shape the concept of a new public infrastructure. At the same time, by including the service system, the virtual community, and other non-physical factors in the realm where the virtual and the real are integrated, we form a concept of mixed physical and virtual spaces, which expands the scope of research related to virtual and physical spaces and provides new ideas for relevant future research.

Second, this paper presents a preliminary investigation of inclusion in the construction of the new public infrastructure and innovatively examines the factors affecting digitally disadvantaged groups' willingness to use the mixed infrastructure in terms of individual, social, and technical factors. Moreover, on the grounds that social interaction anxiety matches the psychological characteristics of digitally disadvantaged groups, we introduce it into this research field and distinguish the behaviour of subjects with high social interaction anxiety from that of those with low social interaction anxiety. From the perspective of digitally disadvantaged groups, this reveals the moderating effect of social interaction anxiety on users' psychology and behaviours. These preliminary findings may draw greater attention to digitally disadvantaged groups and prompt more studies on inclusion.

In addition, while conducting background research, we visited public welfare organisations and viewed government service lists to obtain first-hand information about digitally disadvantaged groups. Through our paper, we encourage the academic community to pay greater attention to theoretical research on digitally disadvantaged groups in the hope that deepening and broadening such research will promote the inclusion of digitally disadvantaged groups in the design of public infrastructure.

Practical value

Based on a large quantity of empirical research data, we explored the digital integration factors that affect users' intentions to use the new public infrastructure. To some extent, this provides new ideas and development directions for inclusive smart city construction. Inclusion efforts in existing cities mainly concern the improvement of specific technologies, but the results of this study show that technological factors are only part of the picture. The government should introduce policies to adapt the new public infrastructure to digitally disadvantaged groups promptly, and the legislature should enact appropriate laws. The results can also guide the design of mixed physical and virtual spaces for the new public infrastructure: enterprises can draw on them to identify inconveniences in existing facilities, optimise service processes, and improve the inclusiveness of urban institutions. Furthermore, attention should be paid to the moderating role of social interaction anxiety. Inclusive urban construction should not only be physical but should also attend closely to the inner experiences of digitally disadvantaged groups. The government and enterprises should consider the specific requirements of people with high social interaction anxiety, for example by simplifying enquiry processes in their facilities or building psychological comfort measures into those processes.

Limitations and future research

Due to resource and time limitations, this paper has some shortcomings. First, we considered a broad range of digitally disadvantaged groups and conducted a forward-looking exploratory study. Because we collected data through an online questionnaire, the range of volunteers who responded was restricted: only participants who met at least one of the screening conditions were identified as members of digitally disadvantaged groups and invited to the follow-up survey. To spare participants painful introspection and recollections of their disabilities or related conditions, and to avoid biasing the survey data, we made no detailed distinction between the participants' degrees of impairment or the reasons for it. This choice was made for two reasons: first, a questionnaire that was too detailed might have infringed on the participants' privacy rights; and second, because little research has been conducted on inclusiveness in relation to mixed physical and virtual spaces, this work was necessarily exploratory. We therefore focused on digitally disadvantaged groups' overall intentions to use the new public infrastructure. Future research could focus on digitally disadvantaged individuals who share the same impairment, or further increase the sample size to investigate participants' intentions to use the new public infrastructure in more detail.

Second, different countries have different economic development statuses and numbers of digitally disadvantaged groups. Our study mainly concerned the willingness of digitally disadvantaged groups to use the new public infrastructure in China. Therefore, in the future, the intentions of digitally disadvantaged groups to use new public infrastructures involving mixed physical and virtual spaces can be further explored in different national contexts. Furthermore, in addition to the effects of social interaction anxiety examined in this paper, future researchers could consider other moderators associated with individual differences, such as age, familiarity with technology, and disability status. We also call for more scholars to explore digitally disadvantaged groups’ use of the new public infrastructure to promote inclusive smart city construction and sustainable social development.

Previous researchers have explored users' intentions to use virtual technology services and analysed the factors that influence those intentions (Akdim et al. 2022; Liébana-Cabanillas et al. 2020; Nguyen and Dao, 2024). However, they have mainly focused on single virtual or physical spaces (Scavarelli et al. 2021; Zhang et al. 2020); the topic has rarely been discussed in relation to mixed physical and virtual spaces. In addition, previous studies have mainly adopted a technology perspective (Buckingham et al. 2022; Carney and Kandt, 2022), largely ignoring the psychological characteristics of digitally disadvantaged groups and the effect of the overall social environment on their intentions to use. To fill this gap, we constructed a UTAUT-based model of intentions to use the new public infrastructure that mixes physical and virtual spaces, examining the mechanisms that influence digitally disadvantaged groups' use from the perspectives of individual, social, and technical factors. We processed and analysed 337 valid samples using PLS-SEM. The results showed significant correlations between the six user factor variables and intention to use the new public infrastructure. In addition, for digitally disadvantaged groups, different degrees of social interaction anxiety significantly changed the impacts of performance expectancy, psychological reactance, perceived marketplace influence, and effort expectancy on intention to use, while the impacts of perceived institutional support and facilitating conditions did not differ.
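To make the moderation result concrete, here is a minimal sketch in Python of the underlying logic. It is not the authors' PLS-SEM pipeline: it fits an ordinary least squares model with an interaction term on simulated data, and the variable names (performance, anxiety_high, intention) and coefficients are hypothetical.

```python
# Illustrative sketch only: a simplified moderation check using OLS on
# simulated data. The paper itself uses PLS-SEM with multi-group comparison;
# this stand-in shows the logic of testing whether social interaction anxiety
# changes the effect of performance expectancy on intention to use.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 337  # matches the paper's sample size; the data themselves are simulated

performance = rng.normal(0, 1, n)      # performance expectancy (standardised)
anxiety_high = rng.integers(0, 2, n)   # 1 = high social interaction anxiety group
# Simulate a weaker performance-expectancy effect in the high-anxiety group
intention = (0.6 * performance
             - 0.4 * performance * anxiety_high
             + rng.normal(0, 1, n))

df = pd.DataFrame({"intention": intention,
                   "performance": performance,
                   "anxiety_high": anxiety_high})

# A significant interaction term indicates that the two anxiety groups differ
# in how strongly performance expectancy predicts intention to use.
model = smf.ols("intention ~ performance * anxiety_high", data=df).fit()
print(model.summary().tables[1])
```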

In terms of theoretical value, we build on previous scholarly research on the conceptualisation of new public infrastructures and mixed physical and virtual spaces (Aslesen et al. 2019; Cocciolo, 2010), arguing that user, social, and technological dimensions influence the use of new public infrastructures by digitally disadvantaged groups in mixed physical and virtual spaces, and that social interaction anxiety plays a moderating role. This study also prospectively explores the new phenomenon of digitally disadvantaged groups using new public infrastructures in mixed physical and virtual spaces, paving the way for future theoretical and empirical work in the field. In terms of practical value, the research findings will be helpful in promoting effective government policies and corporate designs and in prompting the development of a new public infrastructure that better meets the needs of digitally disadvantaged groups. Moreover, this study will help direct social and government attention to the problems digitally disadvantaged groups face in using new public infrastructures. It has significant implications for the future development of smart cities and urban digital inclusiveness in China and worldwide.

Data availability

The datasets generated during and/or analysed during the current study are not publicly available due to the confidentiality of the respondents’ information but are available from the corresponding author upon reasonable request for academic purposes only.

Abbad MMM (2021) Using the UTAUT model to understand students’ usage of e-learning systems in developing countries. Educ. Inf. Technol. 26(6):7205–7224. https://doi.org/10.1007/s10639-021-10573-5

Acharya B, Lee J, Moon H (2022) Preference heterogeneity of local government for implementing ICT infrastructure and services through public-private partnership mechanism. Socio-Economic Plan. Sci. 79(9):101103. https://doi.org/10.1016/j.seps.2021.101103

Ahn MJ, Chen YC (2022) Digital transformation toward AI-augmented public administration: the perception of government employees and the willingness to use AI in government. Gov. Inf. Q. 39(2):101664. https://doi.org/10.1016/j.giq.2021.101664

Akdim K, Casalo LV, Flavián C (2022) The role of utilitarian and hedonic aspects in the continuance intention to use social mobile apps. J. Retail. Consum. Serv. 66:102888. https://doi.org/10.1016/j.jretconser.2021.102888

Al-Masri AN, Ijeh A, Nasir M (2019) Smart city framework development: challenges and solutions. Smart Technologies and Innovation for a Sustainable Future, Cham

Alalwan AA, Dwivedi YK, Rana NP (2017) Factors influencing adoption of mobile banking by Jordanian bank customers: extending UTAUT2 with trust. Int. J. Inf. Manag. 37(3):99–110. https://doi.org/10.1016/j.ijinfomgt.2017.01.002

Ali A, Li C, Hussain A, Bakhtawar (2020) Hedonic shopping motivations and obsessive–compulsive buying on the internet. Glob. Bus. Rev. 25(1):198–215. https://doi.org/10.1177/0972150920937535

Ali U, Mehmood A, Majeed MF, Muhammad S, Khan MK, Song HB, Malik KM (2019) Innovative citizen’s services through public cloud in Pakistan: user’s privacy concerns and impacts on adoption. Mob. Netw. Appl. 24(1):47–68. https://doi.org/10.1007/s11036-018-1132-x

Almaiah MA, Alamri MM, Al-Rahmi W (2019) Applying the UTAUT model to explain the students’ acceptance of mobile learning system in higher education. IEEE Access 7:174673–174686. https://doi.org/10.1109/access.2019.2957206

Annaswamy TM, Verduzco-Gutierrez M, Frieden L (2020) Telemedicine barriers and challenges for persons with disabilities: COVID-19 and beyond. Disabil Health J 13(4):100973. https://doi.org/10.1016/j.dhjo.2020.100973.3

Aslesen HW, Martin R, Sardo S (2019) The virtual is reality! On physical and virtual space in software firms’ knowledge formation. Entrepreneurship Regional Dev. 31(9-10):669–682. https://doi.org/10.1080/08985626.2018.1552314

Bagozzi RP, Phillips LW (1982) Representing and testing organizational theories: a holistic construal. Adm. Sci. Q. 27(3):459–489. https://doi.org/10.2307/2392322

Bai B, Guo ZQ (2022) Understanding users’ continuance usage behavior towards digital health information system driven by the digital revolution under COVID-19 context: an extended UTAUT model. Psychol. Res. Behav. Manag. 15:2831–2842. https://doi.org/10.2147/prbm.S364275

Bélanger F, Carter L (2008) Trust and risk in e-government adoption. J. Strategic Inf. Syst. 17(2):165–176. https://doi.org/10.1016/j.jsis.2007.12.002

Blasi S, Ganzaroli A, De Noni I (2022) Smartening sustainable development in cities: strengthening the theoretical linkage between smart cities and SDGs. Sustain. Cities Soc. 80:103793. https://doi.org/10.1016/j.scs.2022.103793

Botelho FHF (2021) Accessibility to digital technology: virtual barriers, real opportunities. Assistive Technol. 33:27–34. https://doi.org/10.1080/10400435.2021.1945705

Brehm, JW (1966). A theory of psychological reactance . Academic Press

Buckingham SA, Walker T, Morrissey K, Smartline Project T (2022) The feasibility and acceptability of digital technology for health and wellbeing in social housing residents in Cornwall: a qualitative scoping study. Digital Health 8:20552076221074124. https://doi.org/10.1177/20552076221074124

Bygrave W, Minniti M (2000) The social dynamics of entrepreneurship. Entrepreneurship Theory Pract. 24(3):25–36. https://doi.org/10.1177/104225870002400302

Cai Y, Qi W, Yi FM (2023) Smartphone use and willingness to adopt digital pest and disease management: evidence from litchi growers in rural China. Agribusiness 39(1):131–147. https://doi.org/10.1002/agr.21766

Carney F, Kandt J (2022) Health, out-of-home activities and digital inclusion in later life: implications for emerging mobility services. Journal of Transport & Health 24:101311. https://doi.org/10.1016/j.jth.2021.101311

Chao CM (2019) Factors determining the behavioral intention to use mobile learning: an application and extension of the UTAUT model. Front. Psychol. 10:1652. https://doi.org/10.3389/fpsyg.2019.01652

Chen HY, Chen HY, Zhang W, Yang CD, Cui HX (2021) Research on marketing prediction model based on Markov Prediction. Wirel. Commun. Mob. Comput. 2021(9):4535181. https://doi.org/10.1155/2021/4535181

Chen J, Cui MY, Levinson D (2024) The cost of working: measuring physical and virtual access to jobs. Int. J. Urban Sci. 28(2):318–334. https://doi.org/10.1080/12265934.2023.2253208

Chen JX, Wang T, Fang ZY, Wang HT (2023) Research on elderly users’ intentions to accept wearable devices based on the improved UTAUT model. Front. Public Health 10(12):1035398. https://doi.org/10.3389/fpubh.2022.1035398

Chen KF, Guaralda M, Kerr J, Turkay S (2024) Digital intervention in the city: a conceptual framework for digital placemaking. Urban Des. Int. 29(1):26–38. https://doi.org/10.1057/s41289-022-00203-y

Chen L, Zhang H (2021) Strategic authoritarianism: the political cycles and selectivity of China’s tax-break policy. Am. J. Political Sci. 65(4):845–861. https://doi.org/10.1111/ajps.12648

Chiu YTH, Hofer KM (2015) Service innovation and usage intention: a cross-market analysis. J. Serv. Manag. 26(3):516–538. https://doi.org/10.1108/josm-10-2014-0274

Chou SW, Min HT, Chang YC, Lin CT (2010) Understanding continuance intention of knowledge creation using extended expectation-confirmation theory: an empirical study of Taiwan and China online communities. Behav. Inf. Technol. 29(6):557–570. https://doi.org/10.1080/01449290903401986

Cocciolo A (2010) Alleviating physical space constraints using virtual space? A study from an urban academic library. Libr. Hi Tech. 28(4):523–535. https://doi.org/10.1108/07378831011096204

Davidson CA, Willner CJ, van Noordt SJR, Banz BC, Wu J, Kenney JG, Johannesen JK, Crowley MJ (2019) One-month stability of cyberball post-exclusion ostracism distress in Adolescents. J. Psychopathol. Behav. Assess. 41(3):400–408. https://doi.org/10.1007/s10862-019-09723-4

Dogruel L, Joeckel S, Bowman ND (2015) The use and acceptance of new media entertainment technology by elderly users: development of an expanded technology acceptance model. Behav. Inf. Technol. 34(11):1052–1063. https://doi.org/10.1080/0144929x.2015.1077890

Fang ML, Canham SL, Battersby L, Sixsmith J, Wada M, Sixsmith A (2019) Exploring privilege in the digital divide: implications for theory, policy, and practice. Gerontologist 59(1):E1–E15. https://doi.org/10.1093/geront/gny037

Fergus TA, Valentiner DP, McGrath PB, Gier-Lonsway SL, Kim HS (2012) Short forms of the social interaction anxiety scale and the social phobia scale. J. Personal. Assess. 94(3):310–320. https://doi.org/10.1080/00223891.2012.660291

Garone A, Pynoo B, Tondeur J, Cocquyt C, Vanslambrouck S, Bruggeman B, Struyven K (2019) Clustering university teaching staff through UTAUT: implications for the acceptance of a new learning management system. Br. J. Educ. Technol. 50(5):2466–2483. https://doi.org/10.1111/bjet.12867

Gu QH, Iop (2020) Frame-based conceptual model of smart city’s applications in China. International Conference on Green Development and Environmental Science and Technology (ICGDE), Changsha, CHINA

Guo MJ, Liu YH, Yu HB, Hu BY, Sang ZQ (2016) An overview of smart city in China. China Commun. 13(5):203–211. https://doi.org/10.1109/cc.2016.7489987

Gursoy D, Chi OHX, Lu L, Nunkoo R (2019) Consumers acceptance of artificially intelligent (AI) device use in service delivery. Int. J. Inf. Manag. 49:157–169. https://doi.org/10.1016/j.ijinfomgt.2019.03.008

Hair, JF, Hult, GTM, Ringle, CM, & Sarstedt, M (2022). A primer on partial least squares structural equation modeling (PLS-SEM) (3rd ed.). SAGE Publications, Inc

Hair Jr JF, Sarstedt M, Hopkins L, Kuppelwieser VG (2014) Partial least squares structural equation modeling (PLS-SEM): an emerging tool in business research. Eur. Bus. Rev. 26(2):106–121. https://doi.org/10.1108/ebr-10-2013-0128

Hair JF, Risher JJ, Sarstedt M, Ringle CM (2019) When to use and how to report the results of PLS-SEM. Eur. Bus. Rev. 31(1):2–24. https://doi.org/10.1108/ebr-11-2018-0203

Heerink M, Kröse B, Evers V, Wielinga B (2010) Assessing acceptance of assistive social agent technology by older adults: the almere model. Int. J. Soc. Robot. 2(4):361–375. https://doi.org/10.1007/s12369-010-0068-5

Henseler J, Ringle CM, Sarstedt M (2015) A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 43(1):115–135. https://doi.org/10.1007/s11747-014-0403-8

Henseler, J, Ringle, CM, & Sinkovics, RR (2009). The use of partial least squares path modeling in international marketing. In RR Sinkovics & PN Ghauri (Eds.), New Challenges to International Marketing (Vol. 20, pp. 277-319). Emerald Group Publishing Limited. https://doi.org/10.1108/S1474-7979(2009)0000020014

Hoque R, Sorwar G (2017) Understanding factors influencing the adoption of mHealth by the elderly: an extension of the UTAUT model. Int. J. Med. Inform. 101:75–84. https://doi.org/10.1016/j.ijmedinf.2017.02.002

Hsu CW, Peng CC (2022) What drives older adults’ use of mobile registration apps in Taiwan? An investigation using the extended UTAUT model. Inform. Health Soc. Care 47(3):258–273. https://doi.org/10.1080/17538157.2021.1990299

Hu J, Zhang H, Irfan M (2023) How does digital infrastructure construction affect low-carbon development? A multidimensional interpretation of evidence from China. J. Clean. Prod. 396(9):136467. https://doi.org/10.1016/j.jclepro.2023.136467

Hu TF, Guo RS, Chen C (2022) Understanding mobile payment adaption with the integrated model of UTAUT and MOA model. 2022 Portland International Conference on Management of Engineering and Technology (PICMET), Portland, OR, USA

Hutchins N, Allen A, Curran M, Kannis-Dymand L (2021) Social anxiety and online social interaction. Aust. Psychologist 56(2):142–153. https://doi.org/10.1080/00050067.2021.1890977

Iancu I, Iancu B (2020) Designing mobile technology for elderly. A theoretical overview. Technol. Forecast. Soc. Change 155(9):119977. https://doi.org/10.1016/j.techfore.2020.119977

Jakonen, OI (2024). Smart cities, virtual futures? - Interests of urban actors in mediating digital technology and urban space in Tallinn, Estonia. Urban Studies , 17. https://doi.org/10.1177/00420980241245871

Ji TT, Chen JH, Wei HH, Su YC (2021) Towards people-centric smart city development: investigating the citizens’ preferences and perceptions about smart-city services in Taiwan. Sustain. Cities Soc. 67(14):102691. https://doi.org/10.1016/j.scs.2020.102691

Jöreskog KG (1971) Simultaneous factor analysis in several populations. Psychometrika 36(4):409–426. https://doi.org/10.1007/BF02291366

Joshi Y, Uniyal DP, Sangroya D (2021) Investigating consumers’ green purchase intention: examining the role of economic value, emotional value and perceived marketplace influence. J. Clean. Prod. 328(8):129638. https://doi.org/10.1016/j.jclepro.2021.129638

Kadylak T, Cotten SR (2020) United States older adults’ willingness to use emerging technologies. Inf. Commun. Soc. 23(5):736–750. https://doi.org/10.1080/1369118x.2020.1713848

Kaihlanen AM, Virtanen L, Buchert U, Safarov N, Valkonen P, Hietapakka L, Hörhammer I, Kujala S, Kouvonen A, Heponiemi T (2022) Towards digital health equity-a qualitative study of the challenges experienced by vulnerable groups in using digital health services in the COVID-19 era. BMC Health Services Research 22(1):188. https://doi.org/10.1186/s12913-022-07584-4

Khan HH, Malik MN, Zafar R, Goni FA, Chofreh AG, Klemes JJ, Alotaibi Y (2020) Challenges for sustainable smart city development: a conceptual framework. Sustain. Dev. 28(5):1507–1518. https://doi.org/10.1002/sd.2090

Kim S, Park H (2013) Effects of various characteristics of social commerce (s-commerce) on consumers’ trust and trust performance. Int. J. Inf. Manag. 33(2):318–332. https://doi.org/10.1016/j.ijinfomgt.2012.11.006

Leary RB, Vann RJ, Mittelstaedt JD (2019) Perceived marketplace influence and consumer ethical action. J. Consum. Aff. 53(3):1117–1145. https://doi.org/10.1111/joca.12220

Leary RB, Vann RJ, Mittelstaedt JD, Murphy PE, Sherry JF (2014) Changing the marketplace one behavior at a time: perceived marketplace influence and sustainable consumption. J. Bus. Res. 67(9):1953–1958. https://doi.org/10.1016/j.jbusres.2013.11.004

Lee DD, Arya LA, Andy UU, Sammel MD, Harvie HS (2019) Willingness of women with pelvic floor disorders to use mobile technology to communicate with their health care providers. Female Pelvic Med. Reconstructive Surg. 25(2):134–138. https://doi.org/10.1097/spv.0000000000000668

Lee SW, Sung HJ, Jeon HM (2019) Determinants of continuous intention on food delivery apps: extending UTAUT2 with information quality. Sustainability 11(11):3141. https://doi.org/10.3390/su11113141

Li Mo QZ, Bai BY (2023) Height dissatisfaction and loneliness among adolescents: the chain mediating role of social anxiety and social support. Curr. Psychol. 42(31):27296–27304. https://doi.org/10.1007/s12144-022-03855-9

Liébana-Cabanillas F, Japutra A, Molinillo S, Singh N, Sinha N (2020) Assessment of mobile technology use in the emerging market: analyzing intention to use m-payment services in India. Telecommun. Policy 44(9):102009. https://doi.org/10.1016/j.telpol.2020.102009

Liu HD, Zhao HF (2022) Upgrading models, evolutionary mechanisms and vertical cases of service-oriented manufacturing in SVC leading enterprises: product-development and service-innovation for industry 4.0. Humanities Soc. Sci. Commun. 9(1):387. https://doi.org/10.1057/s41599-022-01409-9

Liu ZL, Wang Y, Xu Q, Yan T, Iop (2017) Study on smart city construction of Jiujiang based on IOT technology. 3rd International Conference on Advances in Energy, Environment and Chemical Engineering (AEECE), Chengdu, CHINA

Magee WJ, Eaton WW, Wittchen H-U, McGonagle KA, Kessler RC (1996) Agoraphobia, simple phobia, and social phobia in the National Comorbidity Survey. Arch. Gen. Psychiatry 53(2):159–168

Martins R, Oliveira T, Thomas M, Tomás S (2019) Firms’ continuance intention on SaaS use - an empirical study. Inf. Technol. People 32(1):189–216. https://doi.org/10.1108/itp-01-2018-0027

Miron AM, Brehm JW (2006) Reactance theory - 40 Years later. Z. Fur Sozialpsychologie 37(1):9–18. https://doi.org/10.1024/0044-3514.37.1.9

Mogaji E, Balakrishnan J, Nwoba AC, Nguyen NP (2021) Emerging-market consumers’ interactions with banking chatbots. Telematics and Informatics 65:101711. https://doi.org/10.1016/j.tele.2021.101711

Mogaji E, Bosah G, Nguyen NP (2023) Transport and mobility decisions of consumers with disabilities. J. Consum. Behav. 22(2):422–438. https://doi.org/10.1002/cb.2089

Mogaji E, Nguyen NP (2021) Transportation satisfaction of disabled passengers: evidence from a developing country. Transportation Res. Part D.-Transp. Environ. 98:102982. https://doi.org/10.1016/j.trd.2021.102982

Mora L, Gerli P, Ardito L, Petruzzelli AM (2023) Smart city governance from an innovation management perspective: theoretical framing, review of current practices, and future research agenda. Technovation 123:102717. https://doi.org/10.1016/j.technovation.2023.102717

Mulvale G, Moll S, Miatello A, Robert G, Larkin M, Palmer VJ, Powell A, Gable C, Girling M (2019) Codesigning health and other public services with vulnerable and disadvantaged populations: insights from an international collaboration. Health Expectations 22(3):284–297. https://doi.org/10.1111/hex.12864

Narzt W, Mayerhofer S, Weichselbaum O, Pomberger G, Tarkus A, Schumann M (2016) Designing and evaluating barrier-free travel assistance services. 3rd International Conference on HCI in Business, Government, and Organizations - Information Systems (HCIBGO) Held as Part of 18th International Conference on Human-Computer Interaction (HCI International), Toronto, CANADA

Nguyen GD, Dao THT (2024) Factors influencing continuance intention to use mobile banking: an extended expectation-confirmation model with moderating role of trust. Humanities Soc. Sci. Commun. 11(1):276. https://doi.org/10.1057/s41599-024-02778-z

Nicolas C, Kim J, Chi S (2020) Quantifying the dynamic effects of smart city development enablers using structural equation modeling. Sustain. Cities Soc. 53:101916. https://doi.org/10.1016/j.scs.2019.101916

Paköz MZ, Sözer C, Dogan A (2022) Changing perceptions and usage of public and pseudo-public spaces in the post-pandemic city: the case of Istanbul. Urban Des. Int. 27(1):64–79. https://doi.org/10.1057/s41289-020-00147-1

Perez AJ, Siddiqui F, Zeadally S, Lane D (2023) A review of IoT systems to enable independence for the elderly and disabled individuals. Internet Things 21:100653. https://doi.org/10.1016/j.iot.2022.100653

Purohit S, Arora R, Paul J (2022) The bright side of online consumer behavior: continuance intention for mobile payments. J. Consum. Behav. 21(3):523–542. https://doi.org/10.1002/cb.2017

Ringle CM, Sarstedt M, Straub DW (2012) Editor’s Comments: A Critical Look at the Use of PLS-SEM in “MIS Quarterly”. MIS Q. 36(1):III–XIV

Roubroeks MAJ, Ham JRC, Midden CJH (2010) The dominant robot: threatening robots cause psychological reactance, especially when they have incongruent goals. 5th International Conference on Persuasive Technology, Copenhagen, DENMARK

Saparudin M, Rahayu A, Hurriyati R, Sultan MA, Ramdan AM, Ieee (2020) Consumers’ continuance intention use of mobile banking in Jakarta: extending UTAUT models with trust. 5th International Conference on Information Management and Technology (ICIMTech), Bandung, Indonesia

Scavarelli A, Arya A, Teather RJ (2021) Virtual reality and augmented reality in social learning spaces: a literature review. Virtual Real. 25(1):257–277. https://doi.org/10.1007/s10055-020-00444-8

Schneider AB, Leonard B (2022) From anxiety to control: mask-wearing, perceived marketplace influence, and emotional well-being during the COVID-19 pandemic. J. Consum. Aff. 56(1):97–119. https://doi.org/10.1111/joca.12412

Schou J, Pors AS (2019) Digital by default? A qualitative study of exclusion in digitalised welfare. Soc. Policy Adm. 53(3):464–477. https://doi.org/10.1111/spol.12470

Schultz LT, Heimberg RG (2008) Attentional focus in social anxiety disorder: potential for interactive processes. Clin. Psychol. Rev. 28(7):1206–1221. https://doi.org/10.1016/j.cpr.2008.04.003

Shibusawa H (2000) Cyberspace and physical space in an urban economy. Pap. Regional Sci. 79(3):253–270. https://doi.org/10.1007/pl00013610

Susanto A, Mahadika PR, Subiyakto A, Nuryasin, Ieee (2018) Analysis of electronic ticketing system acceptance using an extended unified theory of acceptance and use of technology (UTAUT). 6th International Conference on Cyber and IT Service Management (CITSM), Parapat, Indonesia

Susilawati C, Wong J, Chikolwa B (2010) Public participation, values and interests in the procurement of infrastructure projects in Australia: a review and future research direction. 2010 International Conference on Construction and Real Estate Management, Brisbane, Australia

Tam C, Santos D, Oliveira T (2020) Exploring the influential factors of continuance intention to use mobile Apps: extending the expectation confirmation model. Inf. Syst. Front. 22(1):243–257. https://doi.org/10.1007/s10796-018-9864-5

Teo T, Zhou MM, Fan ACW, Huang F (2019) Factors that influence university students’ intention to use Moodle: a study in Macau. EtrD-Educ. Technol. Res. Dev. 67(3):749–766. https://doi.org/10.1007/s11423-019-09650-x

Thapa S, Nielsen JB, Aldahmash AM, Qadri FR, Leppin A (2021) Willingness to use digital health tools in patient care among health care professionals and students at a university hospital in Saudi Arabia: quantitative cross-sectional survey. JMIR Med. Educ. 7(1):e18590. https://doi.org/10.2196/18590

Tian X, Solomon DH, Brisini KS (2020) How the comforting process fails: psychological reactance to support messages. J. Commun. 70(1):13–34. https://doi.org/10.1093/joc/jqz040

Troisi O, Fenza G, Grimaldi M, Loia F (2022) Covid-19 sentiments in smart cities: the role of technology anxiety before and during the pandemic. Computers in Human Behavior 126:106986. https://doi.org/10.1016/j.chb.2021.106986

Venkatesh V, Brown SA (2001) A longitudinal investigation of personal computers in homes: adoption determinants and emerging challenges. MIS Q. 25(1):71–102. https://doi.org/10.2307/3250959

Venkatesh V, Morris MG, Davis GB, Davis FD (2003) User acceptance of information technology: toward a unified view. MIS Q. 27(3):425–478. https://doi.org/10.2307/30036540

Vinnikova A, Lu LD, Wei JC, Fang GB, Yan J (2020) The Use of smartphone fitness applications: the role of self-efficacy and self-regulation. International Journal of Environmental Research and Public Health 17(20):7639. https://doi.org/10.3390/ijerph17207639

Wang BA, Zhang R, Wang Y (2021) Mechanism influencing older people’s willingness to use intelligent aged-care products. Healthcare 9(7):864. https://doi.org/10.3390/healthcare9070864

Wang CHJ, Steinfeld E, Maisel JL, Kang B (2021) Is your smart city inclusive? Evaluating proposals from the US department of transportation’s smart city challenge. Sustainable Cities and Society 74:103148. https://doi.org/10.1016/j.scs.2021.103148

Wang XY (2007) Mutually augmented virtual environments for architecural design and collaboration. 12th Computer-Aided Architectural Design Futures Conference, Sydney, Australia

Werner P, Karnieli E (2003) A model of the willingness to use telemedicine for routine and specialized care. J. Telemed. Telecare 9(5):264–272. https://doi.org/10.1258/135763303769211274

Yadav J, Saini AK, Yadav AK (2019) Measuring citizens engagement in e-Government projects - Indian perspective. J. Stat. Manag. Syst. 22(2):327–346. https://doi.org/10.1080/09720510.2019.1580908

Yang CC, Liu C, Wang YS (2023) The acceptance and use of smartphones among older adults: differences in UTAUT determinants before and after training. Libr. Hi Tech. 41(5):1357–1375. https://doi.org/10.1108/lht-12-2021-0432

Yang K, Forney JC (2013) The moderating role of consumer technology anxiety in mobile shopping adoption: differential effects of facilitating conditions and social influences. J. Electron. Commer. Res. 14(4):334–347

Yeung HL, Hao P (2024) Telecommuting amid Covid-19: the Governmobility of work-from-home employees in Hong Kong. Cities 148:104873. https://doi.org/10.1016/j.cities.2024.104873

Zander V, Gustafsson C, Stridsberg SL, Borg J (2023) Implementation of welfare technology: a systematic review of barriers and facilitators. Disabil. Rehabilitation-Assistive Technol. 18(6):913–928. https://doi.org/10.1080/17483107.2021.1938707

Zeebaree M, Agoyi M, Agel M (2022) Sustainable adoption of e-government from the UTAUT perspective. Sustainability 14(9):5370. https://doi.org/10.3390/su14095370

Zhang YX, Liu HX, Kang SC, Al-Hussein M (2020) Virtual reality applications for the built environment: Research trends and opportunities. Autom. Constr. 118:103311. https://doi.org/10.1016/j.autcon.2020.103311

Zhong YP, Oh S, Moon HC (2021) Service transformation under industry 4.0: investigating acceptance of facial recognition payment through an extended technology acceptance model. Technology in Society 64:101515. https://doi.org/10.1016/j.techsoc.2020.101515

Zhu DH, Deng ZZ (2021) Effect of social anxiety on the adoption of robotic training partner. Cyberpsychology Behav. Soc. Netw. 24(5):343–348. https://doi.org/10.1089/cyber.2020.0179

Acknowledgements

This research was supported by the National Social Science Foundation of China, grant number 22BGJ037; the Fundamental Research Funds for the Provincial Universities of Zhejiang, grant number GB202301004; and the Zhejiang Province University Students Science and Technology Innovation Activity Program, grant numbers 2023R403013, 2023R403010 & 2023R403086.

Author information

These authors contributed equally: Chengxiang Chu, Zhenyang Shen, Hanyi Xu.

Authors and Affiliations

School of Management, Zhejiang University of Technology, Hangzhou, China

Chengxiang Chu, Zhenyang Shen, Qizhi Wei & Cong Cao

Law School, Zhejiang University of Technology, Hangzhou, China

Contributions

Conceptualisation: C.C., CX.C. and ZY.S.; Methodology: CX.C. and HY.X.; Validation: ZY.S. and QZ.W.; Formal analysis: HY.X.; Investigation: CX.C., ZY.S. and HY.X.; Resources: C.C.; Data curation: CX.C. and HY.X.; Writing–original draft preparation: CX.C., ZY.S., HY.X. and QZ.W.; Writing–review & editing: CX.C. and C.C.; Visualisation: ZY.S. and HY.X.; Supervision: C.C.; Funding acquisition: C.C., CX.C. and ZY.S.; all authors approved the final manuscript to be submitted.

Corresponding author

Correspondence to Cong Cao .

Ethics declarations

Ethical approval.

Ethical approval for the involvement of human subjects in this study was granted by the Institutional Review Board of the School of Management, Zhejiang University of Technology, China (reference number CC-2023-1-0008-0005-SOM-ZJUT).

Informed consent

Informed consent was obtained from all individual participants included in the study.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A. Measurement items

Performance Expectancy (Source: Ali et al. ( ))

1. Use of ‘accessibility infrastructure’ helps me to handle affairs quickly and efficiently.
2. ‘Accessibility infrastructure’ ensures the accessibility and availability of facilities for handling my affairs.
3. ‘Accessibility infrastructure’ saves time in handling my affairs.
4. ‘Accessibility infrastructure’ saves effort in handling my affairs.

Psychological Reactance (Source: Tian et al. (2020))

1. The existence or sudden intervention of ‘accessibility infrastructure’ makes me feel angry.
2. The existence or sudden intervention of ‘accessibility infrastructure’ makes me feel irritated.
3. I criticised its existence while using the ‘accessibility infrastructure’.
4. When using the ‘accessibility infrastructure’, I preferred the original state.

Perceived Institutional Support (Source: Almaiah et al. (2019); Garone et al. (2019))

1. My country helps me use the ‘accessibility infrastructure’.
2. Public institutions that are important to me think that I should use the ‘accessibility infrastructure’.
3. I believe that my country supports the use of the ‘accessibility infrastructure’.

Perceived Marketplace Influence (Source: Almaiah et al. (2019); Garone et al. (2019))

1. I believe that many people in my country use the ‘accessibility infrastructure’.
2. I believe that many people in my country desire to use the ‘accessibility infrastructure’.
3. I believe that many people in my country approve of using the ‘accessibility infrastructure’.

Effort Expectancy (Source: Venkatesh et al. (2003))

1. My interactions with the ‘accessibility infrastructure’ are clear and understandable.
2. It is easy for me to become skilful in using the ‘accessibility infrastructure’.
3. Learning to operate the ‘accessibility infrastructure’ is easy for me.

Facilitating Conditions (Source: Venkatesh et al. (2003))

1. I have the resources necessary to use the ‘accessibility infrastructure’.
2. I have the knowledge necessary to use the ‘accessibility infrastructure’.
3. The ‘accessibility infrastructure’ is not compatible with other infrastructure I use.
4. A specific person (or group) is available to assist me with ‘accessibility infrastructure’ difficulties.

Social Interaction Anxiety (Source: Fergus et al. (2012))

1. I feel tense if I talk about myself or my feelings.
2. I tense up if I meet an acquaintance in the street.
3. I feel tense if I am alone with one other person.
4. I feel nervous mixing with people I don’t know well.
5. I worry about being ignored when in a group.
6. I feel tense mixing in a group.

Intention to Use (Source: Teo et al. (2019))

1. If I had access to the ‘accessibility infrastructure’, I would intend to use it.
2. If I had access to the ‘accessibility infrastructure’ in the coming months, I believe that I would use it rather than taking other measures.
3. I expect that I will use the ‘accessibility infrastructure’ in my daily life in the future.
4. I plan to use the ‘accessibility infrastructure’ in my daily life in the future.
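In reporting, responses to Likert-type items like these are usually modelled as reflective indicators, and construct quality is then summarised with composite reliability (CR) and average variance extracted (AVE). The Python sketch below shows how those two statistics are conventionally computed from standardised loadings; the loadings are hypothetical placeholders, not values reported in this paper.

```python
# Minimal sketch: conventional CR and AVE computations from standardised
# item loadings, as typically reported alongside PLS-SEM measurement models.
# The example loadings are hypothetical, not taken from the paper.
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings, dtype=float)
    error_var = 1.0 - lam**2            # error variance of each standardised item
    return lam.sum()**2 / (lam.sum()**2 + error_var.sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardised loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam**2))

# Hypothetical loadings for the four Performance Expectancy items above
pe_loadings = [0.82, 0.79, 0.85, 0.77]
print(f"CR  = {composite_reliability(pe_loadings):.3f}")       # common threshold: > 0.7
print(f"AVE = {average_variance_extracted(pe_loadings):.3f}")  # common threshold: > 0.5
```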

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article.

Chu, C., Shen, Z., Xu, H. et al. How to avoid sinking in swamp: exploring the intentions of digitally disadvantaged groups to use a new public infrastructure that combines physical and virtual spaces. Humanit Soc Sci Commun 11 , 1135 (2024). https://doi.org/10.1057/s41599-024-03684-0

Received : 28 October 2023

Accepted : 29 August 2024

Published : 04 September 2024

DOI : https://doi.org/10.1057/s41599-024-03684-0


A study of the effect of viewing online health popular science information on users' willingness to change health behaviors – based on the psychological distance perspective

  • Published: 05 September 2024


  • Jingfang Liu 1 &
  • Shiqi Wang 1  

Along with increased health awareness and the advent of the information age, online health popular science information (OHPSI) has received growing attention. However, it remains unclear how the wealth of online health information influences users to change unhealthy behavioral habits. Therefore, based on the psychological distance perspective, our research investigated the effect of viewing online health information on users' willingness to change their health behaviors in the future. In addition, this study introduced protection motivation theory to further investigate the mediating effect of protection motivation in the mechanisms of psychological distance in online health information. The data were obtained through a questionnaire survey, and the proposed hypotheses were validated using SmartPLS software. Of the respondents, 87.28% were aged 18–40 years; people in this age group face higher pressure from study and work and live fast-paced lives with less free time, which makes them more likely to pay attention to OHPSI to improve their health. The age profile of the sample is therefore in line with the research purpose of this paper, which enhances the authenticity and reliability of the conclusions. The study found that the temporal, social, hypothetical, and experiential distances within psychological distance can positively influence users' self-protection motivation, and that protection motivation has a positive effect on users' willingness to change health behaviors. In addition, protection motivation completely mediates the influence of psychological distance on users' willingness to change health behaviors after viewing online health information. The research not only expands the scope of application of construal level theory and protection motivation theory but also has significant implications for creators of OHPSI and public health departments.
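To illustrate the mediation claim, the Python sketch below bootstraps an indirect effect on simulated data. It is not the study's SmartPLS analysis: the path coefficients, sample size, and variable names (distance, motivation, willingness) are hypothetical, and the code simply demonstrates the standard product-of-paths test with a percentile confidence interval.

```python
# Illustrative sketch only: bootstrap test of a simple mediation path
# (psychological distance -> protection motivation -> willingness to change)
# on simulated data; all names and coefficients are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
n = 400
distance = rng.normal(0, 1, n)                        # psychological distance (X)
motivation = 0.5 * distance + rng.normal(0, 1, n)     # protection motivation (M)
willingness = 0.6 * motivation + rng.normal(0, 1, n)  # willingness to change (Y)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                        # a-path: M regressed on X
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]  # b-path: Y on M, controlling X
    return a * b

boot = np.array([
    indirect_effect(distance[idx], motivation[idx], willingness[idx])
    for idx in (rng.integers(0, n, n) for _ in range(2000))
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # CI excluding 0 supports mediation
```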


Data availability

Data sharing is not applicable to this article owing to privacy and ethical restrictions.

Abbreviations

  • Online health popular science information
  • Construal level theory
  • Protection motivation theory
  • Temporal distance
  • Social distance
  • Hypothetical distance
  • Experiential distance
  • Protection motivation
  • Willingness to change health behaviors
  • Partial least squares
  • Composite reliability
  • Average variance extracted


Funding

This research received no external funding.

Author information

Authors and Affiliations

School of Management, Shanghai University, Shanghai, 201800, China

Jingfang Liu & Shiqi Wang

Contributions

Conceptualization, J.L. and S.W.; methodology, J.L. and S.W.; software, S.W.; validation, S.W.; formal analysis, S.W.; investigation, S.W.; resources, S.W.; data curation, S.W.; writing—original draft preparation, S.W.; writing—review and editing, S.W.; visualization, S.W. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Shiqi Wang .

Ethics declarations

Ethical approval

This project received ethical approval from the Ethics Committee of Shanghai University, with ethics approval number ECSHU 2023–072.

Informed consent

Not applicable.

Conflicts of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Liu, J., Wang, S. A study of the effect of viewing online health popular science information on users' willingness to change health behaviors – based on the psychological distance perspective. Current Psychology (2024). https://doi.org/10.1007/s12144-024-06582-5

Accepted: 15 August 2024

Published: 05 September 2024
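For readers who want to cite the article, below is a minimal BibTeX sketch assembled from the details on this page. All field values (authors, title, journal, year, DOI) are taken from the citation above; the entry key is an arbitrary placeholder, not one supplied by the publisher.

    % Hypothetical entry key; field values copied from the page metadata above.
    @article{liu2024onlinehealth,
      author  = {Liu, Jingfang and Wang, Shiqi},
      title   = {A study of the effect of viewing online health popular science
                 information on users' willingness to change health behaviors --
                 based on the psychological distance perspective},
      journal = {Current Psychology},
      year    = {2024},
      doi     = {10.1007/s12144-024-06582-5}
    }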


Keywords

  • Willingness to change health behavior
  • Psychological distance

Search Begins for Georgia Tech’s Executive Vice President for Research

President Ángel Cabrera has convened a search committee, chaired by College of Sciences Dean Susan Lozier, charged with selecting Georgia Tech’s next executive vice president for research (EVPR). To assist with the process, the Institute has retained the services of executive search firm WittKieffer. 

“I thank all the members of the search committee and committee chair Dean Lozier for conducting a thorough search to identify our next executive vice president for research,” said President Cabrera. “As one of the nation’s foremost academic research institutions, Georgia Tech is looking for a leader who can sustain the growth of our research enterprise, build the infrastructure necessary to support it, and deliver on our mission to advance technology and improve the human condition.”

WittKieffer will host several town halls to gather input from the Georgia Tech community on the preferred qualifications of the next EVPR.  

Community Engagement Schedule

Georgia Tech Staff Town Hall
Tuesday, September 10 at 10:00 a.m.
Hybrid: Marcus Nanotechnology Building, 345 Ferst Drive, Room 1116 (register for virtual attendance)

GTRI Town Hall
Tuesday, September 10 at 12:00 p.m.
Virtual only (details forthcoming for GTRI faculty and staff)

Georgia Tech Faculty Town Hall
Tuesday, September 10 at 2:00 p.m.
Hybrid: Howey Physics L3 Classroom (register for virtual attendance)

Open Georgia Tech and GTRI Town Hall
Wednesday, September 11 at 11:00 a.m.
Virtual only (register online)

Additional Information

Both internal and external candidates are invited to apply. For more details, including the position description, the application process, a list of search committee members, and key dates, visit the EVPR search webpage. Regents' Professor Tim Lieuwen has been appointed interim EVPR and will serve until the new EVPR is in place.


Shelley Wunder-Smith, Director of Research Communications, [email protected]
