Search by keyword in the Instrumentation field and/or limit to the Publication Type: Tests/Questionnaires; OR search by keyword in the Instrumentation Index (click on More at the navigation bar at the top of the page and select Indexes)
Search by instrument name and additional subject headings such as (MH "Reliability and Validity+"); OR limit search results by selecting Publication Types: Research Instrument Utilization or Research Instrument Validation
the world's largest database of education literature; includes a variety of document types: journal articles, research reports, curriculum and teaching guides, conference papers, and books
Search by instrument name or acronym and limit to Publication Type: Tests/Questionnaires [articles where an instrument is included]
Search by keyword and limit results to Document type: Test/Questionnaires
Search by instrument name and additional keyword or thesaurus terms such as test validity, content validity, construct validity, interrater reliability, test reliability, test reviews, etc. in the Descriptor field
database of descriptive information for over 20,000 tests from varying fields with an emphasis on education
Search by instrument name or by variable. Check the appendices of dissertations for the full-text instruments used.
Definition:
A questionnaire is a research tool or survey instrument that consists of a set of questions or prompts designed to gather information from individuals or groups of people.
It is a standardized way of collecting data from a large number of people by asking them a series of questions related to a specific topic or research objective. The questions may be open-ended or closed-ended, and the responses can be quantitative or qualitative. Questionnaires are widely used in research, marketing, social sciences, healthcare, and many other fields to collect data and insights from a target population.
The history of questionnaires can be traced back to the ancient Greeks, who used questionnaires as a means of assessing public opinion. However, the modern history of questionnaires began in the late 19th century with the rise of social surveys.
The first social survey was conducted in the United States in 1874 by Francis A. Walker, who used a questionnaire to collect data on labor conditions. In the early 20th century, questionnaires became a popular tool for conducting social research, particularly in the fields of sociology and psychology.
One of the most influential figures in the development of the questionnaire was the psychologist Raymond Cattell, who in the 1940s and 1950s developed the personality questionnaire, a standardized instrument for measuring personality traits. Cattell’s work helped establish the questionnaire as a key tool in personality research.
In the 1960s and 1970s, the use of questionnaires expanded into other fields, including market research, public opinion polling, and health surveys. With the rise of computer technology, questionnaires became easier and more cost-effective to administer, leading to their widespread use in research and business settings.
Today, questionnaires are used in a wide range of settings, including academic research, business, healthcare, and government. They continue to evolve as a research tool, with advances in computer technology and data analysis techniques making it easier to collect and analyze data from large numbers of participants.
Types of Questionnaires are as follows:
This type of questionnaire has a fixed format with predetermined questions that the respondent must answer. The questions are usually closed-ended, which means that the respondent must select a response from a list of options.
An unstructured questionnaire does not have a fixed format or predetermined questions. Instead, the interviewer or researcher can ask open-ended questions to the respondent and let them provide their own answers.
An open-ended questionnaire allows the respondent to answer the question in their own words, without any pre-determined response options. The questions usually start with phrases like “how,” “why,” or “what,” and encourage the respondent to provide more detailed and personalized answers.
In a closed-ended questionnaire, the respondent is given a set of predetermined response options to choose from. This type of questionnaire is easier to analyze and summarize, but may not provide as much insight into the respondent’s opinions or attitudes.
A mixed questionnaire is a combination of open-ended and closed-ended questions. This type of questionnaire allows for more flexibility in terms of the questions that can be asked, and can provide both quantitative and qualitative data.
In a pictorial questionnaire, instead of using words to ask questions, the questions are presented in the form of pictures, diagrams or images. This can be particularly useful for respondents who have low literacy skills, or for situations where language barriers exist. Pictorial questionnaires can also be useful in cross-cultural research where respondents may come from different language backgrounds.
The types of questions in a questionnaire are as follows:
These questions have several options for participants to choose from. They are useful for getting quantitative data and can be used to collect demographic information.
These questions ask participants to rate something on a scale (e.g. from 1 to 10). They are useful for measuring attitudes and opinions.
These questions allow participants to answer in their own words and provide more in-depth and detailed responses. They are useful for getting qualitative data.
These questions ask participants to rate how much they agree or disagree with a statement. They are useful for measuring attitudes and opinions.
How strongly do you agree or disagree with the following statement:
“I enjoy exercising regularly.”
These questions ask about the participant’s personal information such as age, gender, ethnicity, education level, etc. They are useful for segmenting the data and analyzing results by demographic groups.
These questions only have two options: Yes or No. They are useful for getting simple, straightforward answers to a specific question.
Have you ever traveled outside of your home country?
These questions ask participants to rank several items in order of preference or importance. They are useful for measuring priorities or preferences.
Please rank the following factors in order of importance when choosing a restaurant:
These questions present a matrix or grid of options that participants can choose from. They are useful for getting data on multiple variables at once.
Example matrix (each statement is rated on the same response scale):
The product is easy to use
The product meets my needs
The product is affordable
These questions present two options that are opposite or contradictory. They are useful for measuring binary or polarized attitudes.
Do you support the death penalty?
Step-by-Step Guide for Making a Questionnaire:
There are several modes of questionnaire administration. The choice of mode depends on the research objectives, sample size, and available resources. Some common modes of administration include:
Title of the Survey: Customer Satisfaction Survey
Introduction:
We appreciate your business and would like to ensure that we are meeting your needs. Please take a few minutes to complete this survey so that we can better understand your experience with our products and services. Your feedback is important to us and will help us improve our offerings.
Instructions:
Please read each question carefully and select the response that best reflects your experience. If you have any additional comments or suggestions, please feel free to include them in the space provided at the end of the survey.
1. How satisfied are you with our product quality?
2. How satisfied are you with our customer service?
3. How satisfied are you with the price of our products?
4. How likely are you to recommend our products to others?
5. How easy was it to find the information you were looking for on our website?
6. How satisfied are you with the overall experience of using our products and services?
7. Is there anything that you would like to see us improve upon or change in the future?
…………………………………………………………………………………………………………………………..
Conclusion:
Thank you for taking the time to complete this survey. Your feedback is valuable to us and will help us improve our products and services. If you have any further comments or concerns, please do not hesitate to contact us.
Some common applications of questionnaires include:
Some common purposes of questionnaires include:
Here are some situations when questionnaires might be used:
Here are some of the characteristics of questionnaires:
Some advantages of questionnaires are as follows:
Limitations of questionnaires are as follows:
What are Research Instruments?
Research instruments can be tests, surveys, scales, questionnaires, or even checklists.
To assure the strength of your study, it is important to use previously validated instruments!
Getting Started
Already know the full name of the instrument you're looking for?
Finding a research instrument can be very time-consuming!
This process involves three concrete steps:
It is common that sources will not provide the full instrument, but they will provide a citation with the publisher. In some cases, you may have to contact the publisher to obtain the full text.
Research Tip: Talk to your departmental faculty. Many of them have expertise in working with research instruments and can help you with this process.
There are various types of surveys to choose from, broadly categorized in two ways: by instrumentation and by the span of time involved. By instrumentation, surveys include the questionnaire and the interview. By time span, surveys comprise cross-sectional surveys and longitudinal surveys.
In survey research, the instruments that are utilized can be either a questionnaire or an interview (either structured or unstructured).
Typically, a questionnaire is a paper-and-pencil instrument that is administered to the respondents. The usual questions found in questionnaires are closed-ended questions, which are followed by response options. However, there are questionnaires that ask open-ended questions to explore the answers of the respondents.
Questionnaires have developed over the years. Today, questionnaires are used in various survey methods, according to how they are administered: self-administered, group-administered, and household drop-off. Among the three, the self-administered method is the one most often used by researchers nowadays. Self-administered questionnaires are widely known as the mail survey method; however, as response rates for mail surveys have declined, questionnaires are now commonly administered online, in the form of web surveys.
Between the two broad types of surveys, interviews are more personal and probing. Questionnaires do not provide the freedom to ask follow-up questions to explore the answers of the respondents, but interviews do.
An interview involves two persons: the researcher as interviewer and the respondent as interviewee. Several survey methods utilize interviews: the personal or face-to-face interview, the phone interview, and, more recently, the online interview.
The span of time needed to complete the survey brings us to the two different types of surveys: cross-sectional and longitudinal.
A cross-sectional survey collects information from respondents at a single point in time, usually via a questionnaire about a particular topic. For instance, a researcher might conduct a cross-sectional survey asking for teenagers' views on cigarette smoking as of May 2010. Cross-sectional surveys are sometimes used to identify the relationship between two variables, as in a comparative study; an example is a cross-sectional survey on the relationship between peer pressure and cigarette smoking among teenagers as of May 2010.
When the researcher gathers information over a period of time, or from one point in time to another, they are conducting a longitudinal survey. The aim of longitudinal surveys is to collect data and examine how the data change over time. Longitudinal surveys are used in cohort studies, panel studies, and trend studies.
Sarah Mae Sincero (Sep 21, 2012). Types of Survey. Retrieved Sep 19, 2024 from Explorable.com: https://explorable.com/types-of-survey
The text in this article is licensed under the Creative Commons Attribution 4.0 International license (CC BY 4.0).
Survey research is a method in which data are collected from a target population, called the sample, by personal interviews, online surveys, the telephone, or paper questionnaires. Some forms of survey research, such as online surveys, may be completed in an automated fashion.

The professionals at Statistics Solutions provide survey administration help to master’s and doctoral candidates in the survey administration phase of their research. The choice of survey instrument(s) used to gather data for your thesis or dissertation is critical. If you are planning to create your own survey instrument and administer it online (e.g., SurveyMonkey, QuestionPro, PsychData or Zoomerang), Statistics Solutions can help you create the survey questions and any subscales so they can be easily analyzed and answer your research questions. Our consultants can then help you validate your instrument and expedite the IRB approval process by helping you avoid the typical university and committee pitfalls.

If you are using an established instrument, our statistical consultants will help you understand the validity and reliability information and the statistical analysis appropriate for the instrument constructs. Our statistical consultants will then help you integrate this information into your dissertation.
Aligning theoretical framework, gathering articles, synthesizing gaps, articulating a clear methodology and data plan, and writing about the theoretical and practical implications of your research are part of our comprehensive dissertation editing services.
Key Terms and Concepts:

Survey instrument: The questionnaire or set of response items posed to a respondent is called a survey research instrument. The instrument may be a questionnaire or an interview, depending on the survey design.

Interviews and questionnaires: An interview uses face-to-face interaction, whereas a questionnaire uses mail and other indirect methods of collecting responses from a respondent.

Response structure: In survey research, the response structure is the format of the item. Structures may be open-ended, close-ended, multi-response, dichotomous, a ranking system, or a variety of other formats.

Survey error: In survey research, survey error includes factors such as selection of the wrong sample, wrong coding in a questionnaire, tabulating errors, data processing errors, interviewer bias, researcher bias, and misinterpretation of data.

Pretesting: Pretesting refers to all the essential steps involved in survey research before selecting the final sample. According to Converse and Presser (1986: 65), two pretests should be conducted before selecting the final sample.

Analysis of non-response: Some respondents do not complete the entire questionnaire; the unanswered questions become missing values. We can either exclude those cases during the analysis or fill in the missing values using missing value analysis.
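The non-response point can be sketched in code. Here is a minimal Python illustration (the response data and item names are invented for the example) of the two common options: excluding incomplete cases versus filling a missing value with a simple mean imputation:

```python
# Hypothetical questionnaire responses; None marks an unanswered item.
responses = [
    {"q1": 4, "q2": 5, "q3": 3},
    {"q1": 2, "q2": None, "q3": 4},   # respondent skipped q2
    {"q1": 5, "q2": 4, "q3": None},   # respondent skipped q3
]

# Option 1: listwise deletion -- keep only respondents who answered everything.
complete_cases = [r for r in responses if all(v is not None for v in r.values())]

# Option 2: simple mean imputation -- replace a missing value with the
# item's mean across the respondents who did answer it.
def impute_mean(rows, item):
    answered = [r[item] for r in rows if r[item] is not None]
    mean = sum(answered) / len(answered)
    return [{**r, item: mean} if r[item] is None else r for r in rows]

imputed = impute_mean(responses, "q2")
```

Listwise deletion is simpler but shrinks the sample; mean imputation keeps the sample size at the cost of understating variability. Real missing-value analysis offers more sophisticated methods than this sketch.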
Data Collection Methods:

Face-to-face interview: The most expensive but most reliable method of data collection. In face-to-face interviews, most respondents give complete and accurate answers. This method is used when the research requires deep exploration of opinion.

Mail survey: The questionnaire is sent to respondents by post or email. There is no interviewer bias with this method, but there is also no control over how respondents interact with the questionnaire.

Telephone: A fast method of data collection that supports open-ended responses and offers moderate control over interviewer bias.

Web survey: The least expensive and fastest method of data collection. It is appropriate when data are needed from a large population or internationally, and is best suited to quick, if less scientifically controlled, responses.
Survey Design Considerations:

Survey layout: For Internet or mail surveys, the layout should be attractive and easy to use: avoid multiple fonts, place the response area on the right side, separate questions clearly, and use an attractive color.

Survey length: The survey should be only as long as needed, within the constraint of the respondent’s attention span. A survey needs a minimum of three items to test a particular hypothesis.
Item Bias in Survey Research:

Ambiguity: Questions should be specific, and should avoid making the respondent uncomfortable about answering.

Rank lists: Respondents should not be asked to rank more than four or five items. Beyond that, respondents may give an arbitrary ranking just to get past the item.

Unfamiliar terms and jargon: Avoid unfamiliar words. Respondents must be able to answer the questions easily, and they cannot do this if the survey uses unfamiliar words or jargon.

Poor grammatical format: Weak grammatical format can introduce bias and should be avoided.

Hypothetical items: Avoid hypothetical items; they make it difficult for the respondent to answer.

Language differences: Items must carry the same meaning when the questionnaire is given to populations speaking different languages.

Types of items: Model items measure the variables in the survey model. Filter items eliminate unqualified respondents during post-processing. Cross-check items test the respondent's answers for consistency: for example, asking for the respondent's age in one place and their date of birth in another, then comparing the two.
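The cross-check idea can be illustrated with a short sketch. Assuming hypothetical items for reported age and reported birth year, a consistency flag might be computed like this:

```python
# Hypothetical cross-check items: reported age vs. reported birth year.
# A disagreement beyond the tolerance flags an inconsistent respondent.
def age_consistent(reported_age, birth_year, survey_year, tolerance=1):
    implied_age = survey_year - birth_year
    return abs(implied_age - reported_age) <= tolerance

flags = [
    age_consistent(34, 1990, 2024),  # implied age 34 -> consistent
    age_consistent(25, 1980, 2024),  # implied age 44 -> inconsistent
]
```

A small tolerance is allowed because the implied age depends on whether the respondent's birthday has passed in the survey year.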
Saul McLeod, PhD
Editor-in-Chief for Simply Psychology
BSc (Hons) Psychology, MRes, PhD, University of Manchester
Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.
Olivia Guy-Evans, MSc
Associate Editor for Simply Psychology
BSc (Hons) Psychology, MSc Psychology of Education
Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.
A questionnaire is a research instrument consisting of a series of questions for the purpose of gathering information from respondents. Questionnaires can be thought of as a kind of written interview. They can be carried out face to face, by telephone, computer, or post.
Questionnaires provide a relatively cheap, quick, and efficient way of obtaining large amounts of information from a large sample of people.
Data can be collected relatively quickly because the researcher would not need to be present when completing the questionnaires. This is useful for large populations when interviews would be impractical.
However, a problem with questionnaires is that respondents may lie due to social desirability. Most people want to present a positive image of themselves, and may lie or bend the truth to look good, e.g., pupils exaggerate revision duration.
Questionnaires can effectively measure the behavior, attitudes, preferences, opinions, and intentions of relatively large numbers of subjects more cheaply and quickly than other methods.
Often, a questionnaire uses both open and closed questions to collect data. This is beneficial as it means both quantitative and qualitative data can be obtained.
A closed-ended question requires a specific, limited response, often “yes” or “no,” or a choice that fits into pre-decided categories.
Data that can be placed into a category is called nominal data. The category can be restricted to as few as two options, i.e., dichotomous (e.g., “yes” or “no,” “male” or “female”), or include quite complex lists of alternatives from which the respondent can choose (e.g., polytomous).
Closed questions can also provide ordinal data (which can be ranked). This often involves using a continuous rating scale to measure the strength of attitudes or emotions.
For example, strongly agree / agree / neutral / disagree / strongly disagree / unable to answer.
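A rating scale like the one above is usually coded to integers before analysis. A minimal sketch, assuming an illustrative five-point agreement scale (the label-to-code mapping here is a common convention, not a standard):

```python
import statistics

# Illustrative coding of a five-point agreement scale to ordinal integers.
LIKERT = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

answers = ["agree", "strongly agree", "neutral", "agree", "disagree"]
codes = [LIKERT[a] for a in answers]

# Ordinal data supports rank-based summaries such as the median.
median_code = statistics.median(codes)
```

Because the codes are ordinal rather than interval, rank-based summaries like the median are safer than treating the distances between categories as equal.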
Closed questions have been used to research type A personality (e.g., Friedman & Rosenman, 1974) and also to assess life events that may cause stress (Holmes & Rahe, 1967) and attachment (Fraley, Waller, & Brennan, 2000).
Open questions allow for expansive, varied answers without preset options or limitations.
Open questions allow people to express what they think in their own words, in as much detail as they like. For example: “Can you tell me how happy you feel right now?”
Open questions will work better if you want to gather more in-depth answers from your respondents. These give no pre-set answer options and instead, allow the respondents to put down exactly what they like in their own words.
Open questions are often used for complex questions that cannot be answered in a few simple categories but require more detail and discussion.
Lawrence Kohlberg presented his participants with moral dilemmas. One of the most famous concerns a character called Heinz, who is faced with the choice between watching his wife die of cancer or stealing the only drug that could help her.
Participants were asked whether Heinz should steal the drug or not and, more importantly, for their reasons why upholding or breaking the law is right.
With some questionnaires suffering from a response rate as low as 5%, a questionnaire must be well designed.
There are several important factors in questionnaire design.
Question order.
Questions should progress logically from the least sensitive to the most sensitive, from the factual and behavioral to the cognitive, and from the more general to the more specific.
The researcher should ensure that previous questions do not influence the answer to a question.
Ethical issues.
At first sight, the postal questionnaire seems to offer the opportunity to get around the problem of interview bias by reducing the personal involvement of the researcher. Its other practical advantages are that it is cheaper than face-to-face interviews and can quickly contact many respondents scattered over a wide area.
However, these advantages must be weighed against the practical problems of conducting research by post. A lack of involvement by the researcher means there is little control over the information-gathering process.
The data might not be valid (i.e., truthful) as we can never be sure that the questionnaire was completed by the person to whom it was addressed.
That, of course, assumes there is a reply in the first place, and one of the most intractable problems of mailed questionnaires is a low response rate. This diminishes the reliability of the data.
A pilot study is a practice / small-scale study conducted before the main study.
It allows the researcher to try out the study with a few participants so that adjustments can be made before the main study, saving time and money.
How do psychological researchers analyze the data collected from questionnaires?
Psychological researchers analyze questionnaire data by looking for patterns and trends in people’s responses. They use numbers and charts to summarize the information.
They calculate things like averages and percentages to see what most people think or feel. They also compare different groups to see if there are any differences between them.
By doing these analyses, researchers can understand how people think, feel, and behave. This helps them make conclusions and learn more about how our minds work.
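The kind of summary described above (averages, percentages, comparisons between groups) can be sketched in a few lines. The ratings and group names here are invented for illustration:

```python
# Invented 1-10 satisfaction ratings, split by an illustrative demographic.
ratings = {
    "under_30": [8, 7, 9, 6, 8],
    "over_30": [5, 6, 7, 5, 6],
}

def mean(xs):
    return sum(xs) / len(xs)

# Average rating per group, for comparing demographic segments.
group_means = {group: mean(xs) for group, xs in ratings.items()}

# Percentage of all respondents who rated 7 or higher.
all_ratings = [x for xs in ratings.values() for x in xs]
pct_satisfied = 100 * sum(x >= 7 for x in all_ratings) / len(all_ratings)
```

In practice, researchers would follow such descriptive summaries with inferential tests to judge whether a difference between groups is statistically meaningful.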
Yes, questionnaires can be effective in gathering accurate data. When designed well, with clear and understandable questions, they allow individuals to express their thoughts, opinions, and experiences.
However, the accuracy of the data depends on factors such as the honesty and accuracy of respondents’ answers, their understanding of the questions, and their willingness to provide accurate information. Researchers strive to create reliable and valid questionnaires to minimize biases and errors.
It’s important to remember that while questionnaires can provide valuable insights, they are just one tool among many used in psychological research.
Yes, questionnaires can be used with diverse populations and cultural contexts. Researchers take special care to ensure that questionnaires are culturally sensitive and appropriate for different groups.
This means adapting the language, examples, and concepts to match the cultural context. By doing so, questionnaires can capture the unique perspectives and experiences of individuals from various backgrounds.
This helps researchers gain a more comprehensive understanding of human behavior and ensures that everyone’s voice is heard and represented in psychological research.
No, questionnaires are not the only method used in psychological research. Psychologists use a variety of research methods, including interviews, observations, experiments, and psychological tests.
Each method has its strengths and limitations, and researchers choose the most appropriate method based on their research question and goals.
Questionnaires are valuable for gathering self-report data, but other methods allow researchers to directly observe behavior, study interactions, or manipulate variables to test hypotheses.
By using multiple methods, psychologists can gain a more comprehensive understanding of human behavior and mental processes.
The semantic differential scale is a questionnaire format used to gather data on individuals’ attitudes or perceptions. It’s commonly incorporated into larger surveys or questionnaires to assess subjective qualities or feelings about a specific topic, product, or concept by quantifying them on a scale between two bipolar adjectives.
It presents respondents with a pair of opposite adjectives (e.g., “happy” vs. “sad”) and asks them to mark their position on a scale between them, capturing the intensity of their feelings about a particular subject.
It quantifies subjective qualities, turning them into data that can be statistically analyzed.
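As a small illustration of that quantification, assume a 1-to-7 item anchored by “sad” and “happy” (the marks are invented): the mean mark summarizes where responses sit between the two poles.

```python
# Invented marks on a 1-7 semantic-differential item: "sad" (1) ... "happy" (7).
marks = [6, 5, 7, 4, 6]

mean_position = sum(marks) / len(marks)  # where responses sit between the poles
scale_midpoint = 4
leans_toward_happy = mean_position > scale_midpoint
```

Averaging several such bipolar items about the same concept gives a profile of attitudes that can then be compared across groups or over time.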
Ayidiya, S. A., & McClendon, M. J. (1990). Response effects in mail surveys. Public Opinion Quarterly, 54 (2), 229–247. https://doi.org/10.1086/269200
Fraley, R. C., Waller, N. G., & Brennan, K. A. (2000). An item-response theory analysis of self-report measures of adult attachment. Journal of Personality and Social Psychology, 78, 350-365.
Friedman, M., & Rosenman, R. H. (1974). Type A behavior and your heart . New York: Knopf.
Gold, R. S., & Barclay, A. (2006). Order of question presentation and correlation between judgments of comparative and own risk. Psychological Reports, 99 (3), 794–798. https://doi.org/10.2466/PR0.99.3.794-798
Holmes, T. H., & Rahe, R. H. (1967). The social readjustment rating scale. Journal of psychosomatic research, 11(2) , 213-218.
Schwarz, N., & Hippler, H.-J. (1995). Subsequent questions may influence answers to preceding questions in mail surveys. Public Opinion Quarterly, 59 (1), 93–97. https://doi.org/10.1086/269460
Smith, C. P. (Ed.). (1992). Motivation and personality: Handbook of thematic content analysis . Cambridge University Press.
A research instrument is a survey, questionnaire, test, scale, rating, or tool designed to measure the variable(s), characteristic(s), or information of interest, often a behavioral or psychological characteristic. Research instruments can be helpful tools to your research study.
"Careful planning for data collection can help with setting realistic goals. Data collection instrumentation, such as surveys, physiologic measures (blood pressure or temperature), or interview guides, must be identified and described. Using previously validated collection instruments can save time and increase the study's credibility. Once the data collection procedure has been determined, a time line for completion should be established." (Pierce, 2009, p. 159)
A research instrument is developed as a method of data generation by researchers and information about the research instrument is shared in order to establish the credibility and validity of the method. Whether other researchers may use the research instrument is the decision of the original author-researchers. They may make it publicly available for free or for a price or they may not share it at all. Sources about research instruments have a purpose of describing the instrument to inform. Sources may or may not provide the instrument itself or the contact information of the author-researcher. The onus is on the reader-researcher to try to find the instrument itself or to contact the author-researcher to request permission for its use, if necessary.
Are you trying to find background information about a research instrument? Or are you trying to find and obtain an actual copy of the instrument?
If you need information about a research instrument, what kind of information do you need? Do you need information on the structure of the instrument, its content, its development, its psychometric reliability or validity? What do you need?
If you plan to obtain an actual copy of the instrument to use in research, you need to be concerned not only with obtaining the instrument, but also obtaining permission to use the instrument. Research instruments may be copyrighted. To obtain permission, contact the copyright holder in writing (print or email).
If someone posts a published test or instrument without the permission of the copyright holder, they may be violating copyright and could be legally liable.
What are you trying to measure? For example, if you are studying depression, are you trying to measure the duration of depression, the intensity of depression, the change over time of the episodes, … what? The instrument must measure what you need or it is useless to you.
Factors to consider when selecting an instrument are:
• Well-tested factorial structure, validity, and reliability
• Availability of supportive materials and technology for entering, analyzing, and interpreting results
• Availability of normative data as a reference for evaluating, interpreting, or placing in context individual test scores
• Applicability to a wide range of participants
• Whether it can also be used as a personal development tool/exercise
• User-friendliness and administrative ease
• Availability; can you obtain it?
• Does it require permission from the owner to use it?
• Financial cost
• Amount of time required
Check the validity and reliability of tests and instruments. Do they really measure what they claim to measure? Do they measure consistently over time, with different research subjects and ethnic groups, and after repeated use? Research articles that used the test will often include reliability and validity data.
Realize that searching for an instrument may take a lot of time. Instruments may be published in a book or article on a particular subject. They may be published and described in a dissertation. They may be posted on the Internet and freely available. A specific instrument may be found in multiple publications and have been used for a long time. Or it may be new and only described in a few places. It may only be available by contacting the person who developed it, who may or may not respond to your inquiry in a timely manner.
There are a variety of sources that may be used to search for research instruments. They include books, databases, Internet search engines, Web sites, journal articles, and dissertations.
A few key sources and search tips are listed in this guide.
If you plan to use an instrument in research, obtain permission from the copyright holder in writing (print or email); written permission is a record that you obtained it. It is a good idea to have the copyright holder state in writing that they are indeed the copyright holder and that they grant you permission to use the instrument. If you wish to publish the actual instrument in your paper, get permission for that, too. You may write about the instrument without obtaining permission. (But remember to cite it!)
Siobhan O’Connor
1 National University of Ireland Galway, Ireland
This commentary summarizes the contemporary design and use of surveys or questionnaires in nursing science, particularly in light of recent reporting guidelines to standardize and improve the quality of survey studies in healthcare research. The benefits, risks, and limitations of these types of data collection tools are also briefly discussed.
The use of research questionnaires or surveys in nursing is a long-standing tradition, dating back to the 1960s ( Logan, 1966 ) and 1970s ( Oberst, 1978 ), when the scientific discipline emerged. This type of tool enables nursing researchers to gather primary data from a specific population, whether patients, carers, nurses, or other stakeholders, to address gaps in the existing evidence base of a particular clinical, pedagogical, or policy area. However, the recent creation of a checklist for reporting survey studies, CROSS (the Consensus-Based Checklist for Reporting of Survey Studies), hints at problems in their design, development, administration, and reporting ( Sharma et al., 2021 ). This commentary focuses on the process of developing, validating, and administering surveys in nursing research and some ways to strengthen this methodological approach.
Ideally, surveys should be constructed to gather the minimum amount of information from respondents that provides good-quality data about a problem or phenomenon. Gathering large amounts of unnecessary data may complicate a survey, leading to a low response rate and weak findings. Therefore, time and expertise are needed when designing research surveys ( Robb & Shellenbarger, 2020 ). First, existing evidence should be reviewed to identify whether an existing survey could be utilized or refined. The COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN), and its associated database and critical appraisal checklist, could be employed to examine the psychometric properties of an established tool and its methodological quality before use ( Mokkink et al., 2016 ). For instance, Charette et al. (2020) followed the COSMIN approach when conducting a systematic review of the psychometric properties of scales that assessed new nurses' clinical competence.
If a new instrument needs to be developed, then reviewing relevant literature could help inform what should be measured (for example, what people know, think, feel, or do), along with guiding the content of specific survey questions ( Polit & Beck, 2020 ). Other techniques can be employed to create questions, including a Delphi study to gather expert opinion ( Bender et al., 2018 ) or focus groups with patients, clinicians, educators, students, or policy makers ( Tajik et al., 2010 ). Decisions about the style of survey questions also need consideration, as each style brings advantages and disadvantages. An open question gives a respondent free rein with their answer, which could uncover fresh insights on a topic. However, it may contribute to respondent fatigue if too many are asked, and the data can be time-consuming to analyze ( O’Cathain & Thomas, 2004 ).
Each question should be worded carefully, avoiding leading, composite, or presumptive questions, as well as questions that are vague, overly lengthy and complex, or that include double negatives, jargon, or terminology unfamiliar to the reader, so that what is being asked and answered is clear and consistent. The sequence of questions should also be logical, opening with more general non-threatening questions, followed by more specific ones that can be grouped or filtered accordingly, and closing with socio-demographic variables and a thank you ( Boynton & Greenhalgh, 2004 ). Closed fixed-choice questions can be formulated in a number of ways, including checklists, frequency or Likert-type scales, Guttman or cumulative scales, Thurstone scales, and rankings, which vary in their content, structure, and layout, and require either a dichotomous or multiple-choice response. The sensitivity of any measurement scale is important to ensure it accurately represents the respondent's answer and reduces the risk of bias ( Polit, 2014 ). Hence, piloting a draft survey with a small sample of intended respondents can help identify problems with ambiguous content, the format of questions, or confusing instructions or layouts.
The validity and reliability of a survey instrument should also be established to demonstrate that the questions are worded appropriately and elicit accurate answers. Validity is about accuracy, in terms of how well a survey measures what it is supposed to. It can be assessed in three ways: (1) face or content validity, (2) construct validity, and (3) criterion validity ( Rattray & Jones, 2007 ). Content validity looks at comprehensiveness, and whether questions adequately measure all aspects of a subject matter. For example, Devriendt et al. (2012) examined the content validity of the Safety Attitudes Questionnaire through expert clinician review and using the content validity index and a modified kappa index. Construct validity focuses on whether the concept(s) that underpin the questions in a survey correspond with contemporary theory and scientific knowledge. For instance, McSherry et al. (2002) employed factor analysis to determine construct validity for a Spirituality and Spiritual Care Rating Scale. Numerous research studies are often required to evaluate and refine the construct validity of a survey instrument to ensure it is robust. Some go further and investigate both convergent and discriminant validity, the two sub-types of construct validity ( Hallett et al., 2018 ; Zhao et al., 2020 ).
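As an illustration of the content validity index mentioned above, here is a minimal sketch (all ratings are hypothetical): the item-level CVI (I-CVI) is the proportion of experts rating an item as relevant (3 or 4 on a 4-point scale), and the scale average (S-CVI/Ave) is the mean of the item-level values.

```python
def item_cvi(ratings):
    """I-CVI: proportion of experts rating the item 3 or 4 on a 4-point scale."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def scale_cvi_ave(all_item_ratings):
    """S-CVI/Ave: mean of the item-level CVIs."""
    cvis = [item_cvi(r) for r in all_item_ratings]
    return sum(cvis) / len(cvis)

# Five hypothetical experts rate three survey items for relevance
ratings = [
    [4, 4, 3, 4, 3],  # item 1: all 5 experts rate it relevant -> I-CVI = 1.0
    [4, 3, 2, 4, 3],  # item 2: 4 of 5 relevant -> I-CVI = 0.8
    [2, 2, 3, 4, 3],  # item 3: 3 of 5 relevant -> I-CVI = 0.6
]
print([item_cvi(r) for r in ratings])    # [1.0, 0.8, 0.6]
print(round(scale_cvi_ave(ratings), 2))  # 0.8
```

A common rule of thumb treats I-CVI values of 0.78 or higher (with three or more experts) as evidence of item relevance; the exact threshold used varies between studies.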
Criterion validity refers to how much the scores in a survey measure agree with a gold-standard that is considered an ideal measure of the constructs or variables. This approach is not always feasible, if there are no reliable measures for independent comparison ( Polit & Beck, 2020 ). It can be done in two ways, the first by calculating a correlation coefficient which tests the strength of the association (not agreement) between the results from a survey and an external independent measure. Secondly, sensitivity and specificity can be calculated, although there is usually a trade-off between the two ( Groves, 2009 ). Sensitivity measures the ability of a survey to detect all instances of its subject matter (true positives), while specificity measures the ability of a survey to discriminate all instances of its subject matter from those which are not related (true negatives). Both false negatives and false positive errors may occur, so the nature of the research and survey instrument should guide which type of error should be minimized as much as possible ( Dillman, 2014 ). The two types of criterion validity, concurrent and predictive validity, can also be measured. Concurrent validity compares survey questions or items to a related validated measure, both of which are assessed at the same time, whereas predictive validity compares survey items against some criterion measure at a later time ( Kim & Abraham, 2016 ). While validity testing can be time consuming, expensive, and require a significant amount of statistical expertise, it is a robust way to develop and improve the psychometric properties of surveys.
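The sensitivity and specificity calculations described above can be sketched with hypothetical counts from a screening survey compared against a gold-standard diagnosis:

```python
def sensitivity(tp, fn):
    # true positives / all actual positives (ability to detect true cases)
    return tp / (tp + fn)

def specificity(tn, fp):
    # true negatives / all actual negatives (ability to rule out non-cases)
    return tn / (tn + fp)

# 100 hypothetical respondents: 40 truly have the condition, 60 do not
tp, fn = 34, 6    # the survey flags 34 of the 40 true cases
tn, fp = 51, 9    # the survey clears 51 of the 60 non-cases
print(sensitivity(tp, fn))  # 0.85
print(specificity(tn, fp))  # 0.85
```

Raising the survey's cut-off score would typically trade sensitivity for specificity, which is the trade-off noted above.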
The other major concept used to evaluate the quality of surveys is reliability, which focuses on the consistency of a survey and its items, to ensure it would give the same results if repeated under the same conditions ( Rattray & Jones, 2007 ). The three kinds of reliability testing are: (1) test-retest, (2) inter-rater, and (3) internal consistency. Test-retest reliability looks at consistency of a measure across time and whether survey results from the same person were the same on at least two or more occasions. This can be measured using a number of statistical techniques, such as the intraclass correlation coefficient and the Wilcoxon signed rank test ( Lovén Wickman et al., 2019 ). Inter-rater reliability examines the consistency of a measure across raters or observers, to determine whether different observers score items in a survey in the same way. Cohen’s kappa ( Dancey et al., 2012 ) and the intra-class correlation coefficient ( Ryu et al., 2013 ) are common statistical measures for this. Finally, internal consistency is how consistently respondents answer items in a survey when pairs of questions measuring the same concept are asked in different ways; it can be calculated using Cronbach’s alpha ( Paans et al., 2010 ). Although reliability may be established and the survey results reproducible, this does not mean the results are valid; they may still be incorrect unless the instrument’s validity is also determined.
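A minimal sketch of Cronbach's alpha, the internal-consistency statistic mentioned above, using made-up item responses (rows are respondents, columns are survey items on a 5-point scale):

```python
import statistics

def cronbach_alpha(rows):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum(item variances) / variance of totals)."""
    k = len(rows[0])                                   # number of items
    item_vars = [statistics.variance(col) for col in zip(*rows)]
    total_var = statistics.variance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical responses: 5 respondents x 3 items
item_scores = [
    [4, 5, 4],
    [3, 3, 3],
    [5, 5, 4],
    [2, 3, 2],
    [4, 4, 5],
]
print(round(cronbach_alpha(item_scores), 2))  # 0.91
```

Values around 0.7 or higher are conventionally taken to indicate acceptable internal consistency, though the appropriate threshold depends on the use of the scale.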
Once a survey is designed and ethical approval is granted, it needs to be administered to the appropriate population. Self-completed surveys are the most common, as they can quickly and easily be given to a large population using online, electronic, or paper methods, which are affordable options. An interviewer-administered survey is an alternative, where questions are answered in the presence of a researcher, if sensitive topics need to be discussed, if vulnerable populations need to be reached, or if a survey is long and complex. A Cochrane review of interventions for administering postal and electronic questionnaires reported that several strategies, such as utilizing stamped addressed envelopes, financial incentives, and personalized communications, were effective in increasing the response rate ( Edwards et al., 2009 ), as low response rates can negatively impact the results of survey studies. After data are gathered, verified, cleaned, and anonymized, they need to be coded and analyzed using suitable methods. Epi Info™ ( https://www.cdc.gov/epiinfo/index.html ) is a popular tool for entering and storing survey data before exporting it to a statistical analysis package ( Kebede et al., 2017 ).
Finally, surveys are frequently published in scientific nursing journals. However, Sharma et al. (2021) highlighted the substantial variability and inconsistency in how research surveys are reported which can weaken the evidence base on a topic. They emphasized that despite two guidelines for reporting non-web and web-based surveys, SURGE ( Grimshaw, 2014 ) and CHERRIES ( Eysenbach, 2004 ), improvements in the reporting of survey research have not materialized and these tools have limitations. Hence, a new comprehensive survey reporting guideline called CROSS was developed to enhance the transparency and replicability of survey based health research ( Sharma et al., 2021 ). This new guideline should be used in nursing to enable survey studies to be better designed, conducted, and reported. By undertaking rigorous, high-quality surveys, researchers can advance nursing science, strengthen the evidence base on which nurses practice, and help make a positive impact on patient care and health service delivery.
Siobhan O’Connor , BSc, CIMA CBA, BSc, RGN, PhD, is a Lecturer at the School of Nursing and Midwifery, National University of Ireland Galway, Ireland. She has a multidisciplinary background in both nursing and information systems. Hence, her research focuses on the design, implementation, and use of a range of technologies in healthcare.
Author Contributions: The sole author drafted and wrote the manuscript.
Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.
SANDS—Semi-Automated Non-response Detection for Surveys—is an open-access AI tool developed by the National Center for Health Statistics (NCHS). It helps researchers and survey administrators detect responses that may not or do not answer the question (non-responses) in open-ended survey text. The model helps human reviewers to quickly divide a large volume of text for manual review.
To use SANDS, follow the model card or the detailed instructions in the Getting Started section.
Before applying the model to real data, review these sections:
This model is a fine-tuned version of the supervised SimCSE BERT base uncased model . It was introduced at the American Association of Public Opinion Research (AAPOR) 2022 Annual Conference .
The model is uncased, so it treats important , Important , and ImPoRtAnT the same.
Parent Model: For more details about SimCSE, we encourage users to visit the SimCSE Github repository , and the base model on HuggingFace. The model was fine-tuned on 3,000 labeled, open-ended responses from Rounds 1 and 2 of the NCHS Research and Development Survey (RANDS) during COVID-19. The base SimCSE BERT model was trained on BookCorpus and English Wikipedia.
To use SANDS, first install Python. Using a package manager, install torch, pandas, and the transformers module:
> pip install torch transformers pandas
Once you’ve installed the modules, the following code illustrates how to download the model, and parse a fixed set of responses:
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import pandas as pd

# Load the model
model_location = "NCHS/SANDS"
model = AutoModelForSequenceClassification.from_pretrained(model_location)
tokenizer = AutoTokenizer.from_pretrained(model_location)

# Create example responses to test
responses = [
    "sdfsdfa",
    "idkkkkk",
    "Because you asked",
    "I am a cucumber",
    "My job went remote and I needed to take care of my kids",
]

# Run the model and compute a score for each response
with torch.no_grad():
    tokens = tokenizer(responses, padding=True, truncation=True, return_tensors="pt")
    output = model(**tokens)
    scores = torch.softmax(output.logits, dim=1).numpy()

# Display the scores in a table
columns = ["Gibberish", "Uncertainty", "Refusal", "High-risk", "Valid"]
df = pd.DataFrame(scores, columns=columns)
df.index.name = "Response"
print(df)
The code should output the following table:

Response | Gibberish | Uncertainty | Refusal | High-risk | Valid
sdfsdfa | 0.998 | 0.000 | 0.000 | 0.000 | 0.000
idkkkkk | 0.002 | 0.995 | 0.001 | 0.001 | 0.001
Because you asked | 0.001 | 0.001 | 0.976 | 0.006 | 0.014
I am a cucumber | 0.001 | 0.001 | 0.002 | 0.797 | 0.178
My job went remote and I needed to take care of my kids | 0.000 | 0.000 | 0.000 | 0.000 | 1.000
This model is to be used on survey responses for data cleaning. When analyzing data, researchers can use SANDS to filter out non-responses. The model returns a score for each response in five categories: Gibberish, Uncertainty, Refusal, High-risk, and Valid.
The model has been trained specifically to identify survey non-response when the survey respondent has given an open-ended response, but their answer does not address the question or provide meaningful insight. Examples of these types of responses include "meow," "ksdhfkshgk," or "idk."
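Once scores like those in the table above are available, a typical cleaning step keeps only the responses whose top-scoring category is Valid. The snippet below is an illustrative sketch using hard-coded scores copied from the example table, not live model output:

```python
categories = ["Gibberish", "Uncertainty", "Refusal", "High-risk", "Valid"]

# Scores copied from the example output table above (a subset of responses)
scored = {
    "sdfsdfa": [0.998, 0.000, 0.000, 0.000, 0.000],
    "Because you asked": [0.001, 0.001, 0.976, 0.006, 0.014],
    "My job went remote and I needed to take care of my kids":
        [0.000, 0.000, 0.000, 0.000, 1.000],
}

def predicted_category(scores):
    # The category with the highest softmax probability
    return categories[scores.index(max(scores))]

# Keep only responses classified as Valid; drop the non-responses
valid_responses = [text for text, s in scored.items()
                   if predicted_category(s) == "Valid"]
print(valid_responses)  # only the remote-work response survives
```

In practice a human reviewer would still spot-check the discarded responses, since the model is intended to triage text for manual review rather than replace it.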
The model was fine-tuned on 3,000 labeled, open-ended responses to web probes on questions relating to the COVID-19 pandemic. These responses were gathered from NCHS's Research and Development Survey .
Web probes are questions designed to draw out information about how respondents understand, think about, and respond to the questions that are being evaluated. They are different from traditional open-ended survey questions. The labeled responses were limited in focus to COVID-19 and health topics, so performance may drop for responses outside this scope.
The model was trained on responses from both web- and phone-based open-ended probes. There may be limitations in model effectiveness with more traditional open-ended survey questions or with responses provided in other mediums.
This model does not assess the factual accuracy of responses or filter out responses with different demographic biases. It was not trained to verify facts about people or events, so using the model for such classification is out of scope.
We did not train the model to recognize non-response in any language other than English. Responses in languages other than English are out of scope and the model will perform poorly. Any correct classifications are a result of the base SimCSE or Bert Models.
To investigate if there were differences between demographic groups on sensitivity and specificity, we conducted two-tailed Z-tests across demographic groups. These included:
There were 4,813 responses to 3 probes. To control for family-wise error rate, we applied the Bonferroni correction to the alpha level (α < 0.00167).
There were statistically significant differences in specificity between education levels, mode, and White and Black respondents. There were no statistically significant differences in sensitivity.
Respondents with some college or less had lower specificity compared to those with more education (0.73 versus 0.80, p < 0.0001). Respondents who used a smartphone or computer to complete their survey had a higher specificity than those who completed the survey over the telephone (0.77 versus 0.70, p < 0.0001). Black respondents had a lower specificity than White respondents (0.65 versus 0.78, p < 0.0001). Effect sizes for education and mode were small (h = 0.17 and h = 0.16, respectively). The effect size for race was between small and medium (h = 0.28).
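The effect sizes quoted above are Cohen's h, the difference between arcsine-transformed proportions. A small sketch reproducing the education and mode values from the reported specificities:

```python
import math

def cohens_h(p1, p2):
    # Cohen's h: |2*arcsin(sqrt(p1)) - 2*arcsin(sqrt(p2))|
    return abs(2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2)))

# Specificities reported above for each pair of groups
print(round(cohens_h(0.80, 0.73), 2))  # 0.17 (education: small effect)
print(round(cohens_h(0.77, 0.70), 2))  # 0.16 (mode: small effect)
```

By convention, h around 0.2 is a small effect and around 0.5 a medium effect, which matches how the differences are characterized above.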
Because the model was fine-tuned from SimCSE, itself fine-tuned from BERT, it will reproduce all biases inherent in these base models. Due to tokenization, the model may incorrectly classify typos, especially in acronyms. For example: LGBTQ is valid, while LBGTQ is classified as gibberish.
Model and code are released as open source under the Creative Commons Universal Public Domain dedication. That includes source files and code samples, if any, in the content. This means you can use the code, model, and content in this repository in your own projects, except for any official trademarks.
Open-source projects are made available and contributed to under licenses that include terms that—for the protection of contributors—make clear that the projects are offered—
This model is no different. The open content license it is offered under includes such terms.
BMC Health Services Research, volume 24, Article number: 1092 (2024)
The shift towards person-centred care has become integral in achieving high-quality healthcare, focusing on individual patient needs, preferences, and values. However, existing instruments for measuring person-centred practice often lack theoretical underpinnings and comprehensive assessment. The Person-centred Practice Inventory – Staff (PCPI-S) and the Person-centred Practice Inventory – Care (PCPI-C) were developed in English to measure clinicians’ and patients’ experience of person-centred practice. The aim of this study was to investigate the psychometric properties of the French version of the PCPI-S and PCPI-C.
A multi-centred cross-sectional study was conducted in six hospitals in French-speaking Switzerland. Construct validity of the PCPI-S and the PCPI-C was evaluated by using confirmatory factor analysis and McDonald’s Omega coefficient was used to determine the internal consistency.
A sample of 558 healthcare professionals and 510 patients participated in the surveys. Psychometric analyses revealed positive item scores and acceptable factor loadings, demonstrating the meaningful contribution of each item to the measurement model. The Omega coefficient indicated acceptable to excellent internal consistency for the constructs. Model fit statistics demonstrated good model fit for the PCPI-S and PCPI-C.
The findings support the construct validity and internal consistency of the PCPI-S and PCPI-C in assessing person-centred practice among healthcare professionals and patients in French-speaking Switzerland. This validation offers valuable tools for evaluating person-centred care in hospital settings.
Person-centred care is an approach to healthcare that prioritises the individual needs, preferences, and values of the patient [ 1 ]. This approach recognises the fundamental role of patients as active participants in their own care, emphasizes the genuine relationship between patients and health professionals and acknowledges the context in which the care is delivered [ 2 ]. The shift towards person-centred care has gained momentum over the past few decades and become essential for achieving high-quality healthcare [ 3 ]. Person-centred care is of particular interest to politicians, researchers, and clinicians, as it is associated with improved clinical outcomes [ 4 , 5 ], patient satisfaction [ 4 , 6 , 7 ], work environment factors [ 8 ] and economic outcomes [ 9 , 10 ]. Person-centred care has been implemented across various healthcare settings, including primary care, long-term care and acute care facilities [ 11 , 12 ].
The Person-centred Practice Framework (PCPF) was developed by McCormack and McCance to support healthcare professionals to understand the dimensions of person-centredness and how to implement person-centred care in clinical practice. The PCP Framework comprises five interrelated domains: macro-context, prerequisites, care environment, person‐centred processes, and person‐centred outcomes. The macro-context domain refers to broader societal, cultural, and policy-related factors that influence healthcare practices. The prerequisites domain emphasises the essential organisational and practice-level elements required to support person-centred care. The care environment domain centres on the physical and emotional context in which care is provided. The person-centred processes domain highlights the importance of effective communication, engagement, and collaborative decision-making between patients and healthcare providers, fostering meaningful partnerships in care. Finally, the person-centred outcomes domain focuses on the positive impacts of person-centred care on patients [ 1 ].
Evaluation of person-centred practice is essential for identifying areas for improvement and monitoring its effective implementation within healthcare organisations [ 13 ]. Measurement tools can provide a standardised approach to assess the extent to which care aligns with person-centred principles and to support healthcare professionals in enhancing quality-of-care delivery and tailoring services to meet individual needs [ 14 ]. However, most of the available instruments measuring person-centred practice lack theoretical underpinnings or fail to assess the various aspects of person-centred care comprehensively [ 14 , 15 ]. To address the need to demonstrate the value of person-centred care, the PCPF has guided the development of measurement tools. The Person-Centred Practice Inventory – Staff (PCPI-S) developed by Slater et al. and the Person-Centred Practice Inventory – Care (PCPI-C) are aligned with key dimensions of the PCPF, including prerequisites, care environment, and person-centred processes [ 1 , 16 , 17 ]. The psychometric properties of the original version of the PCPI-S are acceptable (root mean square error of approximation (RMSEA) = 0.053, comparative fit index (CFI) = 0.951) with reference to the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) criteria: CFI > 0.95, RMSEA < 0.06, standardised root mean residual (SRMR) < 0.08 [ 16 , 18 ]. The PCPI-S was designed for and tested with healthcare staff across all healthcare settings [ 16 , 19 – 24 ]. The instrument was developed in English [ 9 ] and then translated into Swiss German, German, Austrian, Norwegian, Malaysian, Spanish and Portuguese [ 19 – 25 ]. The psychometric properties of the original version of the PCPI-C have not yet been published. By capturing the perspectives of both healthcare professionals and patients, the PCPI-S and the PCPI-C provide a comprehensive assessment of person-centred care [ 16 ].
Validation efforts are required to determine whether the PCPI-S and the PCPI-C translated into French provide valid measures of person-centred practice [ 16 ].
The aim of this study was to evaluate the construct validity and internal consistency of the PCPI-S and the PCPI-C among health care staff and patients in the French-speaking part of Switzerland.
This multi-centred cross-sectional study was conducted between March and August 2022. We invited Chief Nursing Officers (CNOs) of major public hospitals in the French-speaking part of Switzerland to participate. Out of those contacted, six hospitals agreed to take part in the study. Following this initial outreach, the project was introduced to the departments selected by the CNO. Subsequently, the unit participation was determined by the management teams. Notably, there were no specific criteria for the selection of units, as the PCPI-S and PCPI-C were intended for use across various healthcare settings and by professionals of different disciplines. Participating study sites included medical and surgical units, obstetrics/gynaecology/maternity, oncology, rehabilitation and geriatrics, neurology, outpatient care, and psychiatry.
The PCPI-S and the PCPI-C were translated into French prior to this study by using principles of good practice for the translation and cultural adaptation of patient-reported outcome measures [ 26 ]. Two nurses with a master’s degree independently translated the PCPI-S and PCPI-C into French and then conferred to reach consensus on the provisional forward translation. Two other translators, blind to the source-language scales, then performed the back translation. Finally, a consensus was reached within the translation team.
The PCPI-S consists of 17 dimensions with 59 items about the three domains of the theoretical framework: prerequisites, care environment, and person-centred process. The prerequisites include five constructs: being professionally competent (Q1-Q3), developing interpersonal skills (Q4-Q7), showing commitment to work (Q8-Q12), knowing oneself (Q13-Q15), and being able to clearly demonstrate one’s beliefs and values (Q16-Q18) [ 27 ]. The care environment comprises seven constructs: appropriate skill mix (Q19-Q21), shared decision-making system (Q22-Q25), effective relationships between team members (Q26-Q28), power sharing (Q29-Q32), potential for innovation and risk-taking (Q33-Q35), physical environment (Q36-Q38), and supportive organisational system (Q39-Q43) [ 27 ]. The person-centred processes have five constructs: working with the patient’s beliefs and values (Q44-Q47), shared decision-making (Q48-Q50), authentic engagement in the relationship (Q51-Q53), being present with caring (Q54-Q56), and working holistically with the whole person (Q57-Q59).
Items are scored on a 5-point Likert scale ranging from 1 (“strongly disagree”) to 5 (“strongly agree”). The score for each construct is obtained by averaging the total items in the construct. The total score is obtained by averaging the scores of the constructs. Pearson’s correlation coefficient is used to calculate the correlations between the three main domains of the PCPI-S (prerequisites, care environment, and person-centred process).
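The scoring rule just described (each construct score is the mean of its items; the total score is the mean of the construct scores) can be sketched as follows, using hypothetical answers for a subset of the PCPI-S constructs:

```python
def mean(xs):
    return sum(xs) / len(xs)

# One hypothetical respondent's 5-point Likert answers, grouped by construct
# (an illustrative subset of the prerequisites domain, not real data)
constructs = {
    "being professionally competent": [4, 5, 4],  # Q1-Q3
    "developing interpersonal skills": [5, 4, 4, 5],  # Q4-Q7
    "showing commitment to work": [3, 4, 4, 3, 4],  # Q8-Q12
}

# Construct score = mean of its items; total score = mean of construct scores
construct_scores = {name: mean(items) for name, items in constructs.items()}
total_score = mean(list(construct_scores.values()))

print(construct_scores)
print(round(total_score, 2))  # 4.14
```

Note that averaging the construct scores (rather than all items pooled) weights each construct equally regardless of how many items it contains.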
The PCPI-C comprises 18 items aimed at evaluating patients’ agreement levels with statements regarding the person-centred process dimensions described in the PCPF. The PCPI-C comprises five constructs: working with the person’s beliefs and values (Q1-14-7-6), sharing decision-making (Q3-17-20-10), engaging authentically (Q12-18-9), being sympathetically present (Q16-5-2), and working holistically (Q15-8-4-19). The PCPI-C uses a 5-point Likert scale ranging from 1 (“strongly disagree”) to 5 (“strongly agree”). The score for each construct is obtained by averaging the scores of the items in the construct. The total score is obtained by averaging the scores of the constructs.
The following characteristics were collected from healthcare staff: gender, age, profession, level of training, additional training, years of experience, care unit, activity rate, and years of experience in the current unit. Patients’ characteristics were retrieved from electronic health records and included gender, age, length of hospital stay at the time of completing the PCPI-C, and whether patients were in a single or shared room, as this could influence their perception of the care environment.
All healthcare staff members from the participating units who were directly involved in patient care were invited to complete the PCPI-S. A sample of patient participants was recruited on a voluntary basis from the participating units. Inclusion criteria for patients were being 18 years or older, proficient in reading and understanding French, and deemed cognitively capable by the healthcare team of completing the PCPI-C. The target sample size was 600 healthcare staff members and 200 patients to meet the criteria defined by COSMIN [18].
An email containing the URL to access the online PCPI-S was sent to healthcare staff members within the participating units. A data collection day was organized at each participating unit in the six hospitals. During this day, eligible patients were identified by the healthcare team. The study’s purpose and questionnaire were orally explained to the participants by the researcher. For participants capable of completing it independently, the PCPI-C paper questionnaire was provided and collected after completion at the end of the day. For patients who were unable to complete the questionnaire due to visual or motor impairments, the researcher either assisted in reading the questionnaire or provided physical support. The researcher paid careful attention to reading the questionnaire faithfully and avoiding influencing the participants’ responses.
Descriptive statistical analyses of the instruments and participants’ characteristics were performed by calculating mean and standard deviation.
For assessing psychometric properties, confirmatory factor analysis (CFA) was performed based on the structure of the PCPF theoretical framework. The parameters of the structural equation model were estimated using the maximum likelihood method. Missing data were retained in the analyses, and the maximum likelihood with missing values (MLMV) estimator was used in the structural equation modelling. The internal consistency of the instruments was determined using McDonald’s omega coefficient, which can be judged acceptable above 0.70. Model fit was assessed using three fit indices and their goodness-of-fit criteria: root mean square error of approximation (RMSEA) (< 0.08), comparative fit index (CFI) (> 0.90), and standardized root mean square residual (SRMR) (< 0.08). At least one of these criteria should be met to support construct validity [28]; model fit is further supported when the upper bound of the 90% confidence interval of the RMSEA, a non-centrality index, remains acceptable and when the parsimony index, the Akaike information criterion, is lowest among competing models [29]. Analyses were performed using Stata/IC software, version 17 [30].
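McDonald's omega mentioned above can be computed from the standardized factor loadings of a CFA solution. A minimal sketch of the standard single-factor formula follows; the loadings are illustrative, not the study's estimates.

```python
# Hedged sketch of McDonald's omega for a single factor from standardized
# CFA loadings: omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of
# error variances), where each error variance is 1 - loading^2.
def mcdonalds_omega(loadings):
    """Composite reliability of one factor given its standardized loadings."""
    lam_sum = sum(loadings)
    error_var = sum(1 - lam ** 2 for lam in loadings)
    return lam_sum ** 2 / (lam_sum ** 2 + error_var)
```

With loadings of 0.6 to 0.8, typical of the ranges reported later in the results, omega lands around the 0.8 mark, consistent with the acceptability threshold of 0.70 stated above.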
The study was submitted and approved by the ethics committee of the canton of Vaud (CER-VD 2020 − 01562). All participants were informed about the study and gave consent to participate.
A total sample of 558 healthcare staff members completed the PCPI-S. They were predominantly women (85%) and worked as nurses (62%). Most staff members worked in medical (33%) and surgical wards (15%). Patient participants (n = 510) were 70 years old on average, and women accounted for half of the sample (51%). The mean length of stay when completing the PCPI-C was 11 days. Patients were mostly hospitalised in medical (36%) and surgical wards (27%) (Table 1).
All items of the PCPI-S and PCPI-C received positive scores, with mean scores ranging from 2.49 to 4.54. In the patient sample, missing responses ranged from 4 (0.8%) for questions 1 and 3 to 14 (2.8%) for questions 19 and 20. In the caregiver sample, missing responses ranged from 1 (0.2%) for questions 2 to 6 up to 68 (15%) for questions 44 to 59. Pearson’s correlation coefficient indicates statistically significant positive correlations between the three main domains of the PCPI-S: prerequisites and care environment (r = 0.57, p < 0.01), prerequisites and person-centred process (r = 0.72, p < 0.01), and care environment and person-centred process (r = 0.49, p < 0.01). Factor loadings ranged from 0.35 to 0.89, with the majority exceeding 0.5. Notably, all factor loadings were statistically significant (standard error < 0.9; p < 0.01) and made meaningful contributions to the measurement model. As a result, these items were retained in the analysis [31]. Detailed factor loadings are presented in additional files 1 and 2.
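The domain correlations reported above use Pearson's r. For reference, the coefficient can be sketched in a few lines of pure Python; the inputs would be per-respondent domain scores, and this generic implementation is illustrative, not the study's Stata code.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A value of 0.72, as between prerequisites and person-centred process, indicates a strong positive linear association between the two domain scores.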
In the case of the PCPI-S, the omega coefficients for each domain were deemed acceptable, ranging from 0.87 for the prerequisites factor to 0.93 for person-centred processes. Regarding the PCPI-C, the omega coefficients for each construct ranged from 0.64 for the engaging authentically factor to 0.74 for patient beliefs and values, while the omega coefficient for the person-centred processes domain as a whole was excellent (0.92). The omega coefficients for each factor are detailed in additional files 1 and 2.
The model fit statistics of the three constructs indicated a good model fit, with an RMSEA close to 0.06, an upper bound of the 90% confidence interval below 0.09, a CFI of 0.90 or higher, and an SRMR of less than 0.08. The detailed scores are set out in Table 2.
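The fit criteria stated in the methods (RMSEA < 0.08, CFI > 0.90, SRMR < 0.08, with at least one criterion required to support construct validity) can be expressed as a small decision check. This is a hedged sketch with function names of our own invention, not code from the study.

```python
# Hedged sketch of the goodness-of-fit decision rule from the methods section.
def fit_criteria(rmsea: float, cfi: float, srmr: float) -> dict:
    """Evaluate each fit index against its stated threshold."""
    return {"RMSEA": rmsea < 0.08, "CFI": cfi > 0.90, "SRMR": srmr < 0.08}

def supports_validity(rmsea: float, cfi: float, srmr: float) -> bool:
    """Construct validity is supported if at least one criterion is met."""
    return any(fit_criteria(rmsea, cfi, srmr).values())
```

For the values reported above (RMSEA ≈ 0.06, CFI ≥ 0.90, SRMR < 0.08), all three criteria are met.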
The results of the psychometric analysis of the PCPI-S demonstrate good construct validity and internal consistency, thereby confirming the underlying principles of the theoretical PCP Framework. The model fit statistics consistently indicate a good fit for the three constructs within the PCPI-S. The PCPI-C demonstrates a reasonable to acceptable fit, indicating that while the model is not perfect, it is sufficiently robust for practical applications.
Examining the psychometric properties across different linguistic versions of the PCPI-S provides valuable insights into the instrument’s reliability across diverse cultural and linguistic contexts. In the present study, the omega coefficient values for the PCPI-S and PCPI-C were consistently above 0.70, indicating robust internal consistency. The results for the PCPI-S are in line with previous research conducted in Swiss German, Austrian, Norwegian, Malaysian and Portuguese studies, which reported high Cronbach’s alpha scores (α > 0.70) [19-23, 25]. These findings confirm the instrument’s strong internal consistency when measuring person-centred care constructs.
Regarding the RMSEA values, the PCPI-S versions in the Swiss German, German, Norwegian, and Malaysian studies consistently indicated a good model fit, with RMSEA values ranging from 0.041 to 0.078 [19-22], all close to 0.06. In the French version of the PCPI-S, RMSEA values ranged from 0.000 to 0.078. Although the RMSEA for prerequisites and the care environment was slightly higher in the French version than in the Swiss German and Norwegian versions, the RMSEA for person-centred processes was notably lower, suggesting a good fit for this construct in the present study.
The CFI values for the PCPI-S were generally above 0.90 across different linguistic versions, supporting the instrument’s construct validity and internal consistency. In the present study, the CFI values ranged from 0.85 for the care environment to 1.00 for person-centred processes, indicating an excellent fit for this construct. However, the slightly lower CFI for the care environment was consistent with findings in other studies.
The variations observed in different studies across languages may be attributed to linguistic nuances, cultural differences, or contextual factors specific to each linguistic group. These differences highlight the importance of adapting the instruments to the cultural and linguistic context in which they are used, emphasizing the ongoing need for validation and adaptation efforts. The findings from translations into French, Swiss German, German, Norwegian, and Malaysian languages collectively underscore the robustness and adaptability of the PCPI as a tool for assessing person-centred practice in diverse cultural contexts. The consistently high Cronbach’s alpha scores, meaningful factor loadings, and favourable GFIs in these translations suggest that the PCPI maintains its internal consistency and construct validity when applied in different linguistic and cultural settings.
The PCPI has demonstrated strong internal consistency and good model fit across different linguistic versions. While the French-translated PCPI-S shows promising construct validity, its length may pose a challenge for widespread clinical adoption. Considering the time constraints frequently encountered in healthcare settings, there is a need for future research to design a shorter yet psychometrically robust version of the scale. This would enable quicker and more efficient assessments of patient-centred care without compromising measurement quality.
The availability of instruments aligned with a theoretical person-centred framework provides healthcare staff with a standardised measure to evaluate the degree of alignment with person-centred principles in care delivery. Consistent use of the PCPI-S and PCPI-C enables healthcare staff to collectively identify areas that require improvement, thereby fostering a continuous quality improvement process. Furthermore, insights gained from the PCPI-S and PCPI-C could inform the development of training programs aimed at enhancing person-centred care competencies among healthcare professionals.
The large participation of both healthcare staff members and patients from six hospitals and multiple clinical settings enhances the generalisability of the results and confidence in the findings. Nonetheless, certain limitations should be acknowledged. The sample predominantly comprised nurses, and the relative homogeneity of participants’ responses could suggest limited familiarity with person-centred principles and the PCPF among both professionals and patients. While this study used CFA for psychometric analysis, further psychometric validation of the French PCPI-S should include additional analyses, such as a test-retest procedure and a concurrent validity assessment. Finally, as professionals and patients participated on a voluntary basis, we cannot exclude potential selection and desirability biases.
The psychometric analysis conducted in this study indicates high construct validity and internal consistency for the French translation of both the PCPI-S and the PCPI-C. The results presented in this article will enable international comparative studies and support the further development of person-centred care in French-speaking clinical settings.
The datasets used and analysed during the current study are available on www.zenodo.org. DOI: 10.5281/zenodo.10849449.
PCPI-S: Person-centred Practice Inventory – Staff
PCPI-C: Person-centred Practice Inventory – Care
PCPF: Person-centred Practice Framework
COSMIN: Consensus-based Standards for the selection of health Measurement Instruments
RMSEA: Root mean square error of approximation
SRMR: Standardised root mean square residual
CFI: Comparative fit index
GFI: Goodness-of-fit index
CI: Confidence interval
McCormack B, McCance T, Bulley C, Brown D, McMillan A, Martin S. Fundamentals of person-centred healthcare practice. 1st ed. Wiley Blackwell; 2021.
Kitson A, Marshall A, Bassett K, Zeitz K. What are the core elements of patient-centred care? A narrative review and synthesis of the literature from health policy, medicine and nursing. J Adv Nurs. 2013;69:4–15.
McCormack B, Borg M, Cardiff S, Dewing J, Jacobs G, Janes N, et al. Person-centredness – the ‘state’ of the art. Int Pract Dev J. 2015;5.
Rathert C, Wyrwich MD, Boren SA. Patient-centered care and outcomes: a systematic review of the literature. Med Care Res Rev. 2013;70:351–79. https://doi.org/10.1177/1077558712465774.
Olsson LE, Jakobsson Ung E, Swedberg K, Ekman I. Efficacy of person-centred care as an intervention in controlled trials – a systematic review. J Clin Nurs. 2013;22:456–65. https://doi.org/10.1111/jocn.12039 .
McMillan SS, Kendall E, Sav A, King MA, Whitty JA, Kelly F, et al. Patient-centered approaches to health care: a systematic review of randomized controlled trials. Med Care Res Rev. 2013;70:567–96. https://doi.org/10.1177/1077558713496318 .
Sidani S. Effects of patient-centered care on patient outcomes: an evaluation. Res Theory Nurs Pract. 2008;22:24–37.
Bachnick S, Ausserhofer D, Baernholdt M, Simon M. Patient-centered care, nurse work environment and implicit rationing of nursing care in Swiss acute care hospitals: a cross-sectional multi-center study. Int J Nurs Stud. 2018;81:98–106. https://doi.org/10.1016/j.ijnurstu.2017.11.007 .
Stone S. A retrospective evaluation of the impact of the Planetree patient-centered model of care on inpatient quality outcomes. HERD: Health Environments Research & Design Journal. 2008;1:55–69. https://doi.org/10.1177/193758670800100406.
Almalki ZS, Alotaibi AA, Alzaidi WS, Alghamdi AA, Bahowirth AM, Alsalamah NM. Economic benefits of implementing patient-centered medical home among patients with hypertension. Clinicoecon Outcomes Res. 2018;10:665–73. https://doi.org/10.2147/ceor.S179337 .
Park M, Giap TT, Lee M, Jeong H, Jeong M, Go Y. Patient- and family-centered care interventions for improving the quality of health care: a review of systematic reviews. Int J Nurs Stud. 2018;87:69–83. https://doi.org/10.1016/j.ijnurstu.2018.07.006 .
Janerka C, Leslie GD, Gill FJ. Development of patient-centred care in acute hospital settings: a metanarrative review. Int J Nurs Stud. 2023;140:104465. https://doi.org/10.1016/j.ijnurstu.2023.104465 .
McCormack B. Person-centred care and measurement: the more one sees, the better one knows where to look. J Health Serv Res Pol. 2022;27:85–7. https://doi.org/10.1177/13558196211071041 .
Edvardsson D, Innes A. Measuring person-centered care: a critical comparative review of published tools. Gerontologist. 2010;50:834–46. https://doi.org/10.1093/geront/gnq047 .
McCormack B. Person-centred care and measurement: the more one sees, the better one knows where to look. J Health Serv Res Policy. 2022;27:85–7. https://doi.org/10.1177/13558196211071041 .
Slater P, McCance T, McCormack B. The development and testing of the person-centred practice inventory – staff (PCPI-S). Int J Qual Health Care. 2017;29:541–7. https://doi.org/10.1093/intqhc/mzx066 .
Person-centred Practice International Community of Practice (PCP-ICoP). Version 1; 2024. [Available from: http://www.pcp-icop.org ].
Mokkink LB, Terwee CB, Patrick DL, Alonso J, Stratford PW, Knol DL, et al. The COSMIN study reached international consensus on taxonomy, terminology, and definitions of measurement properties for health-related patient-reported outcomes. J clin epidemiol. 2010;63:737–45. https://doi.org/10.1016/j.jclinepi.2010.02.006 .
von Dach C, Schlup N, Gschwenter S, McCormack B. German translation, cultural adaptation and validation of the person-centred practice inventory—staff (PCPI-S). BMC Health Serv Res. 2023;23:458. https://doi.org/10.1186/s12913-023-09483-8 .
Bing-Jonsson PC, Slater P, McCormack B, Fagerström L. Norwegian translation, cultural adaption and testing of the Person-centred Practice Inventory – Staff (PCPI-S). BMC Health Services Research. 2018;18:555. https://doi.org/10.1186/s12913-018-3374-5 .
Weis MLD, Wallner M, Köck-Hódi S, Hildebrandt C, McCormack B, Mayer H. German translation, cultural adaptation and testing of the person-centred practice inventory - staff (PCPI-S). Nurs Open. 2020;7:1400–11. https://doi.org/10.1002/nop2.511 .
Balqis-Ali NZ, Saw PS, Jailani AS, Fun WH, Mohd Saleh N, Tengku Bahanuddin TPZ, et al. Cross-cultural adaptation and exploratory factor analysis of the person-centred practice inventory - staff (PCPI-S) questionnaire among Malaysian primary healthcare providers. BMC Health Serv Res. 2021;21:32. https://doi.org/10.1186/s12913-020-06012-9 .
Balqis-Ali NZ, Saw PS, Anis-Syakira J, Fun WH, Sararaks S, Lee SWH, et al. Healthcare provider personcentred practice: relationships between prerequisites, care environment and care processes using structural equation modelling. BMC Health Serv Res. 2022;22:576. https://doi.org/10.1186/s12913-022-07917-3 .
Errasti-Ibarrondo B, La Rosa-Salas V, Lizarbe-Chocarro M, Gavela-Ramos Y, Choperena A, Arbea Moreno L, et al. [Translation and transcultural adaptation of the Person-Centred Practice Inventory Staff (PCPI-S) for health professionals in Spain]. An Sist Sanit Navar. 2023;46. https://doi.org/10.23938/assn.1039 .
Ventura F, Costa P, Chaplin J, Domingues I, Jorge de Oliveira Ferreira R, McCormack B, et al. Portuguese translation, cultural adaptation, and validation of the Person-Centred Practice Inventory - Staff (PCPI-S). Ciência Saûde Coletiva. 2023;28. https://doi.org/10.1590/1413-812320232811.17072022 .
Wild D, Grove A, Martin M, Eremenco S, McElroy S, Verjee-Lorenz A, et al. Principles of good practice for the translation and cultural adaptation process for patient-reported outcomes (PRO) Measures: Report of the ISPOR Task Force for Translation and Cultural Adaptation. Value in Health. 2005;8:94–104. https://doi.org/10.1111/j.1524-4733.2005.04054.x .
Slater P, McCormack B, McCance T. The person centred practice inventory – staff score sheet.
Prinsen CAC, Mokkink LB, Bouter LM, Alonso J, Patrick DL, de Vet HCW, et al. COSMIN guideline for systematic reviews of patient-reported outcome measures. Qual Life Res. 2018;27:1147–57. https://doi.org/10.1007/s11136-018-1798-3 .
Hoe SL. Issues and procedures in adopting structural equation modelling technique. J Appl Quant Methods. 2008;3:76–83.
StataCorp. Stata Statistical Software: Release 17. College Station, TX: StataCorp LLC; 2021.
Weiber R, Sarstedt M. Strukturgleichungsmodellierung. Eine anwendungsorientierte Einführung in die Kausalanalyse mit Hilfe von AMOS, SmartPLS und SPSS: Springer Gabler; 2021.
The authors would like to thank all the healthcare professionals and patients who gave their time to participate in this study.
Not applicable.
Open access funding provided by University of Lausanne
Authors and affiliations.
Institute of Higher Education and Research in Healthcare (IUFRS), Lausanne University Hospital, University of Lausanne, Lausanne, Switzerland
Cedric Mabire & Joanie Pellet
Geneva Institution for Home Care and Assistance (IMAD), Geneva, Switzerland
Marie Piccot-Crezollet
Sydney Nursing School, Faculty of Medicine and Health, University of Sydney, Camperdown, NSW, Australia
Vaibhav Tyagi & Brendan McCormack
Data collection: MP-C collected data from participants and coordinated the study on hospital sites. Data analysis and interpretation: CM conducted the psychometric analysis. Drafting the article: CM, MP-C, and JP drafted the manuscript. Critical revision of the manuscript: VT and BM. All authors read and approved the final manuscript.
Correspondence to Cedric Mabire .
Ethics approval and consent to participate, consent for publication, competing interests.
The authors declare no competing interests.
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Cite this article.
Mabire, C., Piccot-Crezollet, M., Tyagi, V. et al. Structural validation of two person-centred practice inventories PCPI-S and PCPI-C - French version. BMC Health Serv Res 24 , 1092 (2024). https://doi.org/10.1186/s12913-024-11432-y
Received : 08 December 2023
Accepted : 13 August 2024
Published : 18 September 2024
DOI : https://doi.org/10.1186/s12913-024-11432-y
ISSN: 1472-6963
The aim of this study is to provide Chinese utility weights for the European Organization for Research and Treatment of Cancer Quality of Life Utility Measure-Core 10 Dimensions (EORTC QLU-C10D), a preference-based cancer-specific utility instrument derived from the EORTC QLQ-C30.
We conducted an online survey of the general population in China, with quota sampling for age and gender. Each respondent was asked to complete a discrete choice experimental survey consisting of 16 randomly selected choice sets. The conditional logit model and mixed logit model were used to analyze respondents’ preferences, and the goodness of fit of the model was tested.
A total of 2003 respondents were included in the analysis. Utility decrements within dimensions were typically monotonic. Monotonicity inconsistencies in the Fatigue, Sleep, and Nausea dimensions were resolved by monotonicity correction. Physical functioning, Pain, and Role functioning were associated with the greatest utility weights, and the smallest decrements were in Bowel problems and Emotional functioning. The utility value for the worst health state was 0.083, i.e. slightly higher than the value for being dead.
This study provides the first China-specific value set for the QLU-C10D based on the societal preferences of the Chinese adult general population. The value set can be used as a cancer-specific scoring system for economic evaluations of new oncology therapies and technologies in China.
Liu, G. G., Guan, H., Jin, X., Zhang, H., Vortherms, S. A., & Wu, H. (2022). Rural population’s preferences matter: A value set for the EQ-5D-3L health states for China’s rural population. Health and Quality of Life Outcomes , 20 (1), 14. https://doi.org/10.1186/s12955-022-01917-x
Yang, Z., Jiang, J., Wang, P., Jin, X., Wu, J., Fang, Y., Feng, D., Xi, X., Li, S., Jing, M., Zheng, B., Huang, W., & Luo, N. (2022). Estimating an EQ-5D-Y-3L value set for China. Pharmacoeconomics , 40 (Suppl 2), 147–155. https://doi.org/10.1007/s40273-022-01216-9
Wu, J., Xie, S., He, X., Chen, G., Bai, G., Feng, D., Hu, M., Jiang, J., Wang, X., Wu, H., Wu, Q., & Brazier, J. E. (2021). Valuation of SF-6Dv2 Health states in China using Time Trade-off and discrete-choice experiment with a duration dimension. Pharmacoeconomics , 39 (5), 521–535. https://doi.org/10.1007/s40273-020-00997-1
Peeters, Y., & Stiggelbout, A. M. (2010). Health state valuations of patients and the general public analytically compared: A meta-analytical comparison of patient and population health state utilities. Value In Health : The Journal of the International Society for Pharmacoeconomics and Outcomes Research , 13 (2), 306–309. https://doi.org/10.1111/j.1524-4733.2009.00610.x
Stiggelbout, A. M., & de Haes, J. C. (2001). Patient preference for cancer therapy: An overview of measurement approaches. Journal of Clinical Oncology , 19 (1), 220–230. https://doi.org/10.1200/jco.2001.19.1.220
Gamper, E. M., King, M. T., Norman, R., Loth, F. L. C., Holzner, B., & Kemmler, G. (2022). The EORTC QLU-C10D discrete choice experiment for cancer patients: A first step towards patient utility weights. J Patient Rep Outcomes , 6 (1), 42. https://doi.org/10.1186/s41687-022-00430-5
Gamper, E. M., Holzner, B., King, M. T., Norman, R., Viney, R., Nerich, V., & Kemmler, G. (2018). Test-retest reliability of Discrete Choice experiment for valuations of QLU-C10D Health states. Value In Health : The Journal of the International Society for Pharmacoeconomics and Outcomes Research , 21 (8), 958–966. https://doi.org/10.1016/j.jval.2017.11.012
Norman, R., Viney, R., Aaronson, N. K., Brazier, J. E., Cella, D., Costa, D. S., Fayers, P. M., Kemmler, G., Peacock, S., Pickard, A. S., Rowen, D., Street, D. J., Velikova, G., Young, T. A., & King, M. T. (2016). Using a discrete choice experiment to value the QLU-C10D: Feasibility and sensitivity to presentation format. Quality of Life Research , 25 (3), 637–649. https://doi.org/10.1007/s11136-015-1115-3
Mulhern, B., Norman, R., Street, D. J., & Viney, R. (2019). One method, many methodological choices: A structured review of Discrete-Choice experiments for Health State Valuation. Pharmacoeconomics , 37 (1), 29–43. https://doi.org/10.1007/s40273-018-0714-6
National Bureau of Statistics of China. (2021). China Statistical Yearbook . China Statistic Publishing House.
National Bureau of Statistics of China. (2020). The 2019 Population Census of the people’s Republic of China . China Statistic Publishing House.
United Nations Statistics Division. (2010). Population Censuses’ Datasets . United Nations.
Norman, R., Kemmler, G., Viney, R., Pickard, A. S., Gamper, E., Holzner, B., Nerich, V., & King, M. (2016). Order of Presentation of Dimensions Does Not Systematically Bias Utility Weights from a Discrete Choice Experiment. Value in health: the journal of the International Society for Pharmacoeconomics and Outcomes Research, 19(8), 1033–1038. http://www.99885.net/doi.php?doi=10.1016/j.jval.2016.07.003
National Health Commission of China. (2020). China Health Statistics Yearbook . Peking Union Medical College Publishing House.
Aaronson, N. K., Ahmedzai, S., Bergman, B., Bullinger, M., Cull, A., Duez, N. J., Filiberti, A., Flechtner, H., Fleishman, S. B., de Haes, J. C., et al. (1993). The European Organization for Research and Treatment of Cancer QLQ-C30: A quality-of-life instrument for use in international clinical trials in oncology. Journal of the National Cancer Institute , 85 (5), 365–376. https://doi.org/10.1093/jnci/85.5.365
Giesinger, J. M., Efficace, F., Aaronson, N., Calvert, M., Kyte, D., Cottone, F., Cella, D., & Gamper, E. M. (2021). Past and current practice of patient-reported outcome measurement in Randomized Cancer clinical trials: A systematic review. Value In Health : The Journal of the International Society for Pharmacoeconomics and Outcomes Research , 24 (4), 585–591. https://doi.org/10.1016/j.jval.2020.11.004
Gao, S., Corrigan, P. W., Qin, S., & Nieweglowski, K. (2019). Comparing Chinese and European American mental health decision making. Journal of Mental Health (Abingdon, England) , 28 (2), 141–147. https://doi.org/10.1080/09638237.2017.1417543
Scott, N. W., Fayers, P. M., Bottomley, A., Aaronson, N. K., de Graeff, A., Groenvold, M., Koller, M., Petersen, M. A., & Sprangers, M. A. (2006). Comparing translations of the EORTC QLQ-C30 using differential item functioning analyses. Qual Life Res , 15 (6), 1103–1115; discussion 1117–1120. https://doi.org/10.1007/s11136-006-0040-x
Shiroiwa, T., Ikeda, S., Noto, S., Igarashi, A., Fukuda, T., Saito, S., & Shimozuma, K. (2016). Comparison of Value Set based on DCE and/or TTO Data: Scoring for EQ-5D-5L Health states in Japan. Value In Health : The Journal of the International Society for Pharmacoeconomics and Outcomes Research , 19 (5), 648–654. https://doi.org/10.1016/j.jval.2016.03.1834
Yang, Z., van Busschbach, J., Timman, R., Janssen, M. F., & Luo, N. (2017). Logical inconsistencies in time trade-off valuation of EQ-5D-5L health states: Whose fault is it? PLoS One , 12 (9), e0184883. https://doi.org/10.1371/journal.pone.0184883
Jin, X., Liu, G. G., Luo, N., Li, H., Guan, H., & Xie, F. (2016). Is bad living better than good death? Impact of demographic and cultural factors on health state preference. Quality of Life Research , 25 (4), 979–986. https://doi.org/10.1007/s11136-015-1129-x
Download references
This work was supported by the National Natural Science Foundation of China (Grant Nos. 71974048 and 72274045) and the European Organization for Research and Treatment of Cancer (Grant No. 002/2014).
Yiyin Cao and Juan Xu have made equal contributions and shared first authorship.
School of Health Management, Harbin Medical University, Harbin, China
Yiyin Cao, Juan Xu & Weidong Huang
Shenzhen Center, Cancer Hospital Chinese Academy of Medical Sciences, Shenzhen, China
School of Public Health, Curtin University, Perth, Australia
Richard Norman
Faculty of Science, School of Psychology, University of Sydney, Sydney, Australia
Madeleine T. King
Department of Psychiatry 1, Innsbruck Medical University, Innsbruck, Austria
Georg Kemmler
Saw Swee Hock School of Public Health, National University of Singapore, Singapore, Singapore
Concept and design: Weidong Huang, Nan Luo, Yiyin Cao and Juan Xu. Acquisition of data: Georg Kemmler, Weidong Huang. Analysis and interpretation of data: Richard Norman, Madeleine T. King and Georg Kemmler. Drafting of the manuscript: Yiyin Cao, Juan Xu, Weidong Huang, Richard Norman, Madeleine T. King and Georg Kemmler. Critical revision of paper for important intellectual content: Nan Luo, Richard Norman, Madeleine T. King and Georg Kemmler. All authors have read and agreed to the published version of the manuscript.
Correspondence to Weidong Huang.
Conflict of interest.
Madeleine King is the founding chair of the MAUCa Consortium. Richard Norman, Georg Kemmler and Nan Luo are members of the MAUCa Consortium. Georg Kemmler and Madeleine T. King are members of the EORTC QOL Group. As instrument developers, the authors acknowledge a potential bias towards their own MAUI. The authors declare no other conflicts of interest.
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. This study was approved by the Ethics Committee of Harbin Medical University (project identification code: HMUIRB2023005).
Informed consent was obtained from all individual participants included in the study.
Cao, Y., Xu, J., Norman, R. et al. Chinese utility weights for the EORTC cancer-specific utility instrument QLU-C10D. Qual Life Res (2024). https://doi.org/10.1007/s11136-024-03776-z
Accepted : 24 August 2024
Published : 13 September 2024
DOI : https://doi.org/10.1007/s11136-024-03776-z
Survey Instruments in Research Methods. The following are some commonly used survey instruments in research methods: Questionnaires: A questionnaire is a set of standardized questions designed to collect information about a specific topic. Questionnaires can be administered in different ways, including in person, over the phone, or online.
Survey research means collecting information about a group of people by asking them questions and analyzing the results. To conduct an effective survey, follow these six steps: ... A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people ...
Survey Research. Definition: Survey Research is a quantitative research method that involves collecting standardized data from a sample of individuals or groups through the use of structured questionnaires or interviews. The data collected is then analyzed statistically to identify patterns and relationships between variables, and to draw conclusions about the population being studied.
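The statistical summarization described above can be sketched in a few lines of Python. The question and response values below are invented for illustration; this simply tallies one closed-ended item and reports percentage shares.

```python
from collections import Counter

# Hypothetical responses to one closed-ended (Likert-style) survey question.
responses = ["Agree", "Agree", "Neutral", "Disagree", "Agree", "Neutral"]

counts = Counter(responses)   # frequency of each answer option
total = len(responses)

# Report each option with its count and percentage share.
for option, n in counts.most_common():
    print(f"{option}: {n} ({n / total:.0%})")
```

Frequency tables like this are usually the first step before cross-tabulating responses against demographic variables.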
Questionnaires vs. surveys. A survey is a research method where you collect and analyze data from a group of people. A questionnaire is a specific tool or instrument for collecting that data. Designing a questionnaire means creating valid and reliable questions that address your research objectives, placing them in a useful order, and selecting an appropriate method for administration.
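The distinction between open-ended and closed-ended items can be made concrete as a data structure. This is a minimal sketch in Python; the class and field names (`Question`, `Questionnaire`, `kind`) are illustrative, not from any survey library.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    kind: str                                    # "closed" or "open"
    options: list = field(default_factory=list)  # fixed choices for closed-ended items

@dataclass
class Questionnaire:
    title: str
    questions: list

    def validate_response(self, answers):
        """Closed-ended answers must be one of the listed options; open answers are free text."""
        for q, a in zip(self.questions, answers):
            if q.kind == "closed" and a not in q.options:
                return False
        return True

survey = Questionnaire(
    title="Service satisfaction",
    questions=[
        Question("How satisfied are you?", "closed",
                 ["Very satisfied", "Satisfied", "Neutral", "Dissatisfied"]),
        Question("What could we improve?", "open"),
    ],
)

print(survey.validate_response(["Satisfied", "Faster replies"]))  # True
```

Separating the instrument (the questionnaire object) from the method (how it is administered and analyzed) mirrors the questionnaire/survey distinction in the text.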
Survey research is defined as "the collection of information from a sample of individuals through their responses to questions" (Check & Schutt, 2012, p. 160). This type of research allows for a variety of methods to recruit participants, collect data, and utilize various methods of instrumentation. Survey research can use quantitative research ...
A research instrument is a tool you will use to help you collect, measure and analyze the data you use as part of your research. ... Surveys (online or in-person). In survey research, you are posing questions in which you ask for a response from the person taking the survey. You may wish to have either free-answer questions such as essay style ...
Survey Research Definition. Survey Research is defined as the process of conducting research using surveys that researchers send to survey respondents. The data collected from surveys is then statistically analyzed to draw meaningful research conclusions. In the 21st century, every organization is eager to understand what their customers think ...
Survey research is the process of collecting data from a predefined group (e.g. customers or potential customers) with the ultimate goal of uncovering insights about your products, services, or brand overall. As a quantitative data collection method, survey research can provide you with a goldmine of information that can inform crucial business and product decisions.
Research instruments are measurement tools, such as questionnaires, scales, and surveys, that researchers use to measure variables in research studies. In most cases, it is better to use a previously validated instrument rather than create one from scratch. Always evaluate instruments for relevancy, validity, and reliability.
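One standard way to evaluate an instrument's reliability, as recommended above, is Cronbach's alpha for internal consistency. The ratio formula below is the standard one; the item scores are invented for illustration.

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    items: one list per questionnaire item, each containing one score
    per respondent (all lists the same length).
    """
    k = len(items)
    item_variances = [statistics.pvariance(col) for col in items]
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score
    total_variance = statistics.pvariance(totals)
    return k / (k - 1) * (1 - sum(item_variances) / total_variance)

# Invented data: 3 items rated 1-5 by 5 respondents.
scores = [
    [3, 4, 5, 2, 4],
    [3, 5, 4, 2, 5],
    [2, 4, 5, 3, 4],
]
print(round(cronbach_alpha(scores), 2))  # 0.89
```

Values of alpha above roughly 0.7 are conventionally read as acceptable internal consistency, which is one reason previously validated instruments (whose alpha is already reported) are preferred over new ones.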
This is where a validated survey instrument comes into questionnaire design. Validated instruments are those that have been extensively tested and are correctly calibrated to their target. ... Survey research is a unique way of gathering information from a large cohort. Advantages of surveys include having a large population and therefore ...
Types of Research Instruments: Surveys. Survey research encompasses any measurement procedures that involve asking questions of respondents. Surveys can vary in the span of time used to conduct the study and may comprise cross-sectional and/or longitudinal surveys. Types of questions asked in surveys include:
Research Instruments: Surveys, Questionnaires, and other Measurement Tools. This table is based on the work of Joanne Rich and Janet Schnall at the University of Washington Health Sciences Library. See their website for much more information on finding research instruments.
Pinsonneault and Kraemer (1993) defined a survey as a "means for gathering information about the characteristics, actions, or opinions of a large group of people" (p. 77). Surveys can also be used to assess needs, evaluate demand, and examine impact (Salant & Dillman, 1994, p. 2). The term "survey instrument" is often used ...
What are Research Instruments? A research instrument is a tool used to collect, measure, and analyze data related to your subject. Research instruments can be tests, surveys, scales, questionnaires, or even checklists. To ensure the strength of your study, it is important to use previously validated instruments!
Questionnaires can be used as the sole research instrument (such as in a cross-sectional survey) or within clinical trials or epidemiological studies. Randomised trials are subject to strict reporting criteria [4], but there is no comparable framework for questionnaire research.
Questionnaires are research instruments (surveys) consisting of a series of questions for the purpose of collecting information about a particular subject [5]. ... When surveying several distinct groups, be sure to develop survey instruments for each, as respondents from each group might not be able to answer questions written for other groups. For example, youth ...
Keywords: sample size, statistical power, survey. Throughout our experience in research and supervising research students, two questions we are always consulted on once the research framework is in place are (1) how to collect the data, and (2) how many individuals do we need. Short answers for these questions are (1) design a survey, and (2) more is ...
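A common back-of-envelope answer to "how many individuals do we need" is Cochran's sample-size formula for estimating a proportion. This sketch assumes a large population and a simple random sample; the default parameter values (95% confidence, maximum-variance proportion, ±5% margin) are conventional choices, not universal requirements.

```python
import math

def sample_size(z=1.96, p=0.5, e=0.05):
    """Cochran's sample-size formula for estimating a proportion.

    z: z-score for the desired confidence level (1.96 ~ 95%)
    p: anticipated proportion (0.5 is the most conservative choice)
    e: acceptable margin of error
    """
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

print(sample_size())        # 385 respondents at 95% confidence, +/-5% margin
print(sample_size(e=0.03))  # a tighter margin requires a larger sample
```

For small populations the result is usually corrected downward with a finite-population adjustment, and for tests of group differences a power analysis is the more appropriate tool.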
In survey research, the instruments utilized can be either a questionnaire or an interview (structured or unstructured). 1. Questionnaires: Typically, a questionnaire is a paper-and-pencil instrument that is administered to the respondents. The usual questions found in questionnaires are closed-ended questions, which are ...
Survey research is a method in which data is collected from a target population, called the sample, by personal interviews, online surveys, the telephone, ... Key Terms and Concepts: Survey instrument: The questionnaire or response item posed to a respondent is called a survey research instrument. The instrument may be a questionnaire or an ...
A questionnaire is a research instrument consisting of a series of questions for the purpose of gathering information from respondents. Questionnaires can be thought of as a kind of written interview. They can be carried out face to face, by telephone, computer, or post. Questionnaires provide a relatively cheap, quick, and efficient way of ...
A research instrument is a survey, questionnaire, test, scale, rating, or tool designed to measure the variable(s), characteristic(s), or information of interest, often a behavioral or psychological characteristic. Research instruments can be helpful tools to your research study.
The use of research questionnaires or surveys in nursing is a long-standing tradition, dating back to the 1960s (Logan, 1966) and 1970s (Oberst, 1978), when the scientific discipline emerged. This type of tool enables nursing researchers to gather primary data from a specific population, whether it is patients, carers, nurses, or other stakeholders, to address gaps in the existing evidence base ...
The model was fine-tuned on 3,000 labeled, open-ended responses from the NCHS Research and Development Survey (RANDS) during COVID-19 Rounds 1 and 2. The base SimCSE BERT model was trained on BookCorpus and English Wikipedia. Training procedure: learning rate 5e-5; batch size 16; number of training epochs 4; base model pooling dimension ...
Background The shift towards person-centred care has become integral in achieving high-quality healthcare, focusing on individual patient needs, preferences, and values. However, existing instruments for measuring person-centred practice often lack theoretical underpinnings and comprehensive assessment. The Person-centred Practice Inventory - Staff (PCPI-S) and the Person-centred Practice ...
Objective: The aim of this study is to provide Chinese utility weights for the European Organization for Research and Treatment of Cancer Quality of Life Utility Measure-Core 10 Dimensions (EORTC QLU-C10D), a preference-based cancer-specific utility instrument derived from the EORTC QLQ-C30. Methods: We conducted an online survey of the general population in China, with quota sampling ...