Review Article

Reporting the methods used in public health research and practice

Donna F. Stroup1*, C. Kay Smith2*, Benedict I. Truman2*

1Data for Solutions, Inc., Decatur, GA, USA; 2Office of the Associate Director for Science, National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention, Centers for Disease Control and Prevention, Atlanta, GA, USA

Contributions: (I) Conception and design: All authors; (II) Administrative support: DF Stroup, CK Smith; (III) Provision of study materials or patients: All authors; (IV) Collection and assembly of data: DF Stroup, CK Smith; (V) Data analysis and interpretation: All authors; (VI) Manuscript writing: All authors; (VII) Final approval of manuscript: All authors.

*These authors contributed equally to this work.

Correspondence to: Donna F. Stroup, PhD, MSc. Data for Solutions, Inc., P.O. Box 894, Decatur, GA 30031-0894, USA. Email: donnafstroup@dataforsolutions.com.

Abstract: The methods section of a scientific article often receives the most scrutiny from journal editors, peer reviewers, and skeptical readers because it allows them to judge the validity of the results. The methods section also facilitates critical interpretation of study activities, explains how the study avoided or corrected for bias, details how the data support the answer to the study question, justifies generalizing the findings to other populations, and facilitates comparison with past or future studies. In 2006, the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Programme began collecting and disseminating guidelines for reporting health research studies. In addition, guidelines for reporting public health investigations not classified as research have also been developed. However, regardless of the type of study or scientific report, the methods section should describe certain core elements: the study design; how participants were selected; the study setting; the period of interest; the variables and their definitions used for analysis; the procedures or instruments used to measure exposures, outcomes, and their association; and the analyses. Specific requirements for each study type should be consulted during the project planning phase and again when writing begins. We present requirements for reporting methods for public health activities, including outbreak investigations, public health surveillance programs, prevention and intervention program evaluations, research, surveys, systematic reviews, and meta-analyses.

Keywords: Methods; data collection; research reports; medical writing; editorial policies


Received: 26 October 2017; Accepted: 08 December 2017; Published: 21 December 2017.

doi: 10.21037/jphe.2017.12.01


“People’s views about contradictory health studies tend to vary depending on their level of science knowledge. An overwhelming majority of those with high science knowledge say studies with findings that conflict with prior research are a sign that understanding of disease prevention is improving (85%). A smaller majority of those with low science knowledge say the same (65%), while 31% say that the research cannot really be trusted because so many studies conflict with each other.”

—Pew Research Center, February 2017


Introduction

The methods section of a scientific article often receives the most scrutiny from journal editors, peer reviewers, and skeptical readers. Effective reporting of the methods used in public health research and practice enables readers to judge the validity of the study results and enables other scientists to repeat the study when attempting to validate the findings. Although word limits for a scientific article can hinder complete reporting, full information can usually be provided in online-only or supplemental appendixes. The methods section also facilitates critical interpretation of study results; explains how the study avoided or corrected for bias in selecting participants, measuring exposures and outcomes, and estimating associations between exposures and outcomes; and explains how the data support the answer to the study question. The methods justify generalizing the findings from the sample studied to the population it represents. Finally, complete reporting of methods facilitates comparing the study findings with those of past and future studies (e.g., in systematic reviews or meta-analyses).

For example, consider two different conclusions from the same study (1) highlighted in two newspaper headlines (Figure 1). A reader must carefully read the methods section of the original study report (not those of newspaper reports) to assess the validity of each headline. Although the vaccine seemed to substantially lower the infection rate among African Americans and other non-Hispanic minorities in the trial, the numbers of participants from these racial/ethnic groups were too small to support statistically significant conclusions about vaccine efficacy. This “no effect” conclusion is the more valid one, given details in the study methods.

Figure 1 Differences in media interpretation of a single study.
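To see why small subgroup numbers preclude firm conclusions, consider how the width of a confidence interval depends on the number of cases observed. The following minimal sketch, using hypothetical counts rather than the trial's actual data (the function name and numbers are ours, for illustration only), computes vaccine efficacy (one minus the risk ratio) with a 95% confidence interval; with few cases in each arm, the interval spans zero, so "no effect" cannot be excluded even when the point estimate looks large.

```python
import math

def vaccine_efficacy_ci(cases_vax, n_vax, cases_placebo, n_placebo, z=1.96):
    """Vaccine efficacy (1 - risk ratio) with a 95% CI from the log risk ratio."""
    rr = (cases_vax / n_vax) / (cases_placebo / n_placebo)
    se_log_rr = math.sqrt(1 / cases_vax - 1 / n_vax + 1 / cases_placebo - 1 / n_placebo)
    rr_lo = math.exp(math.log(rr) - z * se_log_rr)
    rr_hi = math.exp(math.log(rr) + z * se_log_rr)
    return 1 - rr_hi, 1 - rr, 1 - rr_lo  # (lower bound, point estimate, upper bound)

# Hypothetical small subgroup: 4 cases among 200 vaccinees vs. 9 among 200 placebo
# recipients. The point estimate suggests roughly 56% efficacy, but the interval
# spans 0, so a conclusion of vaccine efficacy is not statistically supported.
print(vaccine_efficacy_ci(4, 200, 9, 200))
```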

The methods section is usually the easiest, and often the first, section of the manuscript to be written; it is typically drafted during the protocol-development or study-planning phase and then revised and updated after the study is completed to describe what was actually done, including documentation of any changes from the initial protocol. Describing all aspects of the study completely in the methods section is crucial; readers should not discover something about the methods buried in the results or discussion. Of note, publishers typically specify their requirements for the methods sections in their journals, and those specifications might go beyond those presented here; therefore, reviewing the publisher's instructions to authors thoroughly before writing begins is always advisable.

This article provides practical guidance for improving the clarity and completeness of the methods section of a scientific article or technical report. We provide a framework based on published standards for reporting the findings of public health research and practice according to major categories of public health activities. This is not a guide for conducting research. Instead, it describes the essential content of the methods section of a scientific article or technical report. Further, we restrict our attention to public health research and practice, excluding laboratory studies. Finally, we mention important ethical considerations and statistical methods only briefly; details of these topics can be found in “Research and Publication Ethics” and “Reporting Statistical Methods and Results” elsewhere in this issue of the Journal.


Guidelines for reporting public health investigation methods

Reporting guidelines have been developed for many types of public health investigations. Recognizing the proliferation of such guidelines, the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Programme began in 2006 to collect and disseminate guidelines for reporting health research studies (Table S1) (2). We highlight example guidelines that are internationally recognized publishing standards and also include guidelines for reporting methods associated with fundamental public health investigations not classified as research (Table S1).

Disease outbreak investigations

Investigating an outbreak, an unexpected increase in the incidence of a disease or condition in a geographic area or period, is a fundamental public health activity. The primary purpose of an outbreak investigation is to identify the source of the pathogen, its transmission mode (route), and modifiable risk factors for illness so that the most appropriate control and prevention activities can be implemented (3). However, subsequent publication of a scientific report summarizing detection, investigation, and control of the outbreak is useful for disseminating knowledge of new risk factors, investigation techniques, and effective interventions.

The Public Health Agency of Canada has developed a guide for reporting the investigation and findings of disease outbreaks (4). The agency's guidance recommends reporting overview or background data (dates of the first case and of investigation initiation and conclusion) and the methods used for case finding and data collection, case investigation, epidemiologic and statistical analysis, and interventions. When a disease outbreak involves a particular setting (e.g., infections acquired in hospitals or related facilities), the reporting guidelines recommend describing the study design, participants, setting, interventions, details of any laboratory diagnosis of the pathogen by culturing and typing, health outcomes, economic outcomes, potential threats to validity, sample size, and statistical methods (5).

Public health surveillance activities

Public health surveillance—the continuous, systematic collection, analysis, and interpretation of health-related data needed to plan, implement, and evaluate public health practice—is another fundamental public health activity (6). Publishing the results of surveillance activities is an essential part of public health action; therefore, the methods section of a surveillance report should include the following:

  • Any legal mandates for data reporting;
  • The methods used for data collection;
  • The methods used for data transfer, management, and storage;
  • Relevant case definitions for confirmed, probable, and suspected cases;
  • The performance attributes of the surveillance system.

These performance attributes of a surveillance system are defined in the Surveillance Evaluation Guidelines, published by the Centers for Disease Control and Prevention (United States) (7) and in Principles and Practice of Public Health Surveillance (8).

Intervention and prevention program evaluation

Another fundamental public health activity is evaluating intervention and prevention programs. A framework for program evaluation developed by the US Centers for Disease Control and Prevention summarizes key elements of the activity, specifies steps in the process, provides standards for measuring effectiveness, and clarifies the purposes of program evaluation (9). Guidelines for writing the methods section of a program evaluation paper exist for different types of evaluation designs. For example, the Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) checklist is useful for reporting evaluations of behavioral and public health interventions with nonrandomized designs (10). The TREND checklist includes advice regarding how the methods should describe the participants, interventions, objectives, outcomes, sample size, exposure assignment, blinding (masking) of investigators or participants to exposure assignment, unit of analysis, and statistical methods. In contrast, for reporting the evaluation of interventions to change behavior, recommendations from the Workgroup for Intervention Development and Evaluation Research (WIDER) are more appropriate (11).

When randomization assignment is impractical, evaluations use observational designs and collect information on variables needed at the analysis stage of the investigation to correct for selection bias. Guidelines for reporting observational evaluations consider the observation method, the intervention and expected outcome, study design, information regarding the sample, measurement instruments, data quality control, and analysis methods (12).

Another approach to evaluation is a mixed-method or realist model, a theory-driven evaluation method increasingly used for studying the implementation of complex interventions within health systems, particularly in low- and middle-income countries (13). Theory-driven evaluation describes the associations between activities and outputs and short- and long-term outcomes. Theory-driven evaluation also attempts to address the problem that evaluations using traditional methods (e.g., experimental and quasi-experimental methods) do not always deal with intervention complexity. For example, in evaluating the effectiveness of community health workers in achieving improved maternal and child health outcomes in Nigeria, researchers chose a theory-driven approach because of countrywide and community-specific factors affecting the outcomes (14). As for other types of evaluation, reporting standards for realist evaluations include describing the reasons for using the method, the environment surrounding the evaluation, the program evaluated, the evaluation design, the data collection methods, the recruitment and sampling, and any statistical analysis (15,16).

If economic factors are important in the evaluation, specific guidelines should be consulted (e.g., the Consolidated Health Economic Evaluation Reporting Standards, or CHEERS) (17). Whichever reporting guideline is used, some fundamental design choices in evaluation become important. Three approaches are commonly used to establish the effectiveness of a program or intervention: (I) comparing participants in the program with nonparticipants; (II) comparing results from different evaluations, each of which used different methods; and (III) conducting case studies of programs and outcomes. In the comparison approach, random assignment of persons, facilities, or communities can be used to minimize selection bias. However, random assignment can be impractical for interventions involving, for example, mass media programs designed to reach the entire population. In such cases, other methods of assignment to intervention or control groups can be used.

Blinding the investigators and participants to whether a participant is assigned to an intervention or control group can help to avoid measurement bias. A participant’s knowledge that he or she is receiving an intervention can affect the outcome (the Hawthorne effect). The potential for bias is even more likely if the outcome of interest is behavioral, rather than biologic.

Furthermore, those who measure the outcomes must also be blinded to what the recipients received, to avoid measurement biases associated with the measurers' expectations (i.e., double-blinding). However, in certain evaluations, double-blinding is impractical. For example, a breastfeeding mother will be aware that she and her infant are in the breastfeeding intervention group, and that knowledge can affect other aspects of her behavior toward her infant. In such studies, rather than using double-blinding, the investigator might develop placebo interventions that expose mothers to the same amount and intensity of an educational intervention, but on a subject unrelated to breastfeeding (18). Participants and evaluators can be blinded by keeping treatment and control groups physically separate so that members of each group are unaware of the other group's activities. However, the two groups would then have different experiences with interventions, exposures, and outcomes, presenting challenges for standardizing measures and for developing appropriate informed consent procedures.

Public health research

Historically, many of the guidelines for reporting were developed in the context of clinical research studies. One of the earliest of these guidelines is for randomized controlled trials (19). These guidelines and subsequent extensions (20) formed the basis of much of the guideline development discussed in this paper. For each of these research designs, reporting of the methods should discuss how human participants were protected from harm, including how informed consent was obtained, and how participants' privacy and confidentiality were protected. These methods are covered in more detail in “Research and Publication Ethics” elsewhere in this issue of the Journal.

Clinical case-series reports

A report on a single clinical case or a series of clinical cases with point-of-care data provides evidence of the effectiveness of high-quality patient care and approaches to treating rare or unusual conditions. To help reduce reporting bias, increase transparency, and provide early signals of what interventions work, depending on patient characteristics and circumstances, an international group of experts developed the CARE guidelines for reporting case studies and case-series (21). Even if such a report does not have a designated methods section, it should include information regarding patient characteristics, clinical findings, timelines (e.g., timeline graphs or epidemiologic curves), diagnostic assessments, therapeutic interventions, and outcomes.

Qualitative research

Many health problems can be addressed by combining interdisciplinary quantitative and qualitative methods. In such investigations, qualitative methods can be helpful by providing information about the meaning of text, images, and experiences and about how the context surrounding study participants and their environments influences the concepts and theories being studied. For example, to investigate maternal knowledge of and attitudes toward childhood vaccination in Haiti, researchers used focus group discussions, physician observation, and semi-structured interviews with health providers (22). The Standards for Reporting Qualitative Research (SRQR) (23) checklist includes the qualitative approach and research paradigm, researcher characteristics, context, sampling strategy, ethics protections, data collection (e.g., methods, instruments, and technology), units of study, data processing and analysis, and techniques to enhance trustworthiness. A related checklist, the Consolidated Criteria for Reporting Qualitative Research (COREQ) (24), further defined the evaluation domains and added information about the research team.

Cross-sectional surveys and disease registry studies

If a controlled experiment or a quasi-experimental study design is impractical, an observational study is the only research option. The STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) initiative recommends information that should be included in an accurate and complete report of any of three main observational study designs: cohort, case-control, and cross-sectional studies (25). For cross-sectional studies, surveys, and registry studies, the checklist includes study design, setting (including periods of recruitment), participants (methods of selection and exclusion criteria), variables (outcomes, exposures, confounders, and effect modifiers), data sources and measurements, efforts to address potential sources of bias, study size (including power calculation), quantitative variables and any groupings, and statistical methods (including handling of missing data and sensitivity analysis). The STROBE checklist does not include some important survey methods, such as nonresponse analysis, details of strategies used to increase response rates (e.g., multiple contacts or mode of contact of potential participants), and details of measurement methods (e.g., making the instrument available so that readers can consider questionnaire formatting, question framing, or choice of response categories) (26). Guidelines have also been developed for reporting the methods used in Internet-based surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES) (27).
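Among the STROBE items above, the study-size item (including the power calculation) is one that authors frequently underreport. As a hedged illustration only, not a formula prescribed by STROBE, the following sketch applies one standard normal-approximation formula for the sample size needed per group to compare two proportions; the function name and the example prevalences are ours.

```python
import math
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per group to detect a difference between two
    proportions (normal approximation, equal groups, no continuity correction)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for a two-sided alpha
    z_beta = norm.ppf(power)            # critical value for the desired power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# e.g., detecting outcome prevalences of 10% vs. 20% with 80% power: ~199 per group
print(n_per_group(0.10, 0.20))
```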

Use of routinely collected health data, obtained for administrative and clinical purposes rather than research, is increasing in public health. In response, guidelines for reporting the methods used in registry studies were developed in 2015: the REporting of studies Conducted using Observational Routinely-collected health Data (RECORD) checklist (28).

Case-control studies

The case-control study design is useful for determining whether an exposure is directly associated with an outcome (i.e., a disease or condition of interest). This design is often used in public health because it is quick, inexpensive, and easy compared with a cohort (follow-up) study design, making the case-control study particularly appropriate for (I) investigating outbreaks and (II) studying rare diseases or outcomes. The STROBE guidelines provide specific sections for reporting the methods used for case-control studies. The guidance includes attention to study design, setting, variables, data sources, bias, and study size. Specific to case-control studies is the necessity of reporting eligibility criteria, the sources and methods of case-patient ascertainment and control subject selection, and the rationale for the choice of case-patients versus control subjects. For matched case-control studies, reporting should include the matching criteria and the number of control subjects per case, and the methods section must describe how the statistical analysis accounted for the matching (see the sketch below).
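As one hedged illustration of an analysis that accounts for matching, a 1:1 matched design can be analyzed with the conditional (matched-pair) odds ratio, which uses only the discordant pairs; the function name and counts below are hypothetical.

```python
import math

def matched_pair_or(b, c, z=1.96):
    """Conditional odds ratio and 95% CI for a 1:1 matched case-control study.
    b: discordant pairs with the case exposed and the control unexposed
    c: discordant pairs with the case unexposed and the control exposed
    Concordant pairs carry no information in the matched analysis."""
    or_hat = b / c
    se_log_or = math.sqrt(1 / b + 1 / c)
    lo = math.exp(math.log(or_hat) - z * se_log_or)
    hi = math.exp(math.log(or_hat) + z * se_log_or)
    return or_hat, lo, hi

# Hypothetical counts: OR = 2.5, 95% CI roughly 1.4 to 4.5
print(matched_pair_or(b=40, c=16))
```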

A useful example of reporting methods for case-control studies can be found in “Reporting Participation in Case-Control Studies” by Olson et al. (29); readers should consider using the tables in that paper. Effect sizes also warrant reporting: in this context, an effect size estimates the average difference, measured in standard deviation units so as to be scale independent, between a case’s score and the score of a randomly chosen member of the control population. Specific guidance for reporting effect sizes in case-control research is available (30).
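As a hedged illustration in our own notation (consistent with the verbal description above), one common formulation expresses the case’s score in control-sample standard deviation units:

```latex
z_{\mathrm{CC}} = \frac{x^{*} - \bar{x}_{c}}{s_{c}}
```

where \(x^{*}\) is the case’s score and \(\bar{x}_{c}\) and \(s_{c}\) are the mean and standard deviation of the control sample.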

Cohort studies

The general guidance of STROBE also is useful for reporting methods used in cohort studies. The adaptations for this study design include reporting methods for determining eligibility criteria, sources and methods of participant selection, and follow-up methods. For matched cohort studies, reporting should include the matching criteria used and the number of participants exposed and unexposed to the hypothesized cause of the outcome; in this case, the methods section also should describe how loss to follow-up was addressed. An example of reporting according to STROBE guidelines for cohort studies is available in Kunutsor et al.’s investigation of the association between baseline serum magnesium concentrations and the risk for incident fractures (31).

Systematic reviews and meta-analyses

Meta-analysis is a statistical procedure for combining data from multiple studies identified through a systematic review of the literature. Meta-analysis can be used to estimate a common effect (when the treatment effect is consistent among studies), to identify reasons for variation among studies, or to assess important group differences. Pharmaceutical companies use meta-analyses to gain approval for new drugs, with regulatory agencies sometimes requiring a meta-analysis as part of the approval process. Researchers use meta-analyses to determine which interventions work and which ones work best. Many journals encourage researchers to submit systematic reviews and meta-analyses that summarize the body of evidence regarding a specific question, and systematic reviews are replacing the traditional narrative review. Meta-analyses can play a key role in planning new studies by identifying unanswered questions. Finally, meta-analyses can be used in grant applications to justify the need for a new study.
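For readers unfamiliar with the core computation, the following minimal sketch (our own illustration, with hypothetical study estimates) performs fixed-effect, inverse-variance pooling of per-study log risk ratios; a full analysis would also consider random-effects models and heterogeneity.

```python
import math

def fixed_effect_pool(estimates, variances, z=1.96):
    """Fixed-effect (inverse-variance) pooled estimate and 95% CI.
    estimates: per-study effects on an additive scale (e.g., log risk ratios)
    variances: the corresponding per-study variances."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return pooled, pooled - z * se, pooled + z * se

# Three hypothetical studies (log risk ratios and their variances):
pooled, lo, hi = fixed_effect_pool([-0.30, -0.10, -0.22], [0.04, 0.02, 0.05])
print([round(math.exp(x), 2) for x in (pooled, lo, hi)])  # pooled RR ~0.84 (0.68-1.02)
```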

The earliest guideline for reporting meta-analyses, the QUOROM statement, addressed meta-analyses of randomized controlled trials (32). It specified reporting, in the methods, of the searches, selection, validity assessment, data abstraction, study characteristics, and quantitative data synthesis, and reporting, in the results, of the trial flow, study characteristics, and quantitative data synthesis; supporting research documentation was identified for only 8 of the 18 checklist items. Subsequently, this guidance was revised and expanded in the PRISMA statement (33).

With the proliferation of meta-analyses of observational studies, reporting guidance followed: the Meta-analysis Of Observational Studies in Epidemiology (MOOSE) statement (34). The MOOSE statement requires a quantitative summary of the data; the degree to which coding of data from the articles was specified and objective; an assessment of confounding, study quality, and heterogeneity; the statistical methods used; and the display of results (e.g., forest plots). The PRISMA statement also includes recommendations useful for reporting meta-analyses of observational studies; thus, both checklists should be consulted. Several extensions of the PRISMA checklist are available for network meta-analyses, health equity, and complex interventions (35).
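As a companion to the pooling sketch above (again a hedged illustration using the same hypothetical estimates), Cochran’s Q and the I² statistic are two standard quantities for the between-study heterogeneity assessment that MOOSE asks authors to report.

```python
def heterogeneity(estimates, variances):
    """Cochran's Q and Higgins' I^2 for between-study heterogeneity."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, estimates))
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0  # share of variation beyond chance
    return q, i2

# With the hypothetical estimates above, Q is small and I^2 is 0 (little heterogeneity).
print(heterogeneity([-0.30, -0.10, -0.22], [0.04, 0.02, 0.05]))
```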


Summary

The methods section of a scientific article must persuade readers that the study design, data collection, and analysis were appropriate for answering the study question and that the results are accurate and trustworthy. The methods section is often written first and is the easiest section of the manuscript to write because much of it can be drafted before the study begins. Writers should report the methods that were actually used, including any planned methods that were changed or abandoned. Regardless of the type of study conducted and the type of document written, the methods section should include certain core descriptive elements: the study design; how participants were selected; the setting; the period of interest; the variables and their definitions used for analysis; the procedures or instruments used to measure exposures, outcomes, and their association; and the analyses that produced the data that answer the study question. These core elements are common across all study designs, and the specific requirements for each study type should be consulted during project planning and again when writing begins.

Care should be taken to ensure that all methods are described in the main methods section or in supplementary online material, and not included in the results or discussion. Careful attention to reporting of methods can assist journal peer reviewers and readers, as well as other researchers who might use the methods to replicate the study or use the results in a meta-analysis or systematic review.

Table S1

Selected guidelines for reporting the methods used in public health research and practice

Each entry below gives the guideline (with reference), its Internet site, its applicability, and what the methods section should include.
Public Health Agency of Canada/Outbreak Reporting Guide (4)
Internet site: https://www.canada.ca/en/public-health/services/reports-publications/canada-communicable-disease-report-ccdr/monthly-issue/2015-41/ccdr-volume-41-04-april-2-2015/ccdr-volume-41-04-april-2-2015-1.html (includes a checklist and an example of an epidemiologic curve)
Applicability: Case reports after an investigation is complete; also useful for identifying emerging risks and describing new investigations and effective interventions
Methods section should include:
• How the outbreak was detected
⟡ Beginning and ending dates
• What investigations were undertaken
⟡ Case finding procedures
⟡ Definitions for confirmed and suspected cases
⟡ Laboratory tests/environmental sampling performed
• What epidemiologic data were collected and analyzed
⟡ Risk factors, survival analysis, background rates
⟡ Analytic methods used, including computer software employed
⟡ Analyses and controls for interactions, confounding factors, missing data, and reporting delays
• What interventions were implemented to control it
⟡ Clinical and public health (e.g., exposure history, risk assessment, clinical treatments, or other public health measures)
Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) (10)
Internet site: https://www.cdc.gov/trendstatement/ (includes the TREND statement); or https://www.cdc.gov/trendstatement/pdf/trendstatement_TREND_Checklist.pdf (includes the 22-item checklist)
Applicability: Standards for nonrandomized evaluations of behavioral and public health interventions
Methods section should include:
• Eligibility criteria for participants
• Recruitment method (e.g., referral or self-selection), including sampling method if applicable
• Recruitment setting
• Settings where the data were collected
• Intervention details for each study condition and how and when they were administered, including
⟡ What was administered
⟡ How the content was administered
⟡ How the subjects were grouped during delivery
⟡ Who delivered the intervention
⟡ Where the intervention was delivered
⟡ How many sessions, episodes, or events were intended to be delivered and how long they were to last
⟡ How long delivery of the intervention was intended to take for each unit
⟡ What were the activities used to increase compliance or adherence (e.g., incentives)
• Specific objectives and hypotheses
• Clearly defined primary and secondary outcome measures
• Methods used to collect data and any methods used to enhance the quality of measurements
• Information on validated instruments (e.g., psychometric and biometric properties)
• How sample size was determined and, when applicable, explanation of any interim analyses and stopping rules
• Unit of assignment: the unit being assigned to study condition (e.g., individual, group, community)
• Method used to assign units to study conditions, including details of any restriction (e.g., blocking, stratification, or minimization)
• Inclusion of aspects employed to help minimize potential bias induced because of non-randomization (e.g., matching)
• Whether participants, those administering the interventions, and those assessing the outcomes were blinded to study condition assignment; if so, statement regarding how the blinding was accomplished and how it was assessed
• Description of the smallest unit being analyzed to assess intervention effects (e.g., individual, group, or community)
• If the unit of analysis differs from the unit of assignment, the analytical method used to account for this (e.g., adjusting the standard error estimates by the design effect or using multilevel analysis)
• Statistical methods used to compare study groups for primary outcomes, including complex methods for correlated data
• Statistical methods used for additional analyses (e.g., subgroup analyses or adjusted analyses)
• Methods for imputing missing data, if used
• Statistical software or programs used
Consolidated Health Economic Evaluation Reporting Standards (CHEERS) (17)
Internet site: http://www.equator-network.org/reporting-guidelines/cheers/; or https://www.ispor.org/ValueInHealth/ShowValueInHealth.aspx?issue=3D35FDBC-D569-431D-8C27-378B8F99EC67 (includes a 24-item checklist and extensive examples)
Applicability: Economic evaluations of health interventions
Methods section should include:
• Characteristics of the base-case population and groups analyzed, including why they were chosen
• Relevant aspects of the systems in which decisions needed to be made
• Perspective of the study and association with the evaluated costs
• Interventions or strategies compared and why they were chosen
• The time horizons over which costs and consequences were being evaluated and why appropriate
• Choice of discount rates used for costs and outcomes and why appropriate
• Health outcomes used as the measures of benefit in the evaluation and their relevance for the type of analysis performed
• Single-study–based estimates—design features of the single effectiveness study and why the single study was a sufficient source of clinical effectiveness data
• Synthesis-based estimates—methods used for identifying included studies and synthesis of clinical effectiveness data
• Population and methods used to elicit preferences for outcomes
• Single study-based economic evaluation—approaches used to estimate resource use associated with the alternative interventions; primary or secondary research methods for valuing each resource item in terms of its unit cost; adjustments made to approximate to opportunity costs
• Model-based economic evaluation—approaches and data sources used to estimate resource use associated with model health states; primary or secondary research methods for valuing each resource item in terms of its unit cost; any adjustments made to approximate to opportunity costs
• Dates of the estimated resource quantities and unit costs; methods for adjusting estimated unit costs to the year of reported costs, if necessary; and methods for converting costs into a common currency base and the exchange rate
• The specific type of decision-analytic model used; providing a figure of the model structure strongly recommended
• All structural or other assumptions underpinning the decision-analytic model
• All analytic methods supporting the evaluation (e.g., methods for dealing with skewed, missing, or censored data; extrapolation methods; methods for pooling data; approaches to validate or make adjustments to a model; and methods for handling population heterogeneity and uncertainty)
Consolidated Standards of Reporting Trials (CONSORT) (19)
Internet site: http://www.consort-statement.org/ (includes the CONSORT statement, checklist, flow diagram, and explanations); or http://www.equator-network.org/?post_type=eq_guidelines&eq_guidelines_study_design=0&eq_guidelines_clinical_specialty=0&eq_guidelines_report_section=0&s=+CONSORT+extension&btn_submit=Search+Reporting+Guidelines (includes the CONSORT extensions)
Applicability: Reports of trial findings, facilitating their complete and transparent reporting, and aiding their critical appraisal and interpretation
Methods section should include:
• Description of trial design (e.g., parallel or factorial), including allocation ratio
• Important changes to methods after trial commencement (e.g., eligibility criteria), with reasons
• Eligibility criteria for participants
• Settings and locations where the data were collected
• The interventions for each group with sufficient details to allow replication, including how and when each intervention was administered
• Completely defined pre-specified primary and secondary outcome measures, including how and when they were assessed
• Any changes to trial outcomes after the trial commenced, with reasons
• How sample size was determined
• When applicable, explanation of any interim analyses and stopping guidelines
• Method used to generate the random allocation sequence
• Type of randomization; details of any restriction (e.g., blocking and block size)
• Mechanism used to implement the random allocation sequence (e.g., sequentially numbered containers), describing any steps taken to conceal the sequence until interventions are assigned
• Who generated the random allocation sequence, who enrolled participants, and who assigned participants to interventions
• If done, who was blinded after assignment to interventions (e.g., participants, care providers, or those assessing outcomes) and how
• If relevant, description of the similarity of interventions
• Statistical methods used to compare groups for primary and secondary outcomes
• Methods for additional analyses (e.g., secondary group analyses or adjusted analyses)
CAse REport Guidelines (CARE) (21)
Internet site: http://www.care-statement.org/ (includes the CARE statement, checklist, and an extensive list of publications); or http://www.care-statement.org/resources/checklist (includes the checklist)
Applicability: Reporting cases of disease or injuries
Methods section should include:
• Does not include a designated methods section, but does report
⟡ De-identified demographic information (e.g., age, sex, race/ethnicity, and occupation)
⟡ Main symptoms of the patient (his or her chief symptoms)
⟡ Medical, family, and psychosocial history, including diet, lifestyle, and genetic information whenever possible, and details about relevant comorbidities, including past interventions and their outcomes
⟡ Diagnostic methods (e.g., physical examination, laboratory testing, imaging, or questionnaires)
Standards for Reporting Qualitative Research (SRQR) (23)
Internet site: http://www.equator-network.org/reporting-guidelines/srqr/
Applicability: Qualitative research studies
Methods section should include:
• Qualitative approach (e.g., ethnography, grounded theory, case study, phenomenology, or narrative research) and guiding theory if appropriate; identifying the research paradigm also recommended; rationale (i.e., justification for choosing the theory, approach, method, or technique; assumptions and limitations implicit in those choices; and how those choices influence study conclusions and transferability)
• Researchers’ characteristics that might influence the research, including personal attributes, qualifications/experience, relationship with participants, assumptions, or presuppositions; potential or actual interaction between researchers’ characteristics and the research questions, approach, methods, results, or transferability
• Setting or site and salient contextual factors; rationale
• How and why research participants, documents, or events were selected; criteria for deciding when no further sampling was necessary (e.g., sampling saturation); rationale
• Documentation of approval by a relevant ethics review board and participant consent, or explanation for lack thereof; other confidentiality and data security concerns and protections
• Types of data collected; details of data collection procedures including (as appropriate) start and stop dates of data collection and analysis, iterative process, triangulation of sources and methods, and modification of procedures in response to evolving study findings; rationale
• Description of instruments (e.g., interview guides or questionnaires) and devices (e.g., audio recorders) used for data collection; if and how the instruments changed over the course of the study
• Number and relevant characteristics of participants, documents, or events included in the study; level of participation (might be reported in the results section)
• Methods for processing data before and during analysis, including transcription, data entry, data management and security, verification of data integrity, data coding, and anonymization or de-identification of excerpts
• Process by which inferences or themes, and so forth, were identified and developed, including the researchers involved in data analysis; usually references a specific paradigm or approach; rationale
• Techniques to enhance trustworthiness and credibility of data analysis (e.g., member checking, audit trail, triangulation); rationale
Consolidated Criteria for Reporting Qualitative Research (COREQ) (24)
Internet site: http://www.equator-network.org/reporting-guidelines/coreq/; or https://academic.oup.com/intqhc/article/19/6/349/1791966/Consolidated-criteria-for-reporting-qualitative (includes the 32-item checklist)
Applicability: Qualitative research studies
Methods section should include:
• The methodologic orientation that underpinned the study (e.g., grounded theory, discourse analysis, ethnography, phenomenology, or content analysis)
• Participant selection (e.g., purposive, convenience, consecutive, or snowball) and recruitment (e.g., face-to-face, telephone, mail, or e-mail)
• Number of participants and number refusing to participate or dropping out and reasons provided
• Data collection setting (e.g., home, clinic, or workplace) and presence of nonparticipants
• Characteristics of the sample population (e.g., demographic data)
• The questions, prompts, or guides provided to the data collectors; pilot-testing of the instrument, if applicable
• Number of repeat interviews conducted, if applicable
• Use of audio or visual recording to collect the data
• Field notes used
• Duration of the interviews or focus group
• Data saturation, if applicable
• Whether transcripts were returned to participants for comment or correction
• Number of data coders
• Description or graphic of the coding tree
• Identified themes and whether they were established in advance or derived from the data
• Software or programs used to manage the data
• Participant feedback on the findings, if applicable
STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) (25,36)
Internet site: https://www.strobe-statement.org/index.php?id=strobe-home (includes the STROBE statement, publications, and news); or https://www.strobe-statement.org/index.php?id=available-checklists (includes multiple checklists for applying the STROBE statement)
Applicability: Epidemiologic observational studies, including cohort, case-control, and cross-sectional studies
Methods section should include:
• The key elements of the study design
• The setting, locations, and relevant dates, including periods of recruitment, exposure, follow-up, and data collection
• Cohort study—eligibility criteria and the sources and methods of participant selection and methods of follow-up
• Case-control study—eligibility criteria and the sources and methods of case-patient ascertainment and control subject selection; rationale for the choice of case-patients and control subjects
• Cross-sectional study—eligibility criteria and the sources and methods of selection of participants
• Matched cohort studies—matching criteria and number of exposed and unexposed
• Matched case-control study—matching criteria and the number of control subjects per case-patient
• All outcomes, exposures, predictors, potential confounders, and effect modifiers; diagnostic criteria, if applicable
• For each variable of interest, sources of data and measurement details; comparability of assessment methods if more than one group included
• Any efforts to address potential sources of bias
• How the study size was derived
• How quantitative variables were handled in the analyses; if applicable, which groupings were chosen and why
• All statistical methods, including those used to control for confounding
• Any methods used to examine secondary groups and interactions
• How missing data were addressed
• For cohort study, how loss to follow-up was addressed
• For case-control study, if applicable, how matching of case-patients and control subjects was addressed
• For cross-sectional study, if applicable, analytical methods taking account of sampling strategy
• Sensitivity analyses, if applicable
Checklist for Reporting Results of Internet E-Surveys (CHERRIES) (27)
Internet site: http://www.jmir.org/2004/3/e34/; or http://www.jmir.org/article/viewFile/jmir_v6i3e34/2 (includes the CHERRIES checklist)
Applicability: Web-based surveys (e-mail, Internet, or intranet)
Methods section should include:
• Might not include a designated methods section, but does report
⟡ The target population and sampling frame
⟡ Whether the study has been approved by an institutional review board
⟡ The informed consent process (e.g., length of time needed to take the survey, which data were stored and where and for how long, who the investigator was, and the purpose of the study)
⟡ What mechanisms were used to protect against unauthorized access to any personal information that was collected or stored
⟡ How the survey was developed, including whether the usability and technical functionality of the electronic questionnaire had been tested before fielding the questionnaire
⟡ If the survey was open (i.e., open to every visitor of a site) or closed (i.e., open only to a sample population that the investigator knows; e.g., password-protected)
⟡ The type of e-survey (e.g., posted on an internet/intranet site or sent out through e-mail) and how the responses were captured
⟡ If the survey was mandatory (i.e., required of every visitor who wanted to enter the internet site) or voluntary
⟡ If any incentives were offered (e.g., monetary or prizes) for completing the survey
⟡ The timeframe for data collection
⟡ If survey items were randomized or alternated to prevent biases
⟡ If adaptive questioning (i.e., certain items conditionally displayed on the basis of responses to other items) was used
⟡ The number of questionnaire items per page and over how many pages the questionnaire was distributed
⟡ Whether respondents were able to review and change their answers (e.g., through a back button or a review step)
⟡ How unique visitors were determined (e.g., on the basis of internet provider addresses or cookies or both)
⟡ Whether any methods (e.g., weighting of items or propensity scores) were used to adjust for the non-representative sample
REporting of studies Conducted using Observational Routinely-collected health Data (RECORD) (28)
Internet site: http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1001885#sec002; or http://journals.plos.org/plosmedicine/article/figure?id=10.1371/journal.pmed.1001885.t001 (includes the RECORD checklist)
Applicability: Studies using health data routinely collected for administrative and clinical purposes rather than research, including registry studies
Methods section should include:
• The key elements of the study design
• The setting, locations, and relevant dates, including periods of recruitment, exposure, follow-up, and data collection
• The study population selection (e.g., codes or algorithms used to identify subjects); if not possible, provide an explanation
• References to any validation studies of the codes or algorithms used to select the population
• Links to databases and a flow diagram to illustrate the data linkage process
• Complete list of codes and algorithms used to classify exposures, outcomes, confounders, and effect modifiers; if not possible, provide an explanation
• For each variable of interest, sources of data and measurement details; comparability of assessment methods if more than one group included
• Any efforts to address potential sources of bias
• How the study size was derived
• How quantitative variables were handled in the analyses; if applicable, which groupings were chosen and why
• All statistical methods, including those used to control for confounding
• Any methods used to examine secondary groups and interactions
• How missing data were addressed
• Sensitivity analyses, if applicable
• A description of the extent to which investigators had access to the database population used to create the study population
• Information regarding the data cleaning methods
• A statement regarding whether the study included person-level, institution-level or other data linkage across ≥2 databases, including linkage and quality-evaluation methods used
Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) (33,37)
Internet site: http://www.equator-network.org/reporting-guidelines/prisma/; or http://www.equator-network.org/?post_type=eq_guidelines&eq_guidelines_study_design=0&eq_guidelines_clinical_specialty=0&eq_guidelines_report_section=0&s=PRISMA+extension&btn_submit=Search+Reporting+Guidelines (includes the PRISMA extensions)
Applicability: Systematic reviews and meta-analyses of observational studies or randomized controlled trials
Methods section should include:
• If and where a study protocol can be accessed (e.g., an Internet address), and if available, study registration information, including registration number
• Study characteristics (e.g., length of follow-up) and report characteristics (e.g., years considered, language, or publication status) used as criteria for eligibility, giving rationale
• All information sources (e.g., databases with dates of coverage or contact with study authors to identify additional studies) in the search and date last searched
• Full electronic search strategy for at least one database, including any limits used, such that it can be repeated
• Process for selecting studies (i.e., screening, eligibility, included in systematic review, and if applicable, included in the meta-analysis)
• Method of data extraction from reports (e.g., piloted forms, independently, or in duplicate) and any processes for obtaining and confirming data from investigators
• All variables and their definitions for which data were sought (e.g., funding sources) and any assumptions and simplifications made
• Methods used for assessing risk of bias of individual studies, including specification of whether this was done at the study or outcome level, and how this information is to be used in any data synthesis
• The principal summary measures (e.g., risk ratio or difference in means)
• Methods of handling data and combining results of studies, if done, including measures of consistency for each meta-analysis
• Any assessment of risk of bias that might affect the cumulative evidence (e.g., publication bias or selective reporting within studies)
• Methods of additional analyses (e.g., sensitivity or secondary group analyses or meta-regression), if done, indicating which were pre-specified
Meta-analysis Of Observational Studies in Epidemiology (MOOSE) (34)
Internet site: http://jamanetwork.com/journals/jama/fullarticle/192614 (includes the checklist)
Applicability: Meta-analyses of observational studies
Methods section should include:
• Description of relevance or appropriateness of studies assembled for assessing the hypothesis to be tested
• Rationale for the selection and coding of data (e.g., sound clinical principles or convenience)
• Documentation of how data were classified and coded (e.g., multiple raters, blinding, and interrater reliability)
• Assessment of study quality, including blinding of quality assessors; stratification, or regression on possible predictors of study results
• Assessment of heterogeneity
• Description of statistical methods (e.g., complete description of fixed- or random-effects models, justification of whether the chosen models account for predictors of study results, dose-response models, or cumulative meta-analysis) in sufficient detail to be replicated
• Provision of appropriate tables and graphics

Acknowledgments

The authors would like to thank Tom Lang for very helpful comments on an earlier draft.

Funding: None.


Footnote

Provenance and Peer Review: This article was commissioned by the Guest Editor (Thomas A. Lang) for the series “Publication and Public Health” published in Journal of Public Health and Emergency. The article has undergone external peer review.

Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at http://dx.doi.org/10.21037/jphe.2017.12.01). The series “Publication and Public Health” was commissioned by the editorial office without any funding or sponsorship. The authors have no other conflicts of interest to declare.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. Cohen J. AIDS vaccine trial produces disappointment and confusion. Science 2003;299:1290-1. [Crossref] [PubMed]
  2. EQUATOR Network. Enhancing the QUAlity and Transparency Of health Research. Available online: https://www.equator-network.org/
  3. Desenclos JC, Vaillant V, Delarocque Astagneau E, et al. Principles of an outbreak investigation in public health practice. Med Mal Infect 2007;37:77-94. [Crossref] [PubMed]
  4. Outbreak reporting guide -CCDR: Volume 41-04, April 2, 2015. Available online: https://www.canada.ca/en/public-health/services/reports-publications/canada-communicable-disease-report-ccdr/monthly-issue/2015-41/ccdr-volume-41-04-april-2-2015/ccdr-volume-41-04-april-2-2015-1.html
  5. Stone SP, Kibbler CC, Cookson BD, et al. The ORION statement: guidelines for transparent reporting of outbreak reports and intervention studies of nosocomial infection. Lancet Infect Dis 2007;7:282-8. [Crossref] [PubMed]
  6. World Health Organization. Health topics: public health surveillance. Geneva, Switzerland: WHO; 2017. Available online: http://www.who.int/topics/public_health_surveillance/en/
  7. German RR, Lee LM, Horan JM, et al. Updated guidelines for evaluating public health surveillance systems: recommendations from the Guidelines Working Group. MMWR Recomm Rep 2001;50:1-35; quiz CE1-7.
  8. Groseclose SL, German RR, Nsubuga P. Chapter 8. Evaluating public health surveillance. In: Lee LM, Teutsch SM, Thacker SB, et al., editors. Principles and practice of public health surveillance. 3rd ed. New York, NY: Oxford University Press, 2010:166-97.
  9. Centers for Disease Control and Prevention. A framework for program evaluation. Atlanta, GA: US Department of Health and Human Services, CDC; 2017. Available online: https://www.cdc.gov/eval/framework/index.htm
  10. Des Jarlais DC, Lyles C, Crepaz N, et al. Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: the TREND statement. Am J Public Health 2004;94:361-6. [Crossref] [PubMed]
  11. Scott SD, Albrecht L, O’Leary K, et al. Systematic review of knowledge translation strategies in the allied health professions. Implement Sci 2012;7:70. [Crossref] [PubMed]
  12. Portell M, Anguera MT, Chacón-Moscoso S, et al. Guidelines for reporting evaluations based on observational methodology. Psicothema 2015;27:283-9. [PubMed]
  13. Pawson R, Tilley N. Realist evaluation. London: Sage Publications Limited, 1997.
  14. Mirzoev T, Etiaba E, Ebenso B, et al. Study protocol: realist evaluation of effectiveness and sustainability of a community health workers programme in improving maternal and child health in Nigeria. Implement Sci 2016;11:83. [Crossref] [PubMed]
  15. Wong G, Westhorp G, Manzano A, et al. RAMESES II reporting standards for realist evaluations. BMC Med 2016;14:96. [Crossref] [PubMed]
  16. University of Oxford/Nuffield Department of Primary Care Health Sciences. The RAMESES projects. Oxford, UK: University of Oxford, 2013. Available online: http://www.ramesesproject.org/Home_Page.php
  17. Husereau D, Drummond M, Petrou S. Consolidated health economic evaluation reporting standards (CHEERS)—explanation and elaboration: a report of the ISPOR Health Economic Evaluation Publication Guidelines Good Reporting Practices Task Force. Value Health 2013;16:231-50. [Crossref] [PubMed]
  18. Kramer MS, Guo T, Platt RW, et al. Infant growth and health outcomes associated with 3 compared with 6 mo of exclusive breastfeeding. Am J Clin Nutr 2003;78:291-5. [PubMed]
  19. Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet 2001;357:1191-4. [Crossref] [PubMed]
  20. The CONSORT Group. The CONSORT extensions. Ottawa, ON: The CONSORT Group. Available online: http://www.consort-statement.org/extensions
  21. The CARE Group. CARE case report guidelines. Oxford, UK: The Care Group. Available online: http://www.care-statement.org/
  22. Hudelson PM. Qualitative research for health programmes. Geneva, Switzerland: World Health Organization, 1994. Available online: http://apps.who.int/iris/bitstream/10665/62315/1/WHO_MNH_PSF_94.3.pdf
  23. O’Brien BC, Harris IB, Beckman TJ, et al. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med 2014;89:1245-51. [Crossref] [PubMed]
  24. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care 2007;19:349-57. [Crossref] [PubMed]
  25. von Elm E, Altman DG, Egger M, et al. The strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. PLoS Med 2007;4:e296 [Crossref] [PubMed]
  26. Bennett C, Khangura S, Brehaut JC, et al. Reporting guidelines for survey research: an analysis of published guidance and reporting practices. PLoS Med 2010;8:e1001069 [Crossref] [PubMed]
  27. Eysenbach G. Improving the quality of Web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES). J Med Internet Res 2004;6:e34 [Crossref] [PubMed]
  28. Benchimol EI, Smeeth L, Guttmann A, et al. The REporting of studies Conducted using Observational Routinely-collected health Data (RECORD) Statement. PLoS Med 2015;12:e1001885 [Crossref] [PubMed]
  29. Olson SH, Voigt LF, Begg CB, et al. Reporting participation in case-control studies. Epidemiology 2002;13:123-6. [Crossref] [PubMed]
  30. Crawford JR, Garthwaite PH, Porter S. Point and interval estimates of effect sizes for the case-controls design in neuropsychology: rationale, methods, implementations, and proposed reporting standards. Cogn Neuropsychol 2010;27:245-60. [Crossref] [PubMed]
  31. Kunutsor SK, Whitehouse MR, Blom AW, et al. Low serum magnesium levels are associated with increased risk of fractures: a long-term prospective cohort study. Eur J Epidemiol 2017;32:593-603. [Crossref] [PubMed]
  32. Moher D, Cook DJ, Eastwood S, et al. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of reporting of meta-analyses. Lancet 1999;354:1896-900. [Crossref] [PubMed]
  33. Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med 2009;6:e1000100 [Crossref] [PubMed]
  34. Stroup DF, Berlin JA, Morton SC, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. JAMA 2000;283:2008-12. [Crossref] [PubMed]
  35. University of Oxford. PRISMA: transparent reporting of systematic reviews and meta-analyses; extensions. Oxford, UK: University of Oxford; 2015. Available online: http://www.prisma-statement.org/Extensions/Default.aspx
  36. STROBE Statement—checklist of items that should be included in reports of observational studies. Available online: https://www.strobe-statement.org/fileadmin/Strobe/uploads/checklists/STROBE_checklist_v4_combined.pdf
  37. Moher D, Liberati A, Tetzlaff J, et al. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009;6:e1000097 [Crossref] [PubMed]
doi: 10.21037/jphe.2017.12.01
Cite this article as: Stroup DF, Smith CK, Truman BI. Reporting the methods used in public health research and practice. J Public Health Emerg 2017;1:89.
