Element 8: Evaluating CWHP Efforts

Collect data

Data collection tips:

  • The timing of data collection is critical. For programs targeting individual behaviour change, evaluating adherence over time is an important component of measuring impact. Most experience suggests that data should be collected twice after implementation: at six months and again at 12 months. Organizational change lags behind, and some effects may not be seen for several years.
  • There are ethical issues associated with some evaluation designs and methods, particularly when using previously collected data, such as insurance claims or attendance records, that were not originally gathered for the program’s purposes. Similarly, some employees may be hesitant to provide personal data if they think it might later be used against them, for example when promotions are being considered.
  • When selecting evaluation instruments for data collection, first look for those that already exist and have been shown to be valid and reliable. Ensure the information being sought through the tool will be accessible to the workplace being studied.

It’s not easy to undertake workplace health evaluation effectively. Workplaces are diverse in terms of workforce demographics, organizational structure and size, and the mix of issues and programs underway, which makes conventional health promotion evaluation strategies challenging to apply. The challenge is compounded if the committee is trying to evaluate impact across settings, from the workplace into the community and the employee’s home. One way to alleviate some of these challenges is to use a Participatory Action Research approach, which engages participants from across the diverse workplace population in shaping the evaluation plan: setting the research questions, identifying the most appropriate means of collecting data, analyzing the results and communicating them to others.

Communicating evaluation results

Once the data have been collected and analyzed appropriately, it’s time to interpret the results and communicate them to the various stakeholders.

Interpret and disseminate results

The format of the evaluation results will depend on the audience receiving them. The W.K. Kellogg Foundation offers the following suggestions in its Evaluation Handbook[1]:

Tips on Writing Effective Reports
  • Know who your audience is and what information they need.
  • Relate evaluation information to decisions.
  • Start with the most important information.
  • Develop concise reports by writing a clear abstract and starting each paragraph with the most important point.
  • Write short focused paragraphs.
  • Highlight important points.
  • Do not use professional jargon or difficult vocabulary.
  • Use active verbs.
  • Have your report edited, looking for unnecessary words and phrases.


  • Be creative and innovative in reporting evaluation findings. Use a variety of techniques such as visual displays, oral presentations, summary statements, interim reports and informal conversations.
  • Write and disseminate a complete evaluation report, including an executive summary and appropriate technical appendices.
  • Write separate executive summaries and popular articles using evaluation findings, targeted at specific audiences or stakeholder groups.
  • Write a carefully worded press release and have a prestigious office or public figure deliver it to the media.
  • Hold a press conference in conjunction with the press release.
  • Make oral presentations to select groups. Include demonstration exercises that actively involve participants in analysis and interpretation.
  • Construct professionally designed graphics, charts, and displays for use in reporting sessions.
  • Make a short video or audiotape presenting the results, for use in analysis sessions and discussions.
  • Stage a debate or advocate-adversary analysis of the findings in which opposing points of view can be fully aired.

Below is a typical outline for the presentation of evaluation results:

  1. Title
  2. Executive summary
  3. Introduction
  4. Program rationale and logic
  5. Description of the initiative/program
  6. Evaluation methods
  7. Research/findings
  8. Discussion
  9. Conclusion
  10. Acknowledgements
  11. References
  12. Appendices

Best practices for workplace mental health promotion evaluation

The World Health Organization offers the following best practices (adapted for mental health promotion) for the monitoring and evaluation of workplace health promotion programs[2]:

  1. Include outcome indicators at each level of evaluation: formative, process, intermediate and impact.
  2. Outcome indicators should be directly related to, and dependent on, intervention components and objectives. They should follow logically from the decisions made at each level of evaluation.
  3. An extensive process evaluation should always be included; qualitative information on program preparation and implementation will provide useful information on how to make the programs more successful.
  4. A measure to quantify the inside (worksite) or outside environment should be included. An increasing number of such instruments are currently being developed for research purposes.
  5. Use validated and shorter Internet or Intranet questionnaires to improve data management and to increase the response rate of subjects.
  6. Use innovative objective instruments to monitor and evaluate.
  7. The inclusion of an extensive and expensive set of biological indicators may not be necessary or feasible. In correspondence with program components and objectives, a relatively small set of feasible and less expensive biological indicators might be sufficient in practice.
  8. In combination with the continuous monitoring of sick leave at most worksites, regular (yearly) health check-ups of employees should likewise be incorporated in company health policy. Including the recommended small set of biological indicators and/or a questionnaire in a yearly check-up could:
    • give insight into long-term health changes;
    • automatically measure health changes due to newly implemented policy or intervention elements;
    • provide insight into efficacy of interventions so that health trends can be anticipated;
    • be utilised as a benchmark of the company’s health policy;
    • provide a continuous flow of data, making cost-benefit analysis achievable;
    • contribute to the employee’s perception of the commitment of the company to occupational health management.

Information:

Tools:

  • Factors to Consider When Deciding on an Evaluation Type (see the tables below). This tool, created by THCU, will help determine which type of evaluation is best suited to a particular program.
  • W.K. Kellogg Foundation Evaluation Handbook. This handbook provides a framework for thinking about evaluation as a relevant and useful program tool. http://www.wkkf.org/knowledge-center/Resources-Page.aspx?q=evaluation

Factors to Consider When Deciding on an Evaluation Type

Table 1: Commonly Used Qualitative Methods

Focus groups
  • Description: A semi-structured discussion with 8 to 12 stakeholders, led by a facilitator who follows an outline and manages group dynamics; proceedings are recorded.
  • Purpose: To gather in-depth information from a small number of stakeholders; to pre-test materials with a target audience; to develop a better understanding of stakeholder attitudes, opinions, and language; often used to prepare for a survey.
  • Strengths: Provides in-depth information; implementation and analysis require a minimum of specialized skills; can be inexpensive to implement.
  • Limitations: Participants influence each other; subjective; potential for facilitator bias; can be difficult to analyze; results are not quantifiable to a population.

In-depth interviews
  • Description: Telephone or in-person one-on-one interviews; the interviewer follows an outline but has flexibility; usually 10 to 40 are completed per ‘type’ of respondent.
  • Purpose: To investigate sensitive issues with a small number of stakeholders; to develop a better understanding of stakeholder attitudes, opinions, and language.
  • Strengths: Provides a confidential environment; eliminates peer influence; gives the interviewer an opportunity to explore unexpected issues; provides more detailed information than focus groups.
  • Limitations: More expensive to implement and analyze than focus groups; potential for interviewer bias; can be difficult to analyze; results are usually not quantifiable to a population.

Open-ended survey questions
  • Description: Structured questions on a telephone or e-mail survey that allow the respondent to provide a complete answer in their own words.
  • Purpose: To add depth to survey results; to further explore the reasons for answers to close-ended questions; for exploratory questions.
  • Strengths: Adds depth to quantitative data; generalizable to a population.
  • Limitations: Time-consuming to analyze properly; adds considerable time to the survey; not flexible.

Diaries
  • Description: Detailed account of aspects of a program; ongoing documentation by one or more stakeholders.
  • Purpose: Used primarily for process evaluation; puts other evaluation results in context.
  • Strengths: Captures information often not thought of before; very inexpensive to collect.
  • Limitations: Can be difficult and expensive to analyze; observations are subjective.

Table 2: Commonly Used Quantitative Methods

Surveys
  • Description: Completion of a structured questionnaire by many stakeholders within a relatively short time frame; can be completed by telephone, mail, fax or in person.
  • Purpose: To collect feedback that is quantifiable and generalizable to an entire population.
  • Strengths: Results are generalizable to an entire population; a standardized, structured questionnaire minimizes interviewer bias; a tremendous volume of information can be collected in a short period of time.
  • Limitations: Rarely provides a comprehensive understanding of the respondent’s perspective; can be very expensive; requires some statistical knowledge and other specialized skills to process and interpret results.

Process tracking forms/records
  • Description: Collection of process measures in a standardized manner; usually incorporated into a program routine.
  • Purpose: To document the process of a program; to identify areas for improvement.
  • Strengths: Can be incorporated into the normal routine; fairly straightforward to design and use; can provide very accurate, detailed process information.
  • Limitations: Can be seen as an extra burden on staff/volunteers; risk that forms/records will not be completed regularly or accurately.

Large data sets
  • Description: Accessing existing sources of research data for information about your population of interest.
  • Purpose: To position your program within a broader context; to monitor trends in your population of interest.
  • Strengths: Can be expensive or free to access; can provide accurate, well-researched information; can lead to networking and information-sharing opportunities.
  • Limitations: Minimal usefulness for evaluating your program; can be difficult to relate specifically to your program.

[1] W.K. Kellogg Foundation, “Evaluation Handbook,” http://www.wkkf.org/knowledge-center/resources/2010/W-K-Kellogg-Foundation-Evaluation-Handbook.aspx (accessed December 29, 2009).

[2] Luuk Engbers, “Monitoring and Evaluation of Worksite Health Promotion Programs – Current State of Knowledge and Implications for Practice,” World Health Organization (2008), http://www.who.int/dietphysicalactivity/Engbers-monitoringevaluation.pdf (accessed December 29, 2009).