
Introduction

There has been a recent growth of practitioner activity and scientific research examining interventions to manage and prevent work-related stress and psychosocial risks at work. The evaluation of such programmes and strategies is of central importance to understanding: are these practical solutions effective, and if so, why and how? This article outlines the various facets of intervention evaluation and provides a reflective discussion of the theoretical and pragmatic challenges associated with evaluation, from both a scientific and an organisational perspective.

Intervention: an introduction and background

Work-related psychosocial risks have been identified as one of the major contemporary challenges for occupational health and safety, and are linked to workplace problems such as work-related stress, workplace violence and harassment. In recent years there has been a growing movement at European, national and organisational levels to develop measures to effectively manage and prevent psychosocial risks [1][2][3][4].

Broadly, psychosocial risk management interventions are designed to target the source of, response to, or effects of work-related psychosocial risks and work-related stress. Traditionally, such interventions have been distinguished by their orientation: organisational and task/job level versus individual level and, more recently, policy/legislative level [5][6]. Interventions for work-related stress can therefore be targeted at both the enterprise level and the policy/legislative level. It is important to highlight that the current article focuses primarily on methodological and pragmatic issues pertaining to intervention evaluation at the enterprise level. A more commonly used distinction, however, is made between the stages of prevention and their associated targets of change, namely primary, secondary and tertiary level interventions. Primary interventions attempt to tackle the source of the work-related problem or stressor. Secondary interventions attempt to strengthen employees’ ability to cope with exposure to these stressors, whilst tertiary interventions offer remedial support for the problems that have already been caused by work-related stress [6].

The article on “Interventions for Health and Wellbeing” provides a more in-depth discussion of these different types/categories of interventions, with a concentrated focus on issues surrounding their design and implementation. In addition, the article entitled “Managing psychosocial risks: Drivers and barriers” provides a comprehensive discussion of some of the key implementation and procedural challenges associated with psychosocial risk management. The current article will not replicate these discussions, but will instead provide a concise overview and discussion of the theoretical, empirical and pragmatic considerations relating to the evaluation of interventions for health and wellbeing in the workplace.

Importance of intervention evaluation

There is increasing government guidance and legislation that requires organisations to assess and, in turn, manage work-related stress and psychological wellbeing [7]. Increasingly, employers want to know: what can I do to manage these issues, and what will work? The evaluation of interventions is therefore of central importance to both researchers and practitioners in order to build a collective knowledge of what needs to be done to effectively tackle and manage work-related stress and psychosocial risks in the workplace.

However, despite the burgeoning literature and overall growth of practitioner activity in the domain of psychosocial risk management [8], the relative effectiveness of such programmes and associated measures has been difficult to assess and determine [7][9][10]. This is, in part, due to pervasive methodological deficiencies found within the relevant research and the lack of adequate systematic evaluations [10][11][12][13].

In 1979, Newman and Beehr [14] conducted one of the first comprehensive reviews of personal and organizational strategies for handling job stress. This early review concluded that the effectiveness of such strategies could not be accurately assessed because methodologically reliable evaluative research was lacking. Van der Hek and Plomp [11] later reviewed 342 scientific papers on stress management interventions and found that only a small proportion (37 articles) referred to some kind of evaluation, of which seven were ‘evaluated’ based solely on anecdotal evidence. Many proponents and experts in the field have critiqued the knowledge base in stress prevention and management as unsatisfactory and ‘piecemeal’ in nature [8][13], highlighting the need for more evidence-based measures and prevention solutions and, in turn, for more evaluative intervention research. Such research can provide an enhanced evidence base and theoretical foundation for researchers, practitioners and policy makers to understand:

  • Which programme types and components are effective, and which are not?
  • Why do certain components work, and what are the mechanisms that are involved?
  • What are the intended and unintended side effects?
  • What are the costs and benefits of implemented practical solutions for companies?
  • What are the stimulating and obstructing factors to the successful design and implementation of interventions?

Evaluating interventions: methodological challenges and considerations

The following sections aim to provide the reader with a concise overview of some of the key methodological issues and considerations that relate to evaluating the effectiveness of interventions for work-related psychosocial issues and stress.

Evaluative intervention research design

Pre and post design

An intuitively simple way of determining whether an intervention has been effective is to assess a group of people before the intervention (pre-intervention) and again after it (post-intervention). To assess whether the intervention made a measurable change, the measurements taken before and after the intervention are statistically compared. If a significant positive change is observed, this is viewed as an indication of the effectiveness of the intervention. This research design is referred to as a one-group pre-intervention versus post-intervention design [15]. See Figure 1 for a graphical representation of this evaluation design.

Figure 1. Graphical representation of a one-group pre-intervention versus post-intervention design [15]
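
To make the logic concrete, the following is a minimal sketch (in Python) of how pre- and post-intervention scores from such a design might be compared. The scores and the choice of a paired t-test are illustrative assumptions, not part of any specific study.

```python
# A minimal sketch of a one-group pre- versus post-intervention comparison.
# The scores below are illustrative placeholders, not real study data.
import numpy as np
from scipy import stats

# Self-reported stress scores for the same ten employees, before and after
# the intervention (higher = more stress).
pre = np.array([22, 19, 25, 30, 18, 27, 21, 24, 29, 20])
post = np.array([20, 18, 23, 26, 17, 25, 20, 22, 27, 19])

# A paired t-test compares each employee's pre score with their own post score.
t_stat, p_value = stats.ttest_rel(pre, post)
mean_change = (post - pre).mean()

print(f"Mean change: {mean_change:.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
# A significant reduction is consistent with an effect, but without a control
# group it cannot be attributed to the intervention alone.
```
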
Practitioners typically use a pre- and post-intervention design [7]. Although this research design is attractive due to its simplicity, it has a key methodological limitation that restricts the ability to answer the question: was the intervention effective? More specifically, this research design does not use a control or comparison group. Indeed, the vast majority of evaluations of occupational stress interventions do not use a comparison or control group [9][16].

Why is a control/comparison group important? Without a control or comparison group, it is not possible to rule out alternative explanations for observed changes through the design of the study alone. For example, it may be difficult to ascertain whether an observed change was due to the intervention, to concurrent changes in the organisation (such as a merger or downsizing), or to other influences such as the Hawthorne effect. The “Hawthorne effect” [17] is a form of reactivity whereby subjects improve or modify the behaviour being measured simply because they know they are being studied or watched, and not in response to any particular manipulation or intervention. In short, without information from a control/comparison group it is not possible to ascertain whether changes observed following the intervention are the result of the intervention or of other concurrent changes within the organisation or beyond.

Randomised controlled trial: the gold standard?

A classic example of an experimental intervention research design is the Randomised Controlled Trial (RCT). RCTs are often viewed by the scientific community as the ‘gold standard’ in intervention research design [17][18]. The key features of an RCT are the use of a control group and an intervention group, and the use of randomisation (i.e., randomly allocating participants to the intervention group or the control group). See Figure 2 for a graphical representation of this research design. Randomisation has the methodological advantage of preventing selection bias in the sample: participants cannot self-select into the intervention or comparison group. Although there are clear methodological advantages to using a randomisation process, it is important to consider that organisations are not laboratories; randomly allocating workers to either an intervention or control group may therefore raise both practical and ethical considerations and concerns. Firstly, is it practical, or even possible, to randomly allocate workers sharing a workplace to either experimental group whilst keeping these groups distinct and separate (in order to avoid contamination effects between the groups)? More broadly, when considering the use of a control group (whether randomised or not), it is important to ask: is it ethical to randomly allocate some workers to receive an intervention whilst others are denied intervention support?

Figure 2. Graphical representation of a Randomised Controlled Trial intervention design [15]
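
The sketch below illustrates the core logic of an RCT as described above: random allocation to an intervention and a control group, followed by a between-group comparison. All names, numbers and the choice of test are hypothetical.

```python
# Illustrative sketch of RCT logic: random allocation to groups, then a
# comparison of change scores between them. All data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
employees = [f"worker_{i:02d}" for i in range(40)]

# Random allocation: the first half of a random shuffle forms the intervention
# group, the rest the control group (no self-selection into either group).
shuffled = list(rng.permutation(employees))
intervention_group, control_group = shuffled[:20], shuffled[20:]

# Hypothetical change scores (post minus pre) on a wellbeing measure.
change = {worker: rng.normal(loc=2.0, scale=3.0) for worker in intervention_group}
change |= {worker: rng.normal(loc=0.0, scale=3.0) for worker in control_group}

# Between-group comparison: did the randomly allocated intervention group
# change more, on average, than the control group?
t_stat, p_value = stats.ttest_ind(
    [change[w] for w in intervention_group],
    [change[w] for w in control_group],
)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```
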
Bearing this in mind, although experimental designs such as RCTs have clear methodological advantages and can therefore yield the highest degree of causal inference, a recent discussion within the literature has emerged postulating that this traditional scientific paradigm may be ill-suited to the evaluation of organisational-level interventions [18]. This position argues that organisations and organisational life are complex, dynamic and ever-changing and thus do not adequately accommodate the tenets of the natural science paradigm [8][18][19]. Indeed, Ovretweit (p. 99) [20] suggests that “Traditional experimental evaluation design is not well suited to investigating social systems or the complex way in which interventions work with subjects or their environment”. Consequently, a broader framework for evaluating interventions, in particular organisational-level interventions, is recommended; such a framework may yield a greater breadth and wealth of information regarding the effectiveness of these types of interventions [18]. The use of a quasi-experimental research design may be a useful alternative for evaluating interventions in organisations, particularly organisational-level interventions.

Quasi-experimental research design

If experimental intervention research designs are not well suited to evaluating organisational-level interventions, what is the alternative? Quasi-experimental research designs, in contrast, do not use random allocation of individual participants to control/comparison and intervention groups (see Figure 3), since, within an organisational context, this is generally not possible and potentially unethical, as discussed above in relation to RCTs. For example, one department may be assigned to the intervention group and another department assigned to a control group (which may receive the intervention at a later date – a ‘waiting list’ group). In these designs the researcher retains a certain degree of control, which aids in drawing causal inferences. More specifically, if there are links between exposure to the intervention and change in measures of occupational health, then it is plausible that the intervention is driving those changes [7]. It is important to note that there are a number of quasi-experimental designs, each with its own methodological strengths and weaknesses.

 
Figure 3. Quasi-experimental design [7]
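
As a hedged illustration of how such a design might be analysed, the sketch below compares the pre-to-post change in an intervention department against the change in a ‘waiting list’ comparison department (a simple difference-in-differences of group means); all figures are invented.

```python
# Mean job-satisfaction scores per department, measured before and after the
# intervention period. All values are invented for illustration.
scores = {
    "intervention_dept": {"pre": 3.1, "post": 3.6},
    "waiting_list_dept": {"pre": 3.2, "post": 3.3},
}

# Change over time within each department.
change_intervention = scores["intervention_dept"]["post"] - scores["intervention_dept"]["pre"]
change_comparison = scores["waiting_list_dept"]["post"] - scores["waiting_list_dept"]["pre"]

# Difference-in-differences: the improvement in the intervention department
# over and above the change seen in the comparison department.
did_estimate = change_intervention - change_comparison

print(f"Intervention department change: {change_intervention:+.2f}")
print(f"Comparison department change:   {change_comparison:+.2f}")
print(f"Difference-in-differences estimate: {did_estimate:+.2f}")
```
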

Outcome measures for assessing effectiveness

The role of outcome measures in intervention evaluation is of central importance, as judgements about the effectiveness of an intervention depend on the criteria used to evaluate it. For example, consider an intervention that is intended to decrease absence levels, which it does not, but which does increase job satisfaction: does this make the intervention a failure? Bearing this in mind, prominent authorities in the field emphasise the need for evaluations to include a wide variety of outcome measures, including subjective and objective measures of both individual-level variables (e.g., employee satisfaction, job stressors, performance and health status) and organisational-level variables (e.g., absenteeism) [6][12][21]. The most frequently used measures for intervention outcomes can be grouped into five distinct categories: psychological measures, physiological measures, behavioural measures, measures of physical health/disease and organisational measures.

  • Psychological outcome variables include measures of psychological health (including anxiety, depression and burnout), as well as measures of mood and attitudes (such as motivation, job satisfaction, intention to quit the organisation and self-efficacy). These are usually assessed using self-report questionnaires. Measures of perceived work characteristics are also used in intervention research, often twice: first to identify a problem, and second to evaluate whether the intervention has been effective.
  • Physiological outcome measures include blood pressure, blood hormone levels, skin responses and muscular tension. Due to the practicalities of gathering this type of data, as well as the ethical complications that may arise as a result, these measures are less commonly used in intervention research.
  • Behavioural outcome measures include exercise levels, sleeping patterns and coping behaviours that are hazardous to health, such as alcohol consumption and smoking.
  • Measures of physical health are usually collected through the use of a self-report questionnaire, such as self-reported health [7].
  • Finally, organisational outcomes include various measures, such as absence levels, employee turnover, job performance and accidents or ‘near miss’ accidents [7].

Given the wide range of potential outcome measures that can be used, many organisations and practitioners may wonder: which measure should I use? The theory underpinning an intervention, or its primary aim/target, usually informs the measure or set of measures selected [7]. For example, if an intervention has been designed to tackle problems with burnout at work, a measure of employee burnout would be included in the study. It must be noted that the vast majority of the measures outlined above aim to identify and detect problems, rather than measure positive aspects of wellbeing, which might be viewed as a weakness in the current literature and knowledge base [7].

Follow-up periods

The use of follow-up assessments can help to identify when the effect of the intervention becomes apparent, and whether that effect is maintained and sustained over the long term [7]. In general, the follow-up periods used in occupational stress intervention evaluations have been criticised as being too short [12]. Van der Klink and colleagues [22] reviewed 48 intervention studies and observed that the average length of post-intervention assessment was 9 weeks for interventions with an individual focus and 38 weeks for interventions with an organisational focus; both shorter than the recommended follow-up periods of 12 weeks for individual-level interventions [22] and 2 years for organisational-level interventions [23]. There is no sound reason to assume that all outcome measures of wellbeing and performance will demonstrate significant changes at the same rate or after a specific time following the intervention. This may be particularly true for organisational-level interventions, where a reduction in or elimination of exposure to sources of stress may take a significant period to produce a measurable difference in employee self-reported wellbeing or in organisational-level outcomes. Therefore, long-term follow-up is important and should be viewed as a central consideration of intervention evaluation.
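
As a simple illustration of this point, the sketch below tracks a hypothetical group mean across several follow-up waves to show when a change from baseline appears and whether it is sustained; the waves and values are assumptions for illustration only.

```python
# Hypothetical group means on a wellbeing scale at baseline and at several
# follow-up waves (values are placeholders, not real data).
baseline = 3.0
follow_ups = {"12 weeks": 3.2, "6 months": 3.5, "1 year": 3.6, "2 years": 3.5}

for wave, mean_score in follow_ups.items():
    change = mean_score - baseline
    print(f"{wave:>8}: mean = {mean_score:.1f}, change from baseline = {change:+.1f}")
# An effect that only appears (or fades) at later waves would be missed by a
# single short-term follow-up assessment.
```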

Moving beyond solely effect evaluation: intervention process evaluation

“Unfortunately, studies of job stress interventions have, by and large, focussed on the what and why (i.e., the content) to the exclusion of the how (i.e., the process)” (p. 340) [21]. The evidence for primary interventions has traditionally been, and to a degree continues to be, mixed and weak. Typically, intervention evaluations focus exclusively on effect evaluation, with limited attention to the mechanisms and process issues driving or underpinning the change process. As a result, negative or small intervention effects are typically attributed to a failure of the theory or the intervention itself [24]: i.e., it was a bad or ineffective intervention. In fact, only a limited number of intervention studies aim to distinguish whether observed small or negative intervention effects are the result of a failure of theory or of poor intervention implementation [25]. In short, because research tends to focus only on examining the effects of interventions (effect evaluation), the reasons for the failure and success of interventions are often poorly understood [7].

Process evaluation focuses on evaluating the mechanisms of change, rather than the outcome of change (effect evaluation) [7][26][27]. Why is process evaluation important? A deeper understanding of the mechanisms of change may aid in understanding inconsistencies in the outcomes of interventions, and may help answer the question: if the intervention worked here, why did it not work there? [7]. A study by Nielsen and colleagues [26] examined longitudinal data, with added process measures, from 11 intervention projects in Denmark, and found that participants’ appraisals of the intervention activities fully mediated the relationship between exposure to the intervention and the outcome measures. Consequently, a growing body of research suggests that how a participant appraises the intervention, and how it is implemented, explains a large proportion of the observed intervention outcomes [26][28]. Process evaluation is therefore instrumental in distinguishing between implementation failures and failures of theory [26][27]. In addition, integrating process evaluation into intervention evaluation can strengthen studies that do not have a control group, by identifying differences in intervention exposure within an intervention group that can be used to shape the analysis of intervention outcomes [7].
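
The mediation logic described above can be sketched as a set of regressions estimating how much of an intervention's total effect on an outcome passes through participants' appraisal of the intervention. The data, variable names and the simple product-of-coefficients approach below are illustrative assumptions; a real analysis would use a dedicated mediation model with appropriate significance testing.

```python
# Sketch of a simple mediation check: exposure -> appraisal -> wellbeing.
# All data are simulated placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(seed=2)
n = 200

exposure = rng.integers(0, 2, size=n).astype(float)                 # 0 = control, 1 = intervention
appraisal = 0.8 * exposure + rng.normal(size=n)                     # process measure (mediator)
wellbeing = 0.7 * appraisal + 0.1 * exposure + rng.normal(size=n)   # outcome


def coefficients(y, *predictors):
    """Ordinary least-squares coefficients of y on the predictors (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs[1:]


(total_effect,) = coefficients(wellbeing, exposure)                    # exposure -> outcome
(a_path,) = coefficients(appraisal, exposure)                          # exposure -> appraisal
b_path, direct_effect = coefficients(wellbeing, appraisal, exposure)   # appraisal & exposure -> outcome

print(f"Total effect:    {total_effect:.2f}")
print(f"Indirect effect: {a_path * b_path:.2f} (via appraisal)")
print(f"Direct effect:   {direct_effect:.2f}")
```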

It is clear that process evaluation is important as part of a comprehensive evaluation framework, but this raises the question: how do I conduct process evaluation? Process evaluation involves collecting information on the intervention activities and the context within which the intervention was conducted. These data can come from various sources: organisational records, the perceptions of the recipients, and the accounts of those delivering the intervention [7]. Combining process evaluation with effect evaluation can strengthen the validity of intervention research findings.

Economic evaluation

An analysis of the cost effectiveness of interventions should be an integral component of intervention evaluation [11]. However, the evaluation of the cost effectiveness of interventions has been neglected in both practice and research [8]. A recent review of stress interventions by LaMontagne and colleagues [16] found that only a tenth of the studies reviewed reported some form of economic evaluation, and emphasised cost-benefit analysis as a research priority and a current gap in the literature. This evaluative information is critical in order to encourage organisations to move beyond occupational safety and health (OSH) compliance towards best practice. It is important to note that several self-assessment guides and tools have been developed to help organisations obtain a better understanding of the estimated financial cost of workplace stress to them [29][30][31][32]. From an organisational perspective, these tools and guidance may help to assess and monitor an intervention economically, with the aim of quantifying its monetary impact.
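
As a rough illustration of the kind of calculation such tools support, the sketch below sets estimated intervention costs against estimated benefits to produce a net benefit and a benefit-cost ratio; all figures are hypothetical placeholders.

```python
# Illustrative cost-benefit sketch for an intervention; all figures are
# hypothetical placeholders an organisation would replace with its own data.
intervention_costs = {
    "external consultant": 15_000.0,
    "staff time in workshops": 8_000.0,
    "materials and administration": 2_000.0,
}

# Estimated annual benefits attributed to the intervention, e.g. from reduced
# sickness absence and turnover (in practice these attributions need care).
estimated_benefits = {
    "reduced sickness absence": 20_000.0,
    "reduced staff turnover": 12_000.0,
}

total_cost = sum(intervention_costs.values())
total_benefit = sum(estimated_benefits.values())
net_benefit = total_benefit - total_cost
benefit_cost_ratio = total_benefit / total_cost

print(f"Total cost:         {total_cost:,.0f}")
print(f"Total benefit:      {total_benefit:,.0f}")
print(f"Net benefit:        {net_benefit:,.0f}")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")
```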

Conclusion and summary

The current article has aimed to provide a methodologically driven discussion of approaches, techniques and considerations when evaluating interventions for work-related stress and associated psychosocial issues in organisations. Much of the discussion has focused on answering the questions: did the intervention work (effect evaluation), and why/how did it work (process evaluation)? However, it is also important to note that the results of intervention evaluation can, and arguably should, be used to review action plans and to inform how implemented strategies can be further improved and refined: i.e., what worked this time, what did not work, and how can we make it better and more effective next time? From an organisational perspective, this review process is intrinsically linked to the use of a continuous improvement cycle, and can facilitate continuous organisational learning and development.

References

[1] EUROFOUND – European Foundation for the Improvement of Living and Working Conditions, ‘Fourth European Working Conditions Survey’, Luxembourg, Office for Official Publications of the European Communities, 2007.

[2] WHO – World Health Organization, ‘Work Organization and Stress: Protecting Workers’ Health Series’, Geneva, World Health Organization, 2003.

[3] WHO – World Health Organization, ‘WHO Healthy workplace framework and model: background and supporting literature and practices’, WHO, Geneva, 2010.

[4] ILO – International Labour Organisation, ‘Global Strategy on Occupational Safety and Health’, Geneva, International Labour Organisation, 2004.

[5] Murphy, L.R., & Sauter, S., ‘Work organization interventions: Stage of knowledge and future directions’, Social and Preventive Medicine, Vol. 49, No. 2, 2004, pp. 79-86.

[6] Leka, S., Vartia, M., Hassard, J., Pahkin, K., Sutela, S., Cox, T., & Lindström, K., ‘Best Practice in Work-related Stress and Workplace Violence & Bullying Interventions’, In S. Leka (Ed.), PRIMA-EF, Nottingham, IWHO publications, 2010, pp. 140-173.

[7] Randall, R., & Nielsen, K., ‘Interventions to promote well-being at work’, In S. Leka & J. Houdmont (Ed.), Occupational Health Psychology, West Sussex, United Kingdom, John Wiley & Sons Ltd, 2010, pp. 88-123.

[8] Kompier, M.A.J., & Kristensen, T.S., ‘Organizational work stress interventions in a theoretical, methodological, and practical context’, In J. Dunham (Ed.), Stress in the Workplace: Past, present, and future, London, Whurr, 2001, pp. 165-190.

[9] Cox, T., ‘Stress research and stress management: putting theory to work’, Sudbury, Health and Safety Executive Books, 1993.

[10] Cox, T., Griffiths, A., & Rial-Gonzalez, E., ‘Research on work-related stress’, Luxembourg, Office for Official Publications of the European Communities, 2000.

[11] Van der Hek, H., & Plomp, H.N., ‘Occupational stress management programmes: A practical overview of published effect studies’, Occupational Medicine, Vol. 47, 1997, pp. 133-141.

[12] Semmer, N.K., ‘Job stress interventions: Targets for change and strategies for attaining them’. In J.C. Quick and L.E. Tetrick (Eds), Handbook of Occupational Health Psychology, Washington, American Psychology Association, 2003, pp. 325-354.

[13] Briner, R.B., & Reynolds, S., ‘The costs, benefits, and limitations of organizational level stress interventions’, Journal of Organizational Behavior, Vol. 20, No. 5, 1999, pp. 647-664

[14] Newman, J.E., & Beehr, T.A., ‘Personal and organizational strategies for handling job stress: a review of research and opinion’, Personnel Psychology, Vol. 32, 1979, pp. 1-43.

[15] Katz, M.H., Evaluating Clinical and Public Health Interventions: A Practical Guide to Study Design and Statistics, Cambridge, Cambridge University Press, 2010.

[16] LaMontagne, A.D., Keegel, T., Louie, A.M.L., Ostry, A., & Landsbergis, P.A., ‘A systematic review of the job stress intervention evaluation literature: 1990-2005’, International Journal of Occupational and Environmental Health, Vol. 13, 2007, pp. 268-280.

[17] McCarney, R., Warner, J., Iliffe, S., van Haselen, R., Griffin, M., & Fisher, P., ‘The Hawthorne Effect: a randomised, controlled trial’, BMC Medical Research Methodology, Vol. 7, 2007, p. 30.

[18] Cox, T., Karanika, M., Griffiths, A., & Houdmont, J., ‘Evaluating organisational level work stress interventions: Beyond traditional methods’, Work & Stress, Vol. 21, No. 4, 2007, pp. 348-362.

[19] Griffiths, A., ‘Organizational interventions: Facing the limits of the natural science paradigm’, Scandinavian Journal of Work Environment and Health, Vol. 25, No. 6., 1999, pp. 589-596.

[20] Ovretweit, J., Evaluating Health Interventions, Open University Press, Philadelphia, 1998.

[21] Hurrell, J.J.Jr., & Murphy, L.R., ‘Occupational stress interventions’, American Journal of Industrial Medicine, Vol. 29, 1996, pp. 338-341.

[22] Van der Klink, J.J.L., Blonk, R.W.B., Schene, A.H., & van Dijk, F.J.H., ‘The benefits of interventions for work-related stress’, American Journal of Public Health, Vol. 91, No. 2, 2001, pp. 270-276.

[23] Parkes, K.R., & Sparkes, T.J., ‘Organizational interventions to reduce work stress: Are they effective? A review of the literature’, Norwich, Health and Safety Executive Books, 1998.

[24] Randall, R., Griffiths, A., & Cox, T., ‘Evaluating organizational stress-management interventions using adapted study designs’, European Journal of Work and Organisational Psychology, Vol. 14, 2005, pp. 23-41.

[25] Nielsen, K., Fredslund, H., Christensen, K.B., & Albertsen, K., ‘Success or failure? Interpreting and understanding the impact of interventions in four similar worksites’, Work & Stress, Vol. 20, No. 3, 2006, pp. 272-287.

[26] Nielsen, K., Randall, R., & Albertsen, K., ‘Participants’ appraisals of process issues and the effects of stress management interventions’, Journal of Organizational Behavior, Vol. 28, 2007, pp. 793-810.

[27] Saksvik, P.O., Nytrø, K., Dahl-Jorgensen, C., & Mikkelsen, A., ‘A process evaluation of individual and organisational level interventions’, Work & Stress, Vol. 16, No. 1, 2002, pp. 37-57.

[28] Nielsen, K., Randall, R., & Christensen, K.B., ‘Does training managers enhance the effects of implementing teamworking?’, Human Relations, Vol. 63, 2010, pp. 1719-1742.

[29] Brun, J.P., & Lamarche, C., ‘Assessing the Costs of Work Stress’, Université Laval, Québec, Canada, 2006.

[30] CIPD – Chartered Institute of Personnel and Development, ‘Building the Business Case for Managing Stress in the Workplace’, CIPD, London, 2008.

[31] Hoel, H., Sparks, K. and Cooper, C. L., ‘The Cost of Violence / Stress at Work and The Benefits of a violence / stress-free Working Environment’, Report Commissioned by the International Labour Organization (ILO) Geneva, 2001.

[32] Tangri, R., ‘What Stress Costs’, Chrysalis Performance Strategies Inc., Halifax, 2002.

Further reading

Randall, R., & Nielsen, K., ‘Interventions to promote well-being at work’, In S. Leka & J. Houdmont (Ed.), Occupational Health Psychology, West Sussex, United Kingdom, John Wiley & Sons Ltd, 2010, pp. 88-123.


Contributors

Juliet Hassard

Birkbeck, University of London, United Kingdom.

Tom Cox

Thomas Winski

Richard Graveling