There is increasing evidence from research in climate change and forest ecology that emergencies are becoming both more common and more complex (Liu, Stanturf & Goodrick 2010). The complexity of emergencies is increasing with greater use of technology and more multi-agency responses (Owen et al. 2013). This is occurring in a context of increasing scrutiny of decision-making, declining volunteer numbers and financial challenges within agencies (Owen et al. 2016, Canton-Thompson et al. 2008). To meet the increasing complexity of emergency management now and into the future, the capability of people to function in these challenging environments needs to be developed. This involves developing skills and tools to help people perform their roles more effectively. This paper looks at teamwork and how to effectively monitor teams during an emergency response.
As part of their role in managing emergencies, regional and state-level emergency managers monitor and adjust the activities of operational teams (Grunwald & Bearman, in press). This helps ensure that teams are functioning safely and efficiently. While team monitoring is seen as important (Conway 2016), it is often not done effectively. In many agencies there is little or no guidance on how to monitor teams from the position of operational oversight.
Effective teamwork is an essential component of an effective response to an emergency (AFAC 2013). However, from time to time emergency management teams may experience disruptions to their teamwork (Bearman et al. 2015). If these disruptions are not managed, the team's functioning and, ultimately, its operational performance will be impaired (Comfort et al. 2010, Bearman et al. 2015). Accepting that breakdowns will occur and need to be managed shifts the emphasis away from blaming individuals towards building systems designed to anticipate and manage errors and teamwork issues (Grunwald & Bearman, in press, Reason 1990). It is important, then, that leaders are able to identify disruptions to team performance at an early stage and take steps to resolve them.
A review of the team monitoring literature was conducted to identify methods that can be used to monitor teams from the position of operational oversight. A brief overview of the review is presented here to provide context for the development of the team monitoring tools.
Literature was accessed via online databases (Scopus and Google Scholar) that provide access to peer-reviewed journal articles. Articles on the topics of team performance, monitoring and assessment published between 2005 and 2015 were selected. The search based on these criteria yielded 195 peer-reviewed articles. In addition, 78 seminal papers published before 2005 were included. These key papers were included to clarify the origins of the measures and to contextualise the development of team monitoring.
The articles were narrowed down using specific inclusion and exclusion criteria. To be retained in the literature review, an article had to be in an area related to emergency management (such as aviation, healthcare or the military), had to report on a method that could be used by an external observer to monitor teams, had to include sufficient information to allow the method to be replicated, and could not be focused on internal monitoring by the team members themselves. This process yielded 64 articles, which were analysed using a thematic analysis technique to identify commonalities.
The literature review identified four key ways that a person who is not part of a team can monitor and adjust the activities of that team. These methods focus on different monitoring points of a team's functioning:
Team outputs focus on the outputs a team produces, such as incident action plans. For example, participants in a study by Grunwald and Bearman (in press) reported that if information coming from a team was missing, incomplete, duplicated or conflicted with their expectations, they would follow up to investigate whether there was a problem with that team.
Information flow is concerned with who is communicating with whom, and when. This approach is not concerned with the content of the communication but with when communication occurs and who it is between. For example, Patrick and colleagues (2006) examined patterns of team information flow during a simulated nuclear power plant emergency to identify areas to improve supervisory monitoring.
Linguistic markers are concerned with the non-mission-oriented components of team communication. For example, Fischer and colleagues (2007) found that teams that used communication with negative affect or exhibited a high level of disagreement performed more poorly in a simulated search-and-rescue task than other teams. A brief illustrative sketch of the information flow and linguistic marker ideas is given after these descriptions.
Communication, coordination and cooperation processes focus on the content of communication, the timing of contributions by team members and the shared attitudes and beliefs of the team. For example, Wilson and colleagues (2007) identified behavioural markers for the categories of communication, coordination and cooperation based on research on high-performing teams. If a team does not show evidence of adequate communication, coordination and cooperation processes, this is likely to indicate that there is a problem.
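To make the information flow and linguistic marker ideas more concrete, the short sketch below works through a hypothetical, highly simplified communication log. The roles, messages and marker words are invented purely for illustration and are not drawn from the tools or studies described in this paper; real linguistic-marker research relies on far more sophisticated coding of affect and disagreement than a simple word list.

```python
# Illustrative sketch only: a hypothetical, simplified communication log.
# The roles, messages and marker words are invented for this example.
from collections import Counter

comm_log = [
    {"time": "10:02", "from": "Ops Officer", "to": "Planning Officer",
     "text": "Sector 2 crews are in position."},
    {"time": "10:05", "from": "Planning Officer", "to": "Ops Officer",
     "text": "No, that can't be right, the plan said Sector 3."},
    {"time": "10:09", "from": "Ops Officer", "to": "Logistics Officer",
     "text": "We still need the water point confirmed."},
]

# Information flow: who is communicating with whom, ignoring message content.
flow = Counter((m["from"], m["to"]) for m in comm_log)
for (sender, receiver), n in flow.items():
    print(f"{sender} -> {receiver}: {n} message(s)")

# A member who never appears as sender or receiver may be being left out
# of the team's communication.
members = {"Ops Officer", "Planning Officer", "Logistics Officer", "Safety Officer"}
involved = {m["from"] for m in comm_log} | {m["to"] for m in comm_log}
print("Not involved in any communication:", members - involved)

# Linguistic markers: a very crude flag for disagreement or negative affect.
disagreement_words = {"no", "can't", "won't", "wrong", "disagree"}
for m in comm_log:
    words = {w.strip(".,!?").lower() for w in m["text"].split()}
    if words & disagreement_words:
        print(f"Possible disagreement at {m['time']}: {m['text']}")
```

Even at this toy scale the two monitoring points are visibly different: the flow summary ignores content entirely, while the marker check looks only at content.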
The four approaches to monitoring teams examine different levels of detail about team functioning. The most detailed and comprehensive is the communication, coordination and cooperation approach, which provides a detailed analysis of the team's behaviour based on a range of potential indicators. None of the other methods provides this level of detail. The team information flow, linguistic markers and team output approaches may reveal a problem in team functioning, but only at a fairly general level. For example, if one member of the team is being neglected in the team's communication, or if the team is exhibiting a high level of disagreement, this indicates a problem in team functioning but not necessarily what that problem is. In contrast, by focusing on the detailed level of team behaviours, the communication, coordination and cooperation approach can provide a nuanced understanding of what is occurring in that team.
The four approaches also differ in how easy they are to use. The level of detail provided by the communication, coordination and cooperation approach potentially makes it slow to use. Similarly, the linguistic markers and information flow approaches require a detailed analysis that may be slow to conduct. In contrast, the team outputs approach seems easy to integrate into the ongoing emergency management activities of senior officers, which potentially makes it fairly quick to use.
It is reasonable, then, to use a multiple-method approach to monitoring teams, with one tool that is quick to apply providing a check on the team and a second tool providing a detailed examination of team processes. Based on the literature review and informal discussions with end users involved in the development and testing of the tools, two methods of team monitoring were identified for further study: one from the team output approach (quick to apply) and one from the communication, coordination and cooperation approach (a detailed examination of team processes).
The first method is the Emergency Management Breakdown Aide Memoire (EMBAM) developed by Grunwald and Bearman (in press). This method is based on monitoring team outputs and networks for evidence of breakdowns. It was selected because it was the only team output method specific to emergency management identified in the literature review. EMBAM allows team outputs to be examined for missing information, conflicting expectations and inconsistency. If the information contains any of these issues, if something doesn't feel right, or if someone is not acting as one would expect, then the person monitoring the team is encouraged to investigate whether there are any issues interfering with team performance. The method also encourages people to make full use of their informal and formal networks to detect evidence that a team may not be functioning effectively. An example of the identification items in EMBAM is shown in Figure 1. EMBAM also contains five methods for resolving problems in team functioning.
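As a purely illustrative aside, the identification side of EMBAM can be thought of as a short set of yes/no prompts applied to a team's outputs. The sketch below shows one way such a checklist might be represented; the prompt wording is paraphrased from the description above rather than taken from the tool itself, and the responses are invented.

```python
# Illustrative sketch only: a paper-based identification checklist rendered as
# a simple data structure. Prompt wording is paraphrased, not from the tool.
embam_identification_prompts = [
    "Is any expected information from the team missing?",
    "Is information from the team incomplete or duplicated?",
    "Does information from the team conflict with your expectations?",
    "Does something about the team's outputs not feel right?",
    "Is someone not acting as you would expect?",
]

def flag_for_follow_up(responses):
    """Return the prompts answered 'yes'; any flagged prompt suggests
    investigating whether something is interfering with team performance."""
    return [prompt for prompt, answer in responses.items() if answer]

# Example use with invented responses from a hypothetical monitoring check.
responses = {prompt: False for prompt in embam_identification_prompts}
responses["Does information from the team conflict with your expectations?"] = True
print(flag_for_follow_up(responses))
```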
The second method of monitoring teams is known as Teamwork Behavioural Markers (TBM). TBM is a modified version of a set of teamwork behavioural markers developed by Wilson and colleagues (2007) to examine teamwork breakdowns in a military setting. It was selected because it provides clear and observable components of teamwork, is based on extensive research and would be expected to be applicable to emergency management. The tool provides a list of communication, coordination and cooperation behaviours that should be observed in well-performing teams. An example of the coordination items in TBM is presented in Figure 2. TBM provides some indication of what might be going wrong in a team and a language for talking about teamwork issues.
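Again purely for illustration, the sketch below shows the general structure of a behavioural-marker checklist of this kind: markers grouped under communication, coordination and cooperation, with an observer recording whether each behaviour was seen. The marker wording here is hypothetical and far shorter than the full 38-item tool.

```python
# Illustrative sketch only: hypothetical markers grouped by the three
# categories; the wording is invented and not taken from the TBM tool.
tbm_markers = {
    "communication": [
        "Team members acknowledge and confirm information they receive",
        "Requests for information are clear and directed to the right person",
    ],
    "coordination": [
        "Team members time their contributions to fit the team's tasks",
        "Workload is redistributed when a team member becomes overloaded",
    ],
    "cooperation": [
        "Team members back each other up without being asked",
        "Disagreements are resolved constructively",
    ],
}

def summarise_observations(observations):
    """Count observed markers per category; a category with few observed
    markers suggests where teamwork problems may lie."""
    summary = {}
    for category, markers in tbm_markers.items():
        observed = sum(1 for marker in markers if observations.get(marker, False))
        summary[category] = f"{observed}/{len(markers)} markers observed"
    return summary

# Example use with invented observations from a hypothetical exercise.
observations = {"Team members acknowledge and confirm information they receive": True}
print(summarise_observations(observations))
```

A category in which few markers are observed gives the observer both a prompt to investigate and, as noted above, a shared language for talking about what might be going wrong.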
To determine whether EMBAM and TBM are worthy of further consideration, a preliminary evaluation study was conducted. An iterative design process was adopted to develop team monitoring tools suitable for use by people with operational oversight of teams during emergencies. The iterative design process involves a cycle of developing and testing team monitoring tools in close conjunction with end users. This produces tools that meet the needs of the intended users, rather than requiring end users to adapt to tools that have already been developed. The development and preliminary testing of EMBAM and TBM represent the first stage of this process.
The appendix from Grunwald and Bearman (in press) and a modified version of the set of behavioural markers presented by Wilson and colleagues (2007) were developed into paper-based checklists. The preliminary evaluation study was conducted during a simulated multi-agency emergency that required response teams to manage a mock aircraft accident at a small rural airfield. Four observers who were regional or state-level officers used EMBAM and TBM to consider the teamwork of their agency’s personnel during the response to the simulated emergency. The observers were recruited in advance through contacts within the agency and provided informed consent. The observers were asked about their overall impressions of the two tools, how effective the tools were for monitoring teams, what questions worked well and didn’t work well and whether any of the wording needed to be changed.
All observers indicated that EMBAM and TBM had potential as methods of monitoring teams from the position of operational oversight. TBM was seen as containing ‘a good range of questions’ and, ‘depending on circumstance, it could provide a good self-review tool’. However, participants commented that TBM ‘needs to be less wordy’ and ‘could be easier to interpret’. A number of questions (e.g. ‘Did teams recognise when one performed exceptionally well?’) were considered ‘difficult to assess as an observer’. With 38 items, TBM was also considered to be too long.
EMBAM was used slightly out of context in this study (since it was designed for use at state and regional levels rather than for direct observation of incidents) and one observer pointed this out, saying it ‘felt too difficult and would be better at RCC [Regional Coordination Centre] and SCC [State Coordination Centre] levels’. However, observers commented that EMBAM was ‘a good tool’ and is ‘useful to all that are supervising or managing others’. Another comment was that the order of the resolution actions in EMBAM should be changed so that replacing a member of staff was a last option.
This paper identified two methods (EMBAM and TBM) that can be used to monitor and modify the actions of teams. These methods stemmed from a literature review on team monitoring. A preliminary study of EMBAM and TBM found that both tools are worth developing further. In the next phase of this research, EMBAM and TBM will be revised in line with the comments of the participants, and further development and evaluation will be conducted with end users in an iterative design cycle. In this way, team monitoring tools can be developed that provide a structured way to examine how teams are functioning. This allows people to monitor and adjust the activities of teams so that disruptions to team performance do not translate into impaired operational performance. In an era of increasing challenges and complexity in emergency management, it is important to develop tools that help people and teams function more effectively now and into the future.
AFAC 2013, The Australian Inter-Service Incident Management System, East Melbourne, Victoria, AFAC.
Bearman C, Grunwald JA, Brooks BP & Owen C 2015, Breakdowns in coordinated decision making at and above the incident management team level: an analysis of three large scale Australian wildfires. Applied Ergonomics, vol. 47, pp. 16-25.
Bearman C, Rainbird S, Brooks B, Owen C & Curnin S 2016, A literature review of methods for providing enhanced operational oversight of teams in emergency management. Manuscript in Preparation.
Canton-Thompson J, Gebart KM, Thompson B, Jones G, Calkin D & Donovan G 2008, External Human Factors in Incident Management Team Decisionmaking and Their Effect on Large Fire Suppression Expenditures. Journal of Forestry, vol. 106, pp. 416-424.
Comfort LK, Oh N, Ertan G & Scheinert S 2010, Designing adaptive systems for disaster mitigation and response: The role of structure. In: Comfort LK, Boin A & Demchak CC (eds) Designing Resilience: Preparing for extreme events. Pittsburgh: University of Pittsburgh Press.
Conway G 2016, Monitoring the performance of incident management teams. Australian Journal of Emergency Management, vol. 31, no. 3, p. 26.
Fischer U, McDonnell L & Orasanu J 2007, Linguistic correlates of team performance: Toward a tool for monitoring team functioning during space missions. Aviation, Space, and Environmental Medicine, vol. 78, pp. B86-B95.
Grunwald JA & Bearman C in press, Identifying and resolving coordinated decision making breakdowns in emergency management. International Journal of Emergency Management.
Liu Y, Stanturf J & Goodrick S 2010, Trends in global wildfire potential in a changing climate. Forest Ecology and Management, vol. 259, pp. 685-697.
Owen C, Bearman C, Brooks B, Chapman J, Paton D & Hossain L 2013, Developing a research framework for complex multi-team coordination in emergency management. International Journal of Emergency Management, vol. 9, pp. 1-17.
Owen C, Brooks B, Bearman C & Curnin S 2016, Values and complexities in assessing strategic level emergency management effectiveness. Journal of Contingencies and Crisis Management, vol. 24, pp. 181-190.
Patrick J, James N & Ahmed A 2006, Human processes of control: Tracing the goals and strategies of control room teams. Ergonomics, vol. 49, pp. 1395-1414.
Reason J 1990, Human error, New York, Cambridge University Press.
Wilson KA, Salas E, Priest HA & Andrews D 2007, Errors in the Heat of Battle: Taking a Closer Look at Shared Cognition Breakdowns Through Teamwork. Human Factors, vol. 49, pp. 243-256.
Dr Chris Bearman, Central Queensland University, is a researcher and project leader for the Bushfire and Natural Hazards CRC decision-making, team monitoring and organisational learning project.
Dr Sophia Rainbird is a post-doctoral researcher and anthropologist specialising in safety, risk and resilience at Central Queensland University.
Dr Benjamin Brooks is a human factors researcher and Senior Research Fellow in the Australian Maritime College at the University of Tasmania.
Dr Christine Owen is an organisational behaviour and learning researcher at the University of Tasmania.
Dr Steve Curnin is a research fellow at the Tasmanian Institute for Law Enforcement Studies at the University of Tasmania.