6 Warning system design in civil aircraft
Jan M. Noyes, Alison F. Starr and Mandana L.N. Kazem
Defining warning systems
Within our society warnings are commonplace, from warning colours in nature and the implicit warning proffered by the jagged edge of a knife, to packaging labels and the more insistent auditory warnings (e.g. fire alarms) requiring our immediate attention. Primarily a means of attracting attention, the warning often, and most beneficially, plays both an alerting and an informational role, providing information about the nature and criticality of the hazard.
In many safety-critical applications hazards are dynamic and may present themselves only under certain circumstances. Warning systems found in such applications are, therefore, driven by a ‘monitoring function’ which triggers when situations become critical, even life-threatening, and attention to the situation (and possibly remedial action) is required. In summary, current operational warning systems have the following functions:
1. Monitoring: Assessing the situation with regard to deviations from predetermined
fixed limits or a threshold.
2. Alerting: Drawing the human operators’ attention to the hazardous or
potentially hazardous situation.
3. Informing: Providing information about the nature and criticality of the
problem in order to facilitate a reaction in the appropriate individual(s) who
is (are) assessing the situation.
4. Advising: Aiming to support human decision-making activities in
addressing the abnormal situation through the provision of electronic and/or
hardcopy documentation.
Safety-critical industries continually strive to attain operational efficiency and
maximum safety, and warning systems play an important role in contributing to
these goals. The design of warning systems in the civil flight deck application
will be considered here from the perspective of the user, i.e. as reported by the
crew. This emanates from a research programme concerned with the development
of an advanced warning system in this application area. One aspect of this
programme included a questionnaire survey of civil flight deck crew from an
international commercial airline; the aim being to highlight the user requirements
of future warning systems. Some of the findings from this work are discussed
towards the end of the three sections on alerting, informing and advising in order
to bring the pilots’ perspective to the design of future warning systems. This is
done within the context of the functions of the warning system highlighted in the
definition given at the start of this chapter.
Monitoring
The monitoring function is primarily a technology-based activity as opposed to a
human one. The role of the monitoring function is to ‘spot’ the deviation of
parameters from normal operating thresholds. When these threshold conditions
are crossed, a response from the warning system is triggered. The crossing of that
threshold has then to be brought to the attention of the operator. On the flight
deck, this is usually achieved through auditory and/or visual alerts. The earliest
monitoring functions were carried out by operators watching displays of values, waiting for this information to move outside a limit. The simplest mechanical sensor is activated when a set threshold condition is met. The mechanisms by which the monitoring is now undertaken vary from application to application, depending on the safety-critical nature of the system, the functions being monitored, the complexity of the system, and the level of technology
involved. However, as the focus of this chapter is on the human activities, these mechanisms will not be discussed further; the three functions ‘alerting’, ‘informing’ and ‘advising’ will provide the framework for the rest of this chapter.
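To make the threshold mechanism concrete, the following is a minimal sketch of such a monitoring function in Python; the parameter name and limits are invented for illustration and do not correspond to any real aircraft system.

```python
from dataclasses import dataclass

@dataclass
class MonitoredParameter:
    name: str
    low_limit: float
    high_limit: float

def check_thresholds(parameters, readings):
    """Return (name, value) pairs for parameters whose readings have
    crossed a limit; each crossing would trigger a response from the
    warning system, e.g. an auditory and/or visual alert."""
    triggered = []
    for p in parameters:
        value = readings[p.name]
        if not (p.low_limit <= value <= p.high_limit):
            triggered.append((p.name, value))
    return triggered

# Illustrative use: hydraulic pressure drifting below its lower limit.
params = [MonitoredParameter("hyd_pressure_psi", 2800.0, 3200.0)]
print(check_thresholds(params, {"hyd_pressure_psi": 2650.0}))
# [('hyd_pressure_psi', 2650.0)]
```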
Alerting
In a complex system, and when the situation is particularly critical, a large number
of auditory and visual alerts can be activated, as in the Three Mile Island incident
(Kemeny, 1979). In this particular case, over 40 auditory alarms were triggered
and around 200 windows and gauges began to flash in order to draw the operators’
attention to the impending problem (Smither, 1994). A number of difficulties can
occur at this stage. For example:
a. The human operator(s) may fail to be alerted to the particular problem due
to overload or distraction. This can sometimes occur even with the
existence of the ‘attention-grabbing properties’ of the alerting system. An
example of this occurred on the Eastern Airlines L-1011 flight in 1972. All
of the flight deck crew became fixated on a minor malfunction on the
flight deck, leaving no operator flying or monitoring the rest of the aircraft.
Alerts indicating the unintended descent of the aircraft and thus significant
fall in altitude were unsuccessful in regaining the attention of the crew and
alerting them to the developing hazardous situation. As a result, the aircraft crashed into the Everglades swamps with disastrous consequences (Wiener, 1977).
b. The alerting signal may also be inaccessible to the operator if sensory
overload occurs. Sensory overload at this early stage is a growing problem
as the number of auditory and visual alerts on the flight deck continues to
increase. In their survey of alarm management in chemical and power
industries, Bransby and Jenkinson (1997) found that the total number of alarms on older plants was generally lower than the total number found on modern computer-based distributed control systems. Likewise, on the civil
flight deck, the number of auditory and visual alerts has increased over the
decades. For example, during the jet era the number of alerts rose from 172 on the DC-8 to 418 on the DC-10, and from 188 on the Boeing 707 to 455 on the Boeing 747 (Hawkins, 1987), and to 757 on the newer Boeing 747-400.
This increase has largely been seen as a result of enhanced aircraft system
functionality and therefore a more general increase in system complexity.
Paradoxically, this increase in the number of alerts intended to help crew
comprehend the ‘dangerous’ situation can lead to the reverse effect,
especially in situations where several alerts appear simultaneously and are
abstract, therefore requiring association with a meaning. A recent Federal Aviation Administration (FAA) report highlighted this by stating ‘the more unique warnings there are, the more difficult it is for the flight crew to remember what each one signifies’ (Abbott, Slotte and Stimson, 1996, p. 56). When crew are overloaded with auditory alerts and flashing visual messages, this overload may actually hinder appropriate response and management of the situation.
It is important in the design of alerting systems to ensure that the flight crews’ attention will be drawn to a problem situation at an early stage in its development. Flight deck alerting systems all have at least two levels of alert: the caution, indicating that awareness of a problem and possible reaction is required, and the warning, indicating a more urgent need for action. Ideally, the alerting system, in conjunction with the flight deck information, should enable the pilot to follow transitions between new ‘critical’ developments while maintaining awareness at all times of the current state of play.
Having a system that facilitates the anticipation of problems would provide the
crew with more time to consider the outcome of making various decisions. An
example of this can be seen in the EGPWS (Enhanced Ground Proximity Warning
System) found on some civil flight decks. In this system, dangerous areas of
terrain, as relating to aircraft position, are depicted on a display. Increasing risk is
depicted by a change in colour or colour saturation. Effectively this is an alert of
changing urgency, which should direct crew attention to problems at an early
stage (Wainwright, 2000).
Individuals amongst the flight crew surveyed, who flew aircraft with one of the
types of CRT-based warning systems, tended to agree that their aircraft’s alerting
system was effective in allowing them to anticipate a problem (Noyes and Starr,
2000). This is not surprising since their alerting system was designed with a low-level alert that triggered before the main caution or warning alert, thus allowing problems to be anticipated. On this aircraft, the low-level alerting element of the
system automatically displays the relevant system synoptics when parameters drift
out of tolerance, but before they have changed sufficiently to warrant a full
caution or warning level alert. The other salient feature evident from the survey was that crews of fleets with a third crewmember also agreed that current systems allow anticipation. These systems facilitate anticipation, but do not ‘anticipate’ themselves. In a three-person flight crew, part
of the Flight Engineer’s role is to monitor system activity and anticipate failures.
In a two-person crew, however, this aspect of systems management has been replaced by increased numbers of cautions and warnings. Once these are triggered, operators must undertake prescribed set actions. A possible solution lies in developing systems which can absorb this anticipatory role.
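As a minimal sketch of the low-level alerting element described above, the following shows a tight tolerance band nested inside the caution band, so that a drifting parameter raises a low-level indication before a full alert is warranted; the band values are illustrative assumptions only.

```python
def classify(value, tolerance, caution):
    """Classify a reading against nested (low, high) bands: a tight
    tolerance band inside the wider caution band."""
    t_lo, t_hi = tolerance
    c_lo, c_hi = caution
    if not (c_lo <= value <= c_hi):
        return "CAUTION"   # full alert level
    if not (t_lo <= value <= t_hi):
        return "DRIFT"     # low-level alert: display the system synoptic
    return "NORMAL"

# Oil temperature drifting high, but not yet at the caution limit.
print(classify(118.0, tolerance=(40.0, 115.0), caution=(20.0, 125.0)))  # DRIFT
```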
A truly anticipatory system has yet to be introduced to the flight deck.
However, there are many design difficulties in producing an anticipatory system
to be implemented in such a complex and dynamic environment. Given this fact, it is prudent to remember that design should not seek to replace the decision-maker but to support the decision-maker (Cohen, 1993); indeed, in some instances the system design may not be capable of effectively replacing the decision-maker.
Results from our survey work highlighted some of the difficulties associated with
the development of anticipatory facilities. For example, the following comments
were made by flight deck crew in response to a question about having a warning
system with an anticipatory facility:
‘Most serious problems on the aircraft are virtually instantaneous – instrumentation giving anticipation would be virtually useless except on non-critical systems.’
‘Workload could be increased to the detriment of flight safety.’
‘Much aircraft equipment is either functioning or malfunctioning and I think
it lowers workload considerably to avoid unnecessary instrumentation and
advise pilots only of malfunctions.’
It could therefore be argued that perhaps it is best to leave the crew to fulfil all but the simplest anticipatory tasks. The crews are, after all, the only individuals
with the benefit of experiencing the situation in hand; they may have information
not available to the system and therefore arguably are the only decision-makers in
a position to make appropriate predictions. Our survey work also indicated that
flight deck crew with experience of having a flight engineer bemoaned the fact
that the role of this person was gradually being phased out. This is particularly
pertinent given the anticipatory function of the flight engineer. However, systems are becoming increasingly complex. Interrelationships between aspects of different
systems and the context in which a problem occurs are important factors in what is
significant for operator attention and what is not. Thus, returning to Cohen’s idea
of required operator support, some assistance with the anticipatory task could, if correctly implemented, result in better handling of problem situations.
A further consideration relating to alerting is that not all warnings may be ‘true’
warnings, as all warning systems can give false and nuisance warnings. False
warnings might occur, for example, when a sensor fails and a warning is
‘incorrectly’ triggered. In contrast, nuisance warnings are by definition accurate,
but unnecessary at the time they occur, e.g. warnings about open doors when the
aircraft is on the ground with passengers boarding, or a GPWS (Ground Proximity
Warning System) warning that occurs at 35,000 feet activated by an aircraft
passing below. Nuisance warnings tend to take place because the system does not
understand the context. The category of nuisance warnings may also be extended
to include warnings that are correct and relevant in the current situation, but have
a low level of significance under certain circumstances. For example, in some
aircraft, the majority of warnings will be inhibited during take-off as the
consequences of the fault(s) they report are considered to be low in contrast to
their potential to interrupt the crew during what can be a difficult phase of flight.
It could be concluded from our survey work that false warnings on modern
flight decks do not present a major problem, although in the words of one
respondent ‘One false warning is “too often”.’ If false or nuisance warnings occur
too frequently, they can encourage crews to become complacent about warning
information to the extent that they might ignore real warnings. This was summed
up by two respondents as follows: ‘… nuisance warnings have the effect of degrading the effectiveness of genuine warnings’ and ‘a small number of “nuisance” warnings can quickly undermine the value of warnings’. Hence, there
is a need to minimise false and nuisance warnings at all times. This may not be
possible with existing systems, but their reduction needs to be a consideration in
the design of new systems.
Another related problem of increasing concern involves the sensors on the
aircraft that fail more often than the systems themselves. As already discussed,
sensors failing may trigger a false warning condition, and a warning system that
could differentiate and locate possible sensor failures would have operational
benefits. Systems with such capability would better inform the crew and thus help
prevent them from taking unnecessary remedial actions and ensure the
maintenance of the full operating capability of the aircraft.
There are a number of different system solutions that could be implemented and
developed to overcome these problems. More reliable sensors that fail less often
comprise one mechanism for reducing false and nuisance warnings. The use of context, such as phase of flight, to suppress warnings that would otherwise interrupt a critical phase of flight is a feature of the new ‘glass’ warning systems. These aircraft suppress all but the most critical warnings from 80 knots to rotation, since at this point of the flight it will almost always be safer to leave the ground than to attempt to stop, as there may not be enough runway left to do so. This type of contextual support could be used to provide better information in the future. For example, simple warning logic could use context such as weight on wheels and no engines running to restrict an alert about the aircraft doors being open; a sketch of this kind of logic is given below. However, for other conditions, several more complex pieces of data may be required, together with an ‘understanding of the goal’ of the warning.
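The sketch below illustrates this kind of context-based inhibition, assuming the take-off roll is taken as weight on wheels at or above 80 knots and that doors-open alerts are suppressed on the ground with no engines running; the context fields and rules are simplified illustrations, not actual aircraft warning logic.

```python
from dataclasses import dataclass

@dataclass
class FlightContext:
    airspeed_kts: float
    weight_on_wheels: bool
    engines_running: int

def inhibit_during_take_off(alert_level: str, ctx: FlightContext) -> bool:
    """Suppress all but full warnings during the take-off roll."""
    taking_off = ctx.weight_on_wheels and ctx.airspeed_kts >= 80.0
    return taking_off and alert_level != "WARNING"

def inhibit_doors_open(ctx: FlightContext) -> bool:
    """Doors-open alerts are a nuisance while boarding, so suppress
    them on the ground with no engines running."""
    return ctx.weight_on_wheels and ctx.engines_running == 0

boarding = FlightContext(0.0, True, 0)
take_off_roll = FlightContext(120.0, True, 2)
print(inhibit_doors_open(boarding))                       # True: suppress
print(inhibit_during_take_off("CAUTION", take_off_roll))  # True: suppress
```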
Informing
Once the alert has been given, the operator(s) must use the information provided
by the alerting system, their knowledge, experience and training, as well as other information displayed to them, to understand the nature and seriousness
of the problem. However, a number of human operator failures may affect this
process. Having been successfully alerted to a problem, the operator(s) may
respond by acknowledging the visual and auditory alerts, but fail to take any
further action, i.e. the operator(s) demonstrate a lack of compliance. On the civil
flight deck, crew bombarded by several loud auditory warnings (bells, buzzers and
other alarms) often initially cancel the alarms before attending to the problem.
However, this action of cancellation is no guarantee that they will do anything
further in terms of remedial action. This problem of initial response followed by
no further action has been well documented in aviation and medical environments
(see, Campbell Brown and O’Donnell, 1997; Edworthy, 1994). There are many
reasons for this. The crew may be distracted by the need to complete other
activities, and once having switched off the alerts may fail to turn their attention to
the reasons why the alerts occurred in the first place. Edworthy and Adams
(1996) studied the topic of non-compliance with alarms and suggested that operators carry out a cost-benefit analysis to evaluate the perceived costs and benefits of compliance and non-compliance in alarm handling. Information from
the warning system (including urgency information) will be considered in this
evaluation. Therefore, there is a need for the warning system to depict accurately
the nature and criticality of the problem in order to provide accurate information
for the pilot to aid their decision-making. At present there is much room for
improvement in this respect, especially with regard to auditory warnings
(Edworthy and Adams, 1996). For example, auditory alarms often activate too
frequently and are disruptively and inappropriately loud (Stanton and Edworthy,
1999). They can also be relatively uninformative. To quote a respondent from
our survey of civil flight deck crew ‘a lot of our audio systems are so powerful
they scare you half out of your skin without immediately drawing you directly to
the reason for the warning’ (Eyre, Noyes, Starr and Frankish, 1993).
Individuals need to assess the nature and extent of the difficulty, and to locate
the primary cause in order to initiate remedial actions. They have to evaluate and
consider the short-term implications of the difficulty, its criticality/urgency, any
compromise to safety and immediate actions required, as well as the longer-term
consequences for the aircraft, its systems and the operation/flight being
undertaken. The consequences of any action taken, whether immediate or
planned, must also be included in the assessment. In the development of new
alerting ‘supportive’ systems, this is the type of information that could be of
significant use to the operator. The underlying system would need to facilitate the
provision of this type of information, which then has to be presented to the
operator.
The situation being monitored is often complex with many components,
influences and interactions, and there is a need to take into account a large number
of parameters in order to assess the situation. Optimally the alerting system
should assimilate relevant information from a number of sources or facilitate this
task. This is difficult to realise in design as it is not always possible to predict
which elements of the potential information set will be relevant to each other and
to the particular situation. However, approaches are available which enable the
relationships between elements, systems and context to be represented as we
indicated in our work on using a model-based reasoning approach to the design of
flight deck warning systems. In the past, integration of context/situation
information into the design of alerting systems has not been developed to any
great extent. For example, in the avionics application, warnings have been known
to be given relating to the failure of de-icing equipment when the aircraft was
about to land in hot climes, where there would be no need to have de-icing
facilities available.
Multiple warning situations are known to be a problem for crew, since the
primary failure may be masked by other less consequential cascade or concurrent
failures that take the crew’s attention, and may hinder location of the primary cause. Cascade failures are failures that occur as a result of the primary failure, e.g. failure of a generator (primary failure) causing the failure of those systems
powered by the generator (secondary failures). However, secondary failures may be displayed before the primary, as in most systems the display of a warning is related directly to the point at which its associated threshold is crossed. To quote one crewmember: ‘I find it very difficult in multi-warning
systems to analyse and prioritise actions’. A further problem relates to concurrent
failures. The problem-solving characteristics of human operators are such that we tend to assume that alerts occurring simultaneously (or within a short space of time) have the same cause when this may not be the case (Tversky and Kahneman,
1974). Concurrent failures may also cause conflict in terms of remedial actions;
i.e. one solution may resolve one problem but worsen the situation for another. It
can therefore be quite difficult for crew to handle warning information in these
types of situation.
Many current alerting systems present warnings/cautions in the order in which
the signal reaches the method of display, and this has implications for the handling
of warning information. With classic central warning panels, large cascade-type
failures lead to distinctive patterns of lights; recognition of these patterns can
enable the crew to identify the primary cause hidden amidst the mass. With glass
multifunction alerting systems, alerts are listed by criticality, e.g. all red warnings
first followed by all the amber caution alerts. In general, within each of these
categories temporal ordering is still used; new alerts enter at the top of the
appropriate list (warning or caution list). This effectively creates a dynamic list and can result in the primary cause of a multiple alert situation becoming embedded within its category list and possibly ‘hidden’ from view.
The crew in our survey noted this: ‘… it would be helpful if the most urgent was
at the top of the list’. However, some of these systems do use a limited set of
rules to analyse the incoming warning information and identify a set of key primary failures which can lead to cascade effects, e.g. generator failure. These systems will pull out such primary failures and present them first, as in the sketch below.
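A minimal sketch of this ordering scheme follows: warnings precede cautions, new alerts enter at the top of their category, and a small rule set pulls known primary failures to the head of the list; the alert names and the primary-failure set are invented for the example.

```python
# Known cascade initiators that should be pulled to the head of the list.
PRIMARY_FAILURES = {"GEN 1 FAIL", "GEN 2 FAIL"}

def ordered_alerts(alerts):
    """Order (name, level, time_triggered) alerts for display:
    red warnings before amber cautions, identified primary failures
    first within each category, then newest at the top."""
    def sort_key(alert):
        name, level, time_triggered = alert
        return (
            0 if level == "WARNING" else 1,        # warnings first
            0 if name in PRIMARY_FAILURES else 1,  # primaries pulled out
            -time_triggered,                       # newest at the top
        )
    return sorted(alerts, key=sort_key)

incoming = [
    ("AC BUS 1 OFF", "CAUTION", 101.2),  # secondary, reached display first
    ("GEN 1 FAIL",   "CAUTION", 101.5),  # the primary cause
    ("FUEL PUMP 1",  "CAUTION", 101.3),  # another secondary
]
for alert in ordered_alerts(incoming):
    print(alert)  # GEN 1 FAIL is listed first despite arriving later
```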
The issue of handling secondary failures was addressed within the survey. Just
under two-thirds of the flight deck crew (65%) surveyed felt that the alerting
systems on their current aircraft were deficient in providing consequential
secondary information. A closer analysis of this 65% indicated a clear
disagreement between flight crew of glass flight deck aircraft and crew of other
aircraft fleets. Less than 5% of the former group believed their alerting systems to
be deficient in this respect, indicating that the vast majority was satisfied.
Conversely, between 45% and 70% of the respondents from each of the other aircraft fleet groups regarded the provision of such secondary information on their aircraft as sub-optimal. Therefore, future alerting system designs should
facilitate the provision of secondary information.
Advising
A further aspect of the alerting system involves the use of instructional
information to support human decision-making activities, and ensure remedial
actions are appropriate and successful. On current flight decks, supporting
documentation can be both screen-based and in hard-copy format, whereas on
classic aircraft, i.e. aircraft that have warnings based on fixed legends on lights,
this information is provided in a paper Quick Reference Handbook (QRH). The
way in which this information is handled will depend on the severity, complexity
and frequency of the situation that activated the alert(s), as well as operator
experience, skills and knowledge. However, it should be noted that designers do
not always view advisory documentation as part of the alerting system. In our
work with flight deck crew, it was viewed as an integral part of the alerting system, although, in certification terms, it may not be viewed as an essential
component of the operating system.
All of the aircraft within the questionnaire survey had a QRH or equivalent
document, e.g. the Emergency Checklist on the DC-10. For each aircraft, this
document serves as the primary source of reference for the necessary remedial
actions to be taken in abnormal flying situations. The documentation is originally
designed by the airframe manufacturer and modified by the management of the
operating company to meet their operating procedures. It would seem that there
might be a trade-off between the level of completeness of the QRH information
(e.g. its quantity and detail) and the ease with which the document can be used,
i.e. the more information provided, the more difficult the document is to use in practice. Paper presentation of such information will inevitably lead to this problem, as the information provided must be complete and will therefore be difficult to present in a format that can be used quickly and effectively.
Glass display presentation, on the other hand, could potentially help the pilot to
locate the appropriate material quickly by tailoring the information presented to
the situation.
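As an illustration of such tailoring, the sketch below keys checklist drills to the active alert so that the relevant remedial actions can be retrieved directly; the alert names and drill steps are invented and do not reproduce any actual QRH.

```python
# Drills keyed to the active alert; names and steps are invented.
CHECKLISTS = {
    "GEN 1 FAIL": [
        "GEN 1 switch ........ OFF, then ON",
        "If not restored: GEN 1 ........ OFF",
        "Review electrical load shedding",
    ],
    "ENG 1 OIL LO PRESS": [
        "Throttle 1 ........ IDLE",
        "If pressure stays low: ENG 1 ........ SHUT DOWN",
    ],
}

def checklist_for(active_alert):
    """Return the drill for the active alert, falling back to the index."""
    return CHECKLISTS.get(active_alert, ["Refer to QRH index"])

for step in checklist_for("GEN 1 FAIL"):
    print(step)
```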
Evolution of flight deck warning systems
This lack of information assimilation is apparent throughout the evolution of flight deck
alerting systems (see, Starr, Noyes, Ovenden and Rankin, 1997, for a full review).
Briefly, the early warning systems were a series of lights positioned on the
appropriate systems’ panels, and so were located across the flight deck (Gorden-Johnson, 1991). At this stage of evolution, warning indications were
predominantly visual, and crew had to scan the panels continually to check for the
appearance of a warning. This discrete set of annunciators was gradually replaced
by the ‘master/central warning and caution’ concept, which involved the addition
of a master light that indicated to crew that a warning had been activated. This
was further developed into a centralisation of warning lights on a single panel
within the crew’s forward visual field (Alder, 1991).
The next development beyond physically locating the alerts together would be
to ‘integrate’ the alerting information for presentation to the crew, as mentioned
earlier. Although modern flight deck displays are referred to as integrated, they
are not truly integrated since they consist of single elements of information
displayed together according to circumstances and the current phase of flight
(Pischkle, 1990). A fully integrated alerting system would be capable of
monitoring and interpreting data from aircraft systems and flight operational
conditions in order to provide crew with a high-level interpretation of the
malfunction in the event of failures and abnormal conditions.
A fully integrated warning system has yet to be realised to any great extent, even in the latest civil aircraft; traditional alerting systems are generally used which conform to a ‘stimulus’ (e.g. valve out of limits) followed by ‘response’ (e.g. warning light) concept. Also, monitoring to an identified risk point is traditional, and in the past there has been a lack of sophisticated display and control technology to achieve integration. This may be due to the inherent design
difficulties in predicting information requirements, briefly noted earlier, and
previous lack of technical ability to realise such a systems solution. However, the
advent and implementation of more sophisticated software and programming techniques mean that alerting systems with a greater capability to integrate
information from a variety of sources can be developed, and such solutions are
gradually becoming a more realistic proposition (Rouse, Geddes and Hammer,
1990). Care must be taken not to allow such systems to exceed their inherent
limitations (due in part to our limited ability to predict the information
requirements of unpredictable situations) or reduce data visibility. O’Leary
(2000) indicates that the very task of converting data to knowledge is vital in facilitating good pilot decision-making, and therefore we must think carefully before removing this role from the crew.
A further point of contention relates to the certification requirements of alerting
systems. Given the criticality of alerting information, it may be that the certification requirements prevent such systems becoming feasible or economically viable. However, by functionally separating the primary alerting processes from the more informational and supportive processes of future alerting systems, it may be possible to incorporate data integration into a ‘support system’ whilst leaving the more critical ‘alert’ to follow the more easily certifiable ‘stimulus-response’ concept.
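The sketch below illustrates one way such a functional separation might look: a simple, certifiable stimulus-response alert path kept independent of a richer support path that integrates contextual information, so that degradation of the support path cannot suppress an alert; all names and rules are illustrative assumptions.

```python
def alert_path(readings, limits):
    """Certifiable core: a pure stimulus-response check that raises an
    alert whenever a monitored value crosses its limits."""
    return [name for name, (lo, hi) in limits.items()
            if not (lo <= readings[name] <= hi)]

def support_path(active_alerts, context):
    """Advisory layer: annotates alerts with contextual notes. It reads
    the alert list but the alert path never depends on it, so its loss
    cannot suppress an alert."""
    phase = context.get("phase", "UNKNOWN")
    return {alert: f"raised during {phase}; see tailored drill"
            for alert in active_alerts}

readings = {"hyd_pressure_psi": 2500.0}
limits = {"hyd_pressure_psi": (2800.0, 3200.0)}
alerts = alert_path(readings, limits)             # always runs
print(alerts)
print(support_path(alerts, {"phase": "CRUISE"}))  # may be degraded/absent
```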
General discussion
During each of the alerting, informing and advising functions, operator-involved
failures can occur: human operators may fail to be alerted to the warning, may fail
to assess it adequately, may neglect to respond to the warning situation and/or may
not make sufficient use of information available. As already stated, they may take
immediate action, but fail to make follow-up actions that will lead to the restoration
of normal operations, a point well documented by Campbell Brown and O’Donnell
(1997) in their work on alarms in distributed control systems. In the process
control industry, as well as aviation, there are many reasons for this, from the
design of the warning system per se to task considerations and the overall design
philosophies of the organisation, operating policies and procedures, extending to
(user) practices (Degani and Wiener, 1994; Edworthy and Adams, 1996).
Analyses of specific human responses to warnings and explanations of their
failures are complex and multi-faceted, and outside the remit of the current chapter.
Perhaps the very idea of having humans interact with warning systems is a
problematic one. In many situations, the main part of the operator’s job may be
uneventful to the point of boredom with long periods of monitoring required. This
state can change very quickly when an event triggers an alarm or number of
alarms. Hence, the monitoring phase is interrupted by rapid activity, the
occurrence of which cannot be easily predicted, and may result in information overload as the human operator’s role changes from monitor to diagnostician. This latter role requires the user to comprehend and remedy what
may be a complicated, multi-causal, often stress-inducing situation. Further, there
may be little time available to support decision-making activities. This was
described by Wickens (1992, p. 508) as ‘hours of intolerable boredom punctuated
by a few minutes of pure hell’. Jenkinson (1997) stated that further work is
clearly needed on this transition between boredom and panic.
Humans tend not to be good at either of the aforementioned task extremes –
monitoring tasks or working under high stress levels. Activities under both
conditions are error-prone. Evidence for this can be seen from the number of
aircraft accidents (and incidents) that implicate human error as a primary cause.
Figures differ according to definitions of error and methods of calculating accident
and incident data, but human error has been given as a causal factor in 80% of
fatal aircraft accidents in general aviation and 70% in airline operations (Jensen,
1995). Recent statistics indicate there were 1063 accidents worldwide involving
commercial jet aircraft between 1959 and 1995, of which 64.4% cited flight crew
error as a primary cause (Boeing, 1996). The incidence of decision-making errors
in these events is estimated to be as high as 70% (Helmreich and Foushee, 1993).
However, there is a need to recognise that placing the ‘blame’ on human error
does not provide the full explanation of how and why an accident or incident
occurred, nor does it take into account the multi-causal chain of events and circumstances leading up to the error (Noyes and Stanton, 1997; Reason, 1990).
Consequently, the concept of the ‘system-induced human error’ has become
widely recognised (Wiener, 1977) and as a result the onus has been placed on
cockpit design as a whole to alleviate this problem.
The detection and notification of problems (generally via cautions) can
sometimes lead to increased workload. Information overload is certainly thought
to be an issue in multiple alert situations. Although, over the years, developments in alerting systems have aimed to provide the flight deck crew with the
information they need, in a form they can readily understand and at an appropriate
time, there are still occasions when information overload occurs (see, Starr et al.,
1997). For example, multiple failure situations will trigger large numbers of lower-level alerts, and can generate copious warnings. Despite this, crew agreed
that additional information about the consequences of planned actions and
secondary consequences of malfunctions would be an improvement on current
systems, even in view of the inevitable increase in information presented. The
prospect of this further increase in information is balanced by the difficulties faced
currently in managing this type of failure situation.
Finally, when considering supporting documentation, the balance between the
amount and level of detail given, and the ease of access to relevant information
must be considered. A fundamental problem with existing checklists (as combined within the QRH), in both paper and multi-function display form, is that each checklist is associated with one failure/abnormality.
the event of multiple failure situations, priorities are generally not adequately
handled. This is a problem and it has already been highlighted that pilots would
like to see more support in this area. However, any further development of the
QRH concept (paper or screen) must keep in mind that pilots literally require a
Quick Reference Handbook.
Looking to the future, continuing technological developments mean that future
alerting systems will have the capability for handling increasingly large amounts
of data/information. Unless this is carefully managed, the human operator will
inevitably suffer from information overload. This has already been experienced in
the nuclear power industry with operators being presented with large amounts of
raw data that previously would have been filtered by experienced watch keepers
(Maskell, 1997). It may be that progress will depend not only on technological
advances, but on making greater use of the data already available (Frankish,
Ovenden, Noyes and Starr, 1991); perhaps finding new ways to display
information. Furthermore, the development of ‘soft displays’ supported by
powerful computational resources has important implications for the design of
future flight deck warning systems. By providing information tailored directly to
the current requirements of the users, this type of interface could not only aid the
human operator, but also provide a solution in terms of enabling further
information to be provided on an already crowded flight deck. The limitations of
such displays however must be understood and duly considered.
The alerting system is an essential component of any safety-critical system,
since it is instrumental in drawing the attention of the operator to a problem
situation at a point when safety can still be maintained. To be successful in this
role the system must effectively monitor, alert, inform and support the operator in
order that the problem can be efficiently diagnosed and rectified/contained.
Continuing developments in advanced technologies and the use of more
‘intelligent processing’ in systems have increased the number of design
possibilities for warning systems and may provide solutions in terms of managing
information overload. However, the solution to information overload may lie in information efficiency – it may be possible to combine the alerting, informing and supporting functions by providing information that performs all three roles simultaneously, for example, by developing the alerting aspects of the system to be more informative and striving for new ways to maintain aircraft, system and environment visibility, making the problem itself more visible and therefore naturally alerting. Indeed, it has been suggested that the situation
itself may provide the most important warning cues (Edworthy and Adams, 1996).
Perhaps this aspect will be addressed in future work on warning systems.
References
Abbott, K., Slotte, S. and Stimson, D. (1996). The interfaces between flightcrews and modern flight deck systems. Report of the FAA Human Factors Team. Washington DC: Department of Transportation.
Alder, M.G. (1991). Warning systems for aircraft: A pilot’s view. In, Proceedings
of IMechE ‘Philosophy of Warning Systems’ Seminar S969. London: Institution
of Mechanical Engineers.
Billings, C.E. (1997). Aviation automation: The search for a human-centred
approach. New Jersey: LEA.
Boeing Airplane Company (1996). Table of all accidents – World-wide
commercial jet fleet. Flight Deck, 21, 57.
Bransby, M.L. and Jenkinson, J. (1997). Alarm management in the chemical and
power industries: A survey for the HSE. In, Proceedings of IEE Digest 97/136
‘Stemming the Alarm Flood’. London: Institution of Electrical Engineers.
Campbell Brown, D. and O’Donnell, M. (1997). Too much of a good thing? –
Alarm management experience in BP Oil. In, Proceedings of IEE Digest
97/136 ‘Stemming the Alarm Flood’. London: Institution of Electrical
Engineers.
Cohen, M. (1993). The bottom line: Naturalistic decision aiding. In, G. Klein, J.
Orasanu, R. Calderwood, and C. Zsambok, (Eds.), Decision making in action:
models and methods. New Jersey: Ablex.
Degani, A. and Wiener, E.L. (1994). On the design of flight-deck procedures.
NASA Contractor Report 177642. NASA-Ames Research Center, CA: NASA.
Edworthy, J. (1994). The design and implementation of non-verbal auditory
warnings. Applied Ergonomics, 25, 202-210.
Edworthy, J. and Adams, A. (1996). Warning design: A research prospective.
London: Taylor and Francis.
Eyre, D.A., Noyes, J.M., Starr, A.F. and Frankish, C.R. (1993). The Aircraft
Warning System Questionnaire Results: Warning Information Analysis Report
23.3, MBRAWSAC Project. Bristol: University of Bristol, Department of
Psychology.
Frankish, C.R., Ovenden, C.R., Noyes, J.M. and Starr, A.F. (1991). Application of
model-based reasoning to warning and diagnostic systems for civil aircraft. In,
Proceedings of the ERA Technology Conference ‘Advances in Systems
Engineering for Civil and Military Avionics’ (ERA Report 91-0634, pp. 7.1.1 –
7.1.7). Leatherhead: ERA Technology.
Gorden-Johnson, P. (1991). Aircraft warning systems: Help or hindrance? In, Proceedings of IMechE ‘Philosophy of Warning Systems’ Seminar S969. London: Institution of Mechanical Engineers.
Hawkins, F.H. (1987). Human Factors in flight. Aldershot: Ashgate.
Helmreich, R.L. and Foushee, H.L. (1993). Why crew resource management?
Empirical and theoretical bases of Human Factors training in aviation. In, E.L.
Wiener, B.G. Kanki and R.L. Helmreich (Eds.), Cockpit resource management,
(3-45). San Diego: Academic Press.
JAR-25. Joint Aviation Requirements for large aeroplanes. Civil Aviation
Authority, Cheltenham: Joint Aviation Authorities Committee.
Jenkinson, J. (1997). Alarm reduction in nuclear power plants: Results of an
international survey. In, Proceedings of IEE Digest 97/136 ‘Stemming the
Alarm Flood’. London: Institution of Electrical Engineers.
Jensen, R.S. (1995). Pilot judgement and crew resource management. Aldershot:
Avebury Aviation.
Kemeny, J. (1979). The need for change: The legacy of TMI. Report of the President’s Commission on the Accident at Three Mile Island. New York: Pergamon.
Learmount, D. (1995). Lessons from the cockpit. Flight International, 11-17th January, 24-27.
Maskell, P. (1997). Intelligent surveillance for naval nuclear submarine propulsion. In, Proceedings of IEE Digest 97/136 ‘Stemming the Alarm Flood’. London: Institution of Electrical Engineers.
Newman, T.P. (1991). Cephalic indications, nictitations and anopic ungulates. In,
Proceedings of IMechE ‘Philosophy of Warning Systems’ Seminar S969.
London: Institution of Mechanical Engineers.
Noyes, J.M. and Starr, A.F. (2000). Civil aircraft warning systems: Future
directions in information management and presentation. International Journal
of Aviation Psychology, 10, 169-188.
Noyes, J.M., Starr, A.F. and Frankish, C.R. (1996). User involvement in the early
stages of the development of an aircraft warning system. Behaviour and
Information Technology, 15, 67-75.
O’Leary, M. (2000). Situation Awareness: Has EFIS Delivered? In, Proceedings
of ‘Situational Awareness On The Flight Deck: The Current And Future
Contribution By Systems And Equipment’ 2000. London: Royal Aeronautical
Society.
Perrow, C. (1984). Normal accidents. New York, NY: Basic Books.
Pischkle, K.M. (1990). Cockpit integration and automation: The avionics challenge. In, Proceedings of ‘International Federation of Airworthiness’ Conference, November 19th.
Reason, J.T. (1990). Human error. Cambridge: Cambridge University Press.
Rouse, W.B., Geddes, N.D. and Hammer, J.M. (1990). Computer-aided fighter
pilots. IEEE Spectrum, 38-40.
Smither, R.D. (1994). The psychology of work and human performance. New
York: Harper Collins.
Stanton, N.A. and Edworthy, J. (Eds.) (1999). Human Factors in auditory
warnings. Aldershot: Ashgate.
Starr, A.F., Noyes, J.M., Ovenden, C.R. and Rankin, J.A. (1997). Civil aircraft
warning systems: A successful evolution? In, H.M. Soekha (Ed.) Proceedings
of IASC ‘97 (International Aviation Safety Conference). Rotterdam,
Netherlands: VSP BV.
Taylor, R.M., Selcon, S.J. and Swinden, A.D. (1995). Measurement of situational
awareness and performance: A unitary SART index predicts performance on a
simulated ATC task. In, R. Fuller, N. Johnston and N. McDonald (Eds.),
Human Factors in aviation operations. Aldershot: Avebury Aviation.
Tversky, A. and Kahneman, D. (1974). Judgement under uncertainty: Heuristics
and biases. Science, 185, 1124-1131.
Wainwright, W. (2000). Integration of Situational Awareness on Airbus Flight
Decks. In, Proceedings of ‘Situational Awareness On The Flight Deck: The
Current And Future Contribution By Systems And Equipment’ 2000. London:
Royal Aeronautical Society.
Wiener, E.L. (1977). Controlled flight into terrain accidents: System-induced
errors. Human Factors, 19, 171-181.
Acknowledgements
This work was carried out as part of a UK Department of Trade and Industry
funded project, IED:4/1/2200 ‘A Model-Based Reasoning Approach to Warning
and Diagnostic Systems for Aircraft Application’. Thanks are due to the late
David Eyre for his meticulous data analyses.
