Definitions and measurements

Falls have been defined and reported in different ways. To track and trend fall data accurately and consistently, it is important for each organization to establish a fall definition. References for the fall definitions listed below are provided in the appended resources.

Reference for fall definitions from Morris & Isaacs, Kellogg and National Quality Forum (NQF)

Reference for fall definition from Florida Hospital Association

Measurements

Rates: Comparing fall rates among different institutions is difficult because of varying fall definitions, differing methods of reporting data, differences in settings and patient populations, and the lack of risk adjustment. The most reliable and useful approach for any organization is to examine its own quality indicator data over time, with the ultimate goal of reducing and eliminating all preventable falls.

The most commonly used statistic to measure and track falls is the “fall rate,” which is calculated as follows:

Fall Rate = (Number of patient falls / Number of patient days) x 1,000

The fall rate for a specified time period is defined as the total number of eligible falls divided by the total number of eligible patient days, multiplied by a constant ("k") of 1,000 to produce a rate per 1,000 patient days. (Using k = 1,000 yields whole numbers rather than small fractions, making the data easier to work with.) Note that the numerator counts all falls, not the number of patients who have fallen, so repeat falls by the same patient are included. The National Quality Forum (NQF), the Maryland Hospital Association Quality Indicator Project, and others use this rate.
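
For illustration only, the calculation can be expressed as a short Python sketch; the function name and the example figures below are hypothetical, not drawn from the text:

```python
def fall_rate(num_falls: int, patient_days: int, k: int = 1000) -> float:
    """Falls per k patient days; every fall is counted, including
    repeat falls by the same patient."""
    if patient_days <= 0:
        raise ValueError("patient_days must be positive")
    return num_falls / patient_days * k

# Example: a unit reporting 12 falls over 3,400 patient days has a rate of
# 12 / 3,400 x 1,000, or roughly 3.5 falls per 1,000 patient days.
print(round(fall_rate(12, 3400), 1))  # 3.5
```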

Other rates found in the literature are also used to track and trend fall data.

Comparisons

Risk adjustment: The variety of rates found in the literature demonstrates the difficulty of comparing studies that use different calculations, and highlights the importance of comparing like rates and determining whether they are risk-adjusted. For example, compilations of several relatively recent research studies reported by Morse give fall rates (per 1,000 bed days) of 2.2 to 7.0 in acute care hospitals, 11.0 to 24.9 in long-term care hospitals, and 8.0 to 19.8 in rehabilitation hospitals. Reported injury rates range from 29% to 48% of falls, with 4% to 7.5% of falls resulting in serious injury. Other reviews suggest that the average rate for acute care hospitals is in the range of 2.5 to 3.5 falls per 1,000 bed days. In reviewing such studies, it is critical to note the calculation method used and whether the data are risk-adjusted.

There are a few sources of risk-adjusted data available for comparison with external organizations serving "similar" populations, though a facility must risk-adjust its own data using the same definitions for the comparison to be valid. Risk-adjusted comparison rates are available from performance measurement systems such as the Maryland Hospital Association Quality Indicator Project, although access to the information requires a subscription.

Trending: Although it is valuable to trend reported patient falls per 1,000 patient days, care should be taken when comparing one patient care unit with another, or an individual unit with the overall organizational rate, much less with other organizations, unless the rates are risk-adjusted. It may be more valuable to generate a control chart for each unit so that, over time, each unit can determine whether its processes are stable. If they are not, the data should trigger an investigation to identify possible causes and remedial actions. Regardless of whether processes are stable within a unit, areas with relatively high reported fall rates should still look for ways to reduce their median fall rate. This process must consider the nature of the patient population and other factors so that the chosen strategies are appropriate. This approach supports the use of unit trends over time relative to the implementation of strategies, and determination of whether the selected strategies are effective.
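
As one way to picture the control-chart idea, the sketch below builds a simple u-chart (a common control chart for rates) from hypothetical monthly unit data; the figures, variable names, and three-sigma limits are illustrative assumptions, not the method used in the case study. A month whose rate falls outside the limits would prompt an investigation for special causes.

```python
import math

# Hypothetical monthly data for one unit: (falls, patient days) per month.
monthly = [(4, 1450), (6, 1520), (3, 1390), (7, 1480), (5, 1510), (9, 1470)]

falls = [f for f, _ in monthly]
# Express exposure in thousands of patient days so limits match the
# reported rate of falls per 1,000 patient days.
exposure = [d / 1000 for _, d in monthly]

u_bar = sum(falls) / sum(exposure)  # centerline: overall fall rate

for f, n in zip(falls, exposure):
    rate = f / n                         # this month's rate
    sigma = math.sqrt(u_bar / n)         # u-chart standard error
    ucl = u_bar + 3 * sigma              # upper control limit
    lcl = max(0.0, u_bar - 3 * sigma)    # lower control limit (floor at 0)
    flag = "investigate" if rate > ucl or rate < lcl else "in control"
    print(f"rate={rate:4.1f}  LCL={lcl:4.1f}  UCL={ucl:4.1f}  {flag}")
```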

A case study with an accompanying chart demonstrates how falls, interventions and improvements were measured, compared internally and benchmarked with the QI Project. See Sample Procedures and tools.