How do drowning detection systems identify water incidents?

May 20, 2022

GUIDANCE

This article uses a Q&A format to answer some of the most commonly asked questions about how swimming pool drowning detection systems (DDS) work. The focus is primarily on camera-based drowning detection systems, with the main points on wearable DDS technology covered at the end of the article.

 

Do all drowning detection systems work in the same way?

No, drowning detection systems work differently due to differences in their hardware and software.

 

What types of hardware does a DDS use?

The hardware determines the input data the DDS will receive. Depending on the hardware, input data may include:

  • Continuous video footage
  • Infrared video footage
  • Heart rate and other biometric data
  • Global Positioning System (GPS)
  • Sonar

 

How does DDS software process the input data?  

Software design determines how the input data is processed via an algorithm. Software design may include:

  • The data points the system identifies and uses as indicators
  • The pre-set thresholds that determine whether a data point is significant or not
  • How data points are compared/synthesized
  • The pre-set thresholds that determine an activation/warning
  • How the algorithm updates according to the lifeguard input via the console
  • How the algorithm updates according to the lifeguard input at other locations
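
The threshold-and-synthesis steps above can be sketched in code. This is an illustrative sketch only: the function name, indicator names and threshold values are assumptions for the example, not taken from any real system.

```python
# Hypothetical sketch of a DDS decision step: compare each data point
# against its pre-set threshold, then synthesize the results into a
# single activation decision. All names and values are illustrative.

def should_alert(indicators: dict, thresholds: dict) -> bool:
    """Return True if the synthesized indicators warrant a warning."""
    significant = {
        name: value >= thresholds[name]
        for name, value in indicators.items()
    }
    # Here the synthesis rule is simply "any indicator is significant";
    # real systems weight and combine indicators in proprietary ways.
    return any(significant.values())

indicators = {"seconds_motionless": 12.0, "vertical_seconds": 4.0}
thresholds = {"seconds_motionless": 10.0, "vertical_seconds": 8.0}
print(should_alert(indicators, thresholds))  # True: one threshold exceeded
```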

 

How can DDS software identify “objects” in the water?

Cameras receive data on the amount of light reflected by objects within a visual field. The software divides the visual field into squares (pixels). An image is formed by the amount of light detected in the visual field within each pixel. For systems reliant on cameras, the difference in reflectivity between adjacent pixels helps to distinguish objects from the water.

Systems that have infrared data input can distinguish a pool user from the surrounding water via the difference in heat signature.
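
The reflectivity principle above can be shown with a minimal sketch, assuming a grayscale frame where each value is the amount of reflected light in one pixel. The water level and contrast threshold are illustrative assumptions.

```python
# Flag pixels whose reflectivity differs markedly from the assumed
# reflectivity of open water as belonging to an object.

WATER_LEVEL = 40   # assumed typical reflectivity of open water
CONTRAST = 25      # assumed minimum difference to count as "object"

def object_mask(frame):
    return [
        [abs(px - WATER_LEVEL) > CONTRAST for px in row]
        for row in frame
    ]

frame = [
    [42, 38, 41],
    [40, 90, 88],   # brighter patch: candidate object pixels
    [39, 85, 37],
]
print(object_mask(frame)[1][1])  # True: differs strongly from water
```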

 

How does DDS software distinguish a pool user from other objects in the water?

Most camera-based algorithms use an object’s size and shape to distinguish a pool user from other object types. 

As human beings, pool users have a level of reflectivity that falls within a narrow range. A pool user's body can be identified as a mass of adjacent pixels sharing similar levels of reflectivity, which can be distinguished from the surrounding water. This method can be used by overhead and underwater cameras, and optimal performance is achieved when both are combined. Eng (2003) described two main methods of background modelling which can be used in camera-based systems to distinguish foreground objects (i.e. pool users) from their background (i.e. the water): 

  • Pixel-based methods, such as the Pfinder system utilizing a Gaussian distribution in YUV colour space as proposed by Wren (1997). This approach has been adapted for outdoor pool use by Friedman and Russell (1997), Stauffer and Grimson (1999, 2000), Haritaoglu et al. (2000) and Elgammal et al. (2002). A major limitation is reduced accuracy where there are significant background movements, such as in busy pools. 
  • Block-based methods, proposed by Hsu et al. (1984) and Matsuyama et al. (2000), measure the relationship between a pixel and a search window. This method can distinguish changes in illumination and colour to ascertain movement patterns. DAWS uses this system. 
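
The pixel-based approach can be sketched as a per-pixel running Gaussian model: a new value is foreground if it lies too many standard deviations from the background mean. The parameters below are illustrative assumptions; real systems such as Pfinder work in YUV colour space with per-channel statistics.

```python
# Per-pixel Gaussian background model, sketched for one pixel.
import math

class PixelModel:
    def __init__(self, mean, var=25.0, alpha=0.05, k=2.5):
        self.mean, self.var = mean, var
        self.alpha, self.k = alpha, k   # learning rate; threshold in std devs

    def is_foreground(self, value):
        return abs(value - self.mean) > self.k * math.sqrt(self.var)

    def update(self, value):
        # Only background observations adapt the model, so a slowly
        # changing background (lighting, ripples) is absorbed over time.
        if not self.is_foreground(value):
            d = value - self.mean
            self.mean += self.alpha * d
            self.var = (1 - self.alpha) * self.var + self.alpha * d * d

pixel = PixelModel(mean=40.0)
print(pixel.is_foreground(42))   # False: close to background mean
print(pixel.is_foreground(95))   # True: likely a foreground object
```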

A common issue with systems reliant on reflected light is distinguishing the pool user from their shadow. There are several ways different data sources can act as workarounds:

  • Infrared heat signature data can be used. The pool user and their shadow cast different heat signatures.
  • Sonar data can be used. The acoustic reflection produced by a pool user is different to that produced by the pool bottom.  
  • Patterns in light reflectivity data over short periods can be used to track a pool user’s movements.

Human body temperature falls within a narrow range. Systems that have infrared data input can therefore distinguish a pool user from other objects via the difference in heat signature.

 

How does DDS software identify patterns in the pool user’s movement?

Algorithms can be programmed to recognize that objects of a certain size and shape represent a pool user who is horizontal or vertical in the water. Some systems are programmed to alert the lifeguard if a pool user remains in a vertical position for a protracted period.
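
The vertical-position rule above can be sketched as follows. This is a hedged illustration: the bounding-box representation, the 10-second limit and the 1 fps sampling rate are assumptions, not values from any real system.

```python
# Alert if the tracked object's bounding box stays "vertical"
# (taller than wide) for longer than a pre-set period.

VERTICAL_SECONDS_LIMIT = 10   # assumed threshold

def seconds_vertical(boxes, fps=1):
    """boxes: per-frame (width, height) of the tracked pool user.
    Returns the longest continuous vertical run, in seconds."""
    run = longest = 0
    for w, h in boxes:
        run = run + 1 if h > w else 0
        longest = max(longest, run)
    return longest / fps

boxes = [(30, 80)] * 12 + [(80, 30)] * 3   # 12 s vertical, then horizontal
print(seconds_vertical(boxes) > VERTICAL_SECONDS_LIMIT)  # True: alert
```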

A visual surveillance system involves complex tasks such as the detection of targets under varying ambient conditions, the simultaneous detection and tracking of multiple targets, and the recognition of complex behaviours that vary between persons (Eng, 2003). A drowning detection system (DDS) is only as good as the quality of data its models are built on. A large, diverse dataset is essential to eliminating inconsistent detection performance between pool users. Programmers must understand the multitude of ways in which people can drown to avoid building systems on false assumptions. 

Other systems are programmed to alert a pool supervisor if the object remains motionless, or of limited motion within a pre-defined range, for a protracted period. Menoud (1999), Meniere (2000), Guichard et al. (2001) and Lavast (2002) proposed systems of underwater cameras that could detect drowning at a late stage, when the pool user had sunk to the bottom. This is the most common mechanism by which a camera-based DDS will detect a pool user on the bottom of the pool.

Movement patterns can be determined by reference to changes in reflectivity data over short time periods. Some DDS utilizing overhead cameras are programmed to detect the rapid or erratic movement exhibited by some casualties in danger.  

Wang et al. (2020) reported the results of a sonar DDS which measured the position and trajectory of the pool user, and the water in the pool user's lungs, to detect an incident. Systems utilizing sonar distinguish objects from the surrounding water by sending pulses of sound inaudible to the human ear through the water. Human beings reflect more sound than water, enabling their location to be determined. The reflected sound can be measured using sensitive microphones.
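
The underlying sonar principle is time-of-flight ranging: the round-trip time of a pulse gives the distance to the reflecting object. The speed of sound in fresh water (~1480 m/s) is a physical constant; the echo times below are made-up example values.

```python
# Convert a sonar pulse's round-trip time into a distance.

SPEED_OF_SOUND_WATER = 1480.0  # metres per second, approximate (fresh water)

def echo_distance(round_trip_s):
    # Sound travels to the object and back, so halve the path length.
    return SPEED_OF_SOUND_WATER * round_trip_s / 2

# An echo returning sooner than the pool-bottom echo suggests an
# object (e.g. a pool user) in the water column above the bottom.
bottom = echo_distance(0.00270)  # ~2.0 m: the pool floor
target = echo_distance(0.00135)  # ~1.0 m: something above the floor
print(target < bottom)  # True
```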

Currently, I am not aware of a DDS that analyses movement patterns in an infrared heat signature.

 

How accurate are DDS?

Varying degrees of accuracy between systems have been reported, and claims should be robustly investigated by operators to understand the limitations and weaknesses of a system before any investment is made. The test methodology and sample size are particularly important elements in assessing the reliability of claims as to the accuracy of a particular DDS. 

Detection that relies on the pool user reaching the bottom of the pool poses significant challenges for camera-based DDS: 

  • Differences in floatation time - Certain user groups take longer to sink to the bottom of the pool. Women have greater natural buoyancy, and pool users who are obese have increased buoyancy. In-water cardiac arrest often means there are greater quantities of oxygen in the lungs following a loss of consciousness. Pool users who start the drowning process face down are likely to remain buoyant for longer, because air becomes trapped in the lungs; fear of water intake can cause the epiglottis to block the trachea, keeping that air in place. A study by Hunsucker and Davison (2013) reported pool users taking between 3 and 102 seconds, under various swimming pool conditions, to sink to the bottom in pool water with a depth of 1.6 metres.  
  • Differences in pool conditions - Illuminance and visual pollution (i.e. shadows, specular and diffuse reflection) can all reduce accuracy. High water turbidity will further reduce accuracy, and systems are less accurate in salt water than in freshwater. Water turbulence will further exacerbate visual and water pollution issues. Deeper water poses the challenge of reduced illuminance, whilst shallow water poses a different challenge: visual obstructions in the form of standing pool users. 
  • Differences in movement - Static pool users in shallow water can result in increased false detection rates. Systems may register fine motor movements such as agonal breathing and muscle spasms following cardiac arrest as activity, delaying any early warning alarm. Other systems are poor at distinguishing pool users who are treading water, leading to increased false alarms. 

 

What actions can be taken to improve the accuracy of a DDS? 

The inclusion of the following parameters can increase the accuracy of the DDS: 

  • Movement range - Swimming has a larger displacement than distress, drowning, or treading water. 
  • Speed product - Chaotic movement (velocity and trajectory) may provide early warning of an incident. 
  • Posture variation - Pool users in potential danger may demonstrate repetitive arm and leg movements corresponding to a continual change in the swimmer's posture. 
  • Activity variation - The ratio between the area of a best-fit ellipse and the size of the foreground silhouette over a temporal window may provide an early warning of an incident.
  • Size variation - Pool users in danger show less consistency in foreground size. 
  • Submersion index - The position of a pool user below or above the water surface can be used to provide an early warning of a potential incident. Sinking swimmers typically exhibit higher colour saturation in skin appearance. The difference between a swimmer's mean colour saturation and minimum saturation can be tracked. 
  • Use of a polarizer - A polarizer is a well-known photographic technique to suppress virtual layers and reflections (Shurcliff and Ballard, 1964). Provided the position and orientation of the cameras and the reflecting surface are known, the polarizer can separate transmitted and reflected images off a semi-reflecting medium using an estimation of the inclination angle. 
  • Deep learning - In systems utilizing machine learning, the programmer sets the parameters by which the system changes its own programming, based on lifeguard inputs flagging false alarms or positive detections resulting in rescue. Deep learning takes this one step further: the algorithm determines what to track and how to combine data points, with the programmer confirming whether the algorithm has made a positive detection. Repeated over time, a DDS reliant on deep learning should substantially outperform a system based on machine learning, as patterns beyond present human comprehension can be used and relied on by the deep learning model. Deep learning models rely on the availability of very large quantities of swimming pool video data, some of it containing water incidents. Keshi (2021) and Zhiyong et al. (2020) have both explored the application of a deep learning model to a drowning detection system. 
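
One of the parameters above, size variation, can be sketched directly: the consistency of the foreground silhouette's area over a temporal window. The statistic is standard (coefficient of variation); the example area values are assumptions for illustration.

```python
# Size variation: higher values suggest the inconsistent silhouette
# of a pool user in danger; lower values suggest steady swimming.
import statistics

def size_variation(areas):
    """Coefficient of variation of foreground size over a window."""
    mean = statistics.mean(areas)
    return statistics.pstdev(areas) / mean

steady  = [100, 102, 99, 101, 100]   # consistent: normal swimming
erratic = [100, 40, 150, 30, 160]    # inconsistent: possible danger
print(size_variation(steady) < size_variation(erratic))  # True
```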

 

How does wearable DDS differ from camera-based systems? 

Some wearables identify a person in the water via their cardiac output; worn correctly, the wearable distinguishes a pool user from other objects in the water by their cardiac output and rhythm. Others use global positioning system (GPS) data to determine that a person is in the pool. Where a wearable DDS is reliant on GPS data, the DDS identifies the wearable, not the pool user: if the pool user becomes separated from the wearable, the DDS will not identify that the pool user is in danger.

Some wearables complement GPS data with cardiac output data, providing a mechanism for alerting the pool supervisor if the pool user becomes separated from the wearable. Wearables relying on cardiac output data can determine whether a person is in danger by reference to their cardiac rhythm. The DDS can be programmed to identify certain pre-set patterns in cardiac rhythm before alerting a pool supervisor.
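
The pre-set cardiac pattern check described above can be sketched as follows. This is a hypothetical illustration: the heart-rate limit and window length are assumptions, not clinical values.

```python
# Alert if heart-rate samples match a pre-set danger pattern:
# a sustained run of very low readings, or no signal at all
# (which may mean separation from the wearable).

LOW_BPM = 40   # assumed lower limit before alerting
WINDOW = 5     # consecutive samples required

def cardiac_alert(bpm_samples):
    run = 0
    for bpm in bpm_samples:
        # Treat a missing reading (None) like a low reading, so
        # separation from the wearable also raises an alert.
        run = run + 1 if (bpm is None or bpm < LOW_BPM) else 0
        if run >= WINDOW:
            return True
    return False

print(cardiac_alert([72, 70, 35, 34, 33, 32, 30]))  # True
print(cardiac_alert([72, 70, 68, 71, 69, 70, 72]))  # False
```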

Wearables reliant on GPS data are presently unable to determine whether a pool user is on or under the surface of the water. These systems are also unable to detect fine motor movements such as the erratic movement of arms or legs.

 

References

Eng, H., Toh, K., Kam, A., Wang, J. and Yau, W. (2003). An automatic drowning detection surveillance system for challenging outdoor pool environments. Ninth IEEE International Conference on Computer Vision.

Friedman, N. and Russell, S. (1997). Image segmentation in video sequence: A probabilistic approach. Proceedings from the Thirteenth Conf on Uncertainty in Artificial Intelligence. 1302, 175-181. 

Guichard, F., Lavest, J., Liege, B. and Meniere, J. (2001). Poseidon Technologies: The world's first and only computer-aided drowning detection system. Proc. IEEE Conf. Comp. Vision Patt. Recog., DT-8. 

Haritaoglu, I., Harwood, D. and Davis, L. (2000). W4: Real-time surveillance of people and their activities. IEEE Transactions on Pattern Analysis in Machine Intelligence. 22, 809-830.

Hunsucker, J. and Davison, S. (2013). Time required for a Drowning Victim to Reach Bottom. Journal of Search & Rescue. 1(1), 19-28.

Jiaying, H. and Jie, Z. (2019). Automatic Alarm System for Swimming Pool Drowning Based on ZigBee Wireless Positioning. Scientific and Technological Innovation. 13, 74-77.

Keshi, L. (2021). Construction Method of Swimming Pool Intelligent Assisted Drowning Detection Model Based on Computer Feature Pyramid Networks. Journal of Physics Conference Series. 2137. 

Menoud, E. (1999). Alarm and monitoring device for the presumption of bodies in danger in a swimming pool. USA Patent Number 5,886,630. 

Meniere, J. (2000). System for monitoring a swimming pool to prevent drowning accidents. USA Patent Number 6,133,838. 

Shurcliff, W. and Ballard, S. (1964). Polarized light. (Van Nostrand Momentum Books, Princeton, Volume 7). 

Wren, C., Azarbayejani, A., Darrell, T. and Pentland, A. (1997). Pfinder: Real-Time Tracking of the Human Body. IEEE Transactions on Pattern Analysis and Machine Intelligence. 19(7), 780-785.

Zhiyong, G., Jinzhen, H. and Chenggang, D. (2020). Pulmonary Nodule Detection Based on Feature Pyramid Networks. Computer Applications. 40(9), 99-104.

 

Citation: Jacklin, D. 2022. How do drowning detection systems identify water incidents? Water Incident Research Hub, 20 May.