Using patented tracking algorithms based on information theory, the Snap Network Intelligence engine automatically discovers the relationships between cameras’ fields of view, without requiring time-consuming and often unreliable manual configuration.

Operating continuously at the sub-camera level, Snap determines which parts of each camera’s field of view relate to every other camera. In practice, this means two vital types of relationship are continually discovered:

  1. Pairs of regions within cameras that have a high probability of overlapping; and
  2. Pairs of regions within cameras that have a high probability of NOT overlapping.
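The source does not disclose Snap’s actual algorithm, but one plausible information-theoretic approach to discovering these region pairs is to compare activity streams: if motion in one region consistently coincides with motion in another, the regions likely overlap; if the streams are statistically independent over many observations, they likely do not. A minimal sketch, using mutual information over hypothetical binary motion signals:

```python
import math

def mutual_information(a, b):
    """Mutual information (bits) between two binary motion streams.

    a, b: lists of 0/1 samples, e.g. "was motion seen in this region
    during time slot t?". High MI suggests the regions observe related
    activity (a candidate overlap); MI near zero over many samples
    suggests the regions are probably unrelated.
    """
    n = len(a)
    mi = 0.0
    for x in (0, 1):
        for y in (0, 1):
            p_xy = sum(1 for i in range(n) if a[i] == x and b[i] == y) / n
            p_x = sum(1 for v in a if v == x) / n
            p_y = sum(1 for v in b if v == y) / n
            if p_xy > 0:
                mi += p_xy * math.log2(p_xy / (p_x * p_y))
    return mi

# Two regions whose motion always coincides score 1 bit (strong overlap
# candidate)...
print(mutual_information([1, 0, 1, 0, 1, 0, 1, 0],
                         [1, 0, 1, 0, 1, 0, 1, 0]))  # → 1.0
# ...while statistically independent regions score 0 bits.
print(mutual_information([1, 0, 1, 0, 1, 0, 1, 0],
                         [1, 1, 0, 0, 1, 1, 0, 0]))  # → 0.0
```

In a real deployment the streams would be long-running and the scores thresholded into the two relationship types above; this toy version only illustrates the statistical principle.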

This powerful analysis operates at large scale. For example, a network of 1,000 cameras, each divided into 100 regions, yields 100,000 regions and therefore 10 billion possible region-to-region relationships; Snap finds the “relationship” needles in that haystack.
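The arithmetic behind that scale claim is straightforward to verify:

```python
cameras = 1_000
regions_per_camera = 100

total_regions = cameras * regions_per_camera   # 100,000 regions network-wide

# Every ordered pair of regions is a candidate relationship to evaluate:
candidate_pairs = total_regions ** 2

print(candidate_pairs)  # → 10000000000 (10 billion)
```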

Armed with what it has learned about camera relationships, Snap supports key applications that deliver real customer value, such as Snap’s video pursuit feature, which lets operators track activity as it moves from camera to camera across the network.


With Snap’s user interface, the operator can focus entirely on tracking a target, without scanning across multiple monitors or relying on detailed local knowledge of camera names, groups and locations.

Snap Force Multiplier operator screen

The centre of the screen shows the current camera view containing the person of interest. The highlighted pink box marks a region with a learned relationship to another camera; a view from that camera appears on the left-hand side. A green box works the same way, corresponding to the green camera view at the top left. As the target moves, this system of coloured boxes prompts the operator as to which camera view is best to look at next.
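The handoff logic behind those coloured prompts can be thought of as a lookup: given the target’s current camera and region, surface the candidate cameras with the strongest learned relationships first. A sketch, with a hypothetical relationship table and illustrative names and scores (not Snap’s actual data model):

```python
# Hypothetical learned relationship table: for each (camera, region),
# candidate next cameras with their estimated overlap probability.
relationships = {
    ("cam_lobby", 42): [("cam_corridor", 0.93), ("cam_entrance", 0.71)],
}

def next_cameras(camera, region, top_n=2):
    """Rank which camera views to surface to the operator, best first."""
    candidates = relationships.get((camera, region), [])
    return sorted(candidates, key=lambda c: c[1], reverse=True)[:top_n]

# If the target is in region 42 of the lobby camera, the UI would
# highlight cam_corridor first, then cam_entrance.
print(next_cameras("cam_lobby", 42))
```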