A massive amount of data from video sensors is collected in theater, and there aren't enough analysts or hours available to review it. Reducing the amount of data or the number of sensors isn't the answer, and there will never be enough analysts. The solution lies in better automated capabilities that can identify the areas and activities that require a human analyst's attention.
DARPA's Video and Image Retrieval and Analysis Tool (VIRAT) and Persistent Stare Exploitation and Analysis System (PerSEAS) programs may soon enable better warfighter analysis of the huge amounts of data generated by multiple types of sensors. "Bad guys do bad things, such as all the actions involved in burying an IED, so it is activity that matters. This is especially true when bad guys look, dress, and drive vehicles like those around them," said Mita Desai, DARPA program manager for VIRAT and PerSEAS. "The analysis tools to find these activities and the underlying actions just don't exist, which is why there is such interest in activity-based analysis and exploitation."
Text searching and algorithms for facial and other object recognition already exist. Up to now, however, finding actions of interest within previously untagged, raw video has been a resource drain and such a technical challenge as to seem impossible. Desai explained, "The objectives of VIRAT and PerSEAS are NOT to replace human analysts, but to make them more effective and efficient by reducing their cognitive load and enabling them to search for activities and threats quickly and easily."
With VIRAT's ability to identify and highlight key actions and PerSEAS's ability to recognize dangerous combinations of actions as activities, analysts will soon be able to concentrate on more detailed review and understanding of the data.
The initial prototype of VIRAT works in both forensic mode (finding activities in thousands of hours of archived, untagged data) and streaming mode (highlighting actions as the live video downlink is being viewed). Additional research and development is underway on PerSEAS prior to deployment for field testing.
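The two operating modes can be illustrated with a minimal sketch. Everything here is hypothetical: `detect_actions` is a stand-in for VIRAT's actual (non-public) detection algorithms, and the frame format is invented for illustration.

```python
def detect_actions(frames):
    """Placeholder action detector: flags frames with strong motion.

    A real system would apply learned action-recognition models;
    this stand-in returns (frame_index, label) pairs.
    """
    return [(i, "digging") for i, f in enumerate(frames)
            if f.get("motion", 0) > 0.8]

def forensic_mode(archive):
    """Scan an archive of untagged videos and tag actions of interest."""
    results = {}
    for video_id, frames in archive.items():
        hits = detect_actions(frames)
        if hits:
            results[video_id] = hits
    return results

def streaming_mode(frame_iter):
    """Highlight actions as frames arrive from a live downlink."""
    for i, frame in enumerate(frame_iter):
        for _, label in detect_actions([frame]):
            yield (i, label)
```

The key difference is only in how the data arrives: forensic mode iterates over a stored archive, while streaming mode processes each frame as it comes in and emits alerts immediately.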
VIRAT is focused on full-motion video from platforms such as Predator or aerostats, allowing analysts to either monitor the live downlink for specific actions of interest or search an existing archive for past occurrences. These searches are conducted using a video clip as the input query. VIRAT finds actions that are short in duration and occur in small geographic areas. PerSEAS focuses on wide-area coverage, such as data from Constant Hawk, Gorgon Stare, ARGUS-IS, and other persistent sensors. PerSEAS observes multiple actions over long durations and large geographic regions to postulate complex threat activities. Algorithms from VIRAT provide some of the underlying capabilities within PerSEAS.
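Query-by-clip search of the kind described above can be sketched as a nearest-neighbor comparison of action signatures. This is a simplified illustration, not VIRAT's actual method: `clip_features` stands in for a learned spatiotemporal descriptor, and clips are represented as plain number lists.

```python
import math

def clip_features(clip):
    """Placeholder feature extractor: unit-normalize the raw values.

    A real system would compute a learned descriptor of the motion
    and appearance within the clip.
    """
    norm = math.sqrt(sum(x * x for x in clip)) or 1.0
    return [x / norm for x in clip]

def cosine(a, b):
    """Cosine similarity of two already-normalized feature vectors."""
    return sum(x * y for x, y in zip(a, b))

def search_archive(query_clip, archive, threshold=0.9):
    """Return archived clips whose signature resembles the query clip."""
    q = clip_features(query_clip)
    hits = []
    for clip_id, clip in archive.items():
        score = cosine(q, clip_features(clip))
        if score >= threshold:
            hits.append((clip_id, round(score, 3)))
    return sorted(hits, key=lambda h: -h[1])
```

The analyst supplies an example clip of the action of interest, and the search returns archived clips ranked by similarity, which matches the "video clip as the input query" workflow the article describes.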