Carnegie Mellon system lets you get to the good parts of video, fast

Algorithm sorts through tedious GoPro, Google Glass and smartphone videos to get to the good stuff


While video has become ubiquitous, thanks mostly to smartphones, that doesn't mean you want to actually watch all of it.

Carnegie Mellon University computer scientists say they have invented a video highlighting technique called LiveLight that can automatically pick out action in videos shot by smartphones, GoPro cameras or Google Glass users.


LiveLight constantly evaluates action in a video, looking for visual novelty and ignoring repetitive or eventless sequences, to create a summary that enables a viewer to get the gist of what happened. What it produces is a miniature video trailer. Although not yet comparable to a professionally edited video, it can help people quickly review a long video of an event, a security camera feed, or video from a police cruiser's windshield camera, according to Carnegie researchers.

“A particularly cool application is using LiveLight to automatically digest videos from, say, GoPro or Google Glass, and quickly upload thumbnail trailers to social media. The summarization process thus avoids generating costly Internet data charges and tedious manual editing on long videos. This application, along with the surveillance camera auto-summarization, is now being developed for the retail market by PanOptus Inc., a startup founded by the inventors of LiveLight,” the researchers stated.

The LiveLight video summary occurs in "quasi-real-time," with just a single pass through the video. It's not instantaneous, but it doesn't take long — LiveLight might take 1-2 hours to process one hour of raw video and can do so on a conventional laptop. With a more powerful backend computing facility, production time can be shortened to mere minutes, according to the researchers.

Calling it the “ultimate unmanned tool for unlocking video data,” the Carnegie researchers said LiveLight’s algorithm processes the video and compiles a dictionary of its content. The algorithm then uses the learned dictionary to decide, very efficiently, whether a newly seen segment is similar to previously observed events, such as routine traffic on a highway. Segments identified as trivial recurrences or eventless are excluded from the summary. Novel sequences not appearing in the learned dictionary, such as an erratic car or a traffic accident, would be included in the summary, the researchers stated.
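The researchers haven't published the code behind that description, but the core idea — keep a running record of what the camera has already seen and keep only segments that don't resemble it — can be illustrated with a toy sketch. This is a hypothetical simplification, not LiveLight itself: real dictionary learning works on learned visual features and sparse coding, while this version just compares per-segment feature vectors by distance.

```python
import math

def summarize(segments, threshold=1.0):
    """Toy novelty filter loosely inspired by the dictionary idea
    described above (hypothetical, not the actual LiveLight code).

    segments: list of feature vectors, one per video segment.
    Returns indices of segments judged novel enough to keep.
    """
    dictionary = []  # running "dictionary" of content seen so far
    summary = []
    for i, seg in enumerate(segments):
        if dictionary:
            # Novelty = distance to the closest previously seen segment.
            novelty = min(math.dist(seg, d) for d in dictionary)
        else:
            novelty = math.inf  # the first segment is always novel
        if novelty > threshold:
            summary.append(i)   # keep segments unlike anything seen before
        dictionary.append(seg)  # remember the segment either way
    return summary
```

With this single forward pass, repetitive footage (segments close to something already in the dictionary) is dropped, which mirrors the "quasi-real-time, one pass through the video" behavior the researchers describe.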


Though LiveLight can produce these summaries automatically, people also can be included in the loop for compiling the summary. In that instance, LiveLight provides a ranked list of novel sequences for a human editor to consider for the final video. In addition to selecting the sequences, a human editor might choose to restore some of the footage deemed worthless to provide context or visual transitions before and after the sequences of interest.
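The ranked list handed to a human editor amounts to sorting candidate segments by their novelty scores and surfacing the top few. A minimal sketch of that step, assuming novelty scores have already been computed per segment (the function name and parameters are illustrative, not from LiveLight):

```python
def rank_for_editor(novelty_scores, top_k=5):
    """Hypothetical helper: order segments from most to least novel
    so a human editor can review the best candidates first.

    novelty_scores: one score per segment, higher = more novel.
    Returns up to top_k segment indices, most novel first.
    """
    order = sorted(range(len(novelty_scores)),
                   key=lambda i: novelty_scores[i],
                   reverse=True)
    return order[:top_k]
```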

The ability to detect unusual behaviors within long stretches of tedious video could also be a boon to security firms that monitor and review surveillance camera video, the researchers said.

Follow Michael Cooney on Twitter: nwwlayer8 and on Facebook

Check out these other hot stories:

100Mb/sec Ethernet coming to a car near you?

US intelligence agency wants brain-like algorithms for complex information processing

NASA bolsters Pluto-bound spacecraft for 2015 visit

FTC taking robocall death hunt to DEFCON

Mobile phone bill crammers get stuffed with $10 million property forfeiture

NASA forming $3M satellite communication, propulsion competition

Copyright © 2014 IDG Communications, Inc.
