Researchers at RIT are developing an improved visual tracking system that can more accurately locate and follow moving objects under surveillance. Using deep learning, an artificial intelligence technology, the system could generate more reliable readings of moving objects, even when those objects become obscured or change patterns and direction.
Andreas Savakis, a professor of computer engineering, is developing the technology through Advanced Adaptive Visual Tracking Algorithms for Aerial Platforms, a three-year, $250,000 project funded by the Department of the Air Force's Materials Command/Systems and Technology. It is one of several interrelated projects that use deep learning to refine the computing algorithms and visual tools needed to follow object movements and understand activities.
Although many systems can recognize and classify various objects, it is essential to track them under variations in illumination or appearance and to re-acquire them when they are occluded by other objects, said Savakis. This research has potential applications in autonomous navigation, drones, traffic monitoring, safety and security, disaster response and human-computer interaction.
Some of the challenges facing current imaging technology, and the need for improvements, arise from objects appearing to change size and shape because of distance and perspective, or from lighting conditions that can alter the apparent color of objects. Researchers will train the system to handle each of these distinct effects by leveraging the power of deep neural networks and visual tracking algorithms.
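The lighting problem described above has a classic, pre-deep-learning illustration: matching a stored template of an object against each new frame with a score that ignores uniform brightness and contrast changes. The minimal sketch below (not the RIT team's actual system, which uses learned deep features in place of raw pixels) uses zero-mean normalized cross-correlation in plain NumPy; all function names here are illustrative.

```python
import numpy as np

def ncc_scores(frame, template):
    """Slide `template` over `frame`, scoring each position with zero-mean
    normalized cross-correlation (NCC). Subtracting each patch's mean and
    dividing by its norm makes the score invariant to uniform brightness
    and contrast shifts -- one of the illumination effects a tracker must
    tolerate. A deep tracker replaces raw pixels with learned features,
    but the matching idea is similar."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    fh, fw = frame.shape
    scores = np.full((fh - th + 1, fw - tw + 1), -1.0)
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            patch = frame[i:i + th, j:j + tw]
            p = patch - patch.mean()
            p_norm = np.sqrt((p ** 2).sum())
            if p_norm > 0 and t_norm > 0:
                scores[i, j] = (p * t).sum() / (p_norm * t_norm)
    return scores

def locate(frame, template):
    """Return the (row, col) of the best-matching position."""
    scores = ncc_scores(frame, template)
    return np.unravel_index(np.argmax(scores), scores.shape)

# Demo: the object is still found after a simulated lighting change,
# because NCC is unaffected by the affine intensity shift.
rng = np.random.default_rng(0)
frame = rng.random((40, 40))
template = frame[10:18, 22:30].copy()   # object's appearance at time 0
darker = frame * 0.5 + 0.1              # later frame: dimmer, washed out
print(locate(darker, template))         # → (10, 22)
```

Note that this simple scheme fails under the other challenges named above, such as scale change from distance and perspective; that is precisely where learned feature representations earn their keep.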
"We'll be able to learn more about the objects as we track them," Savakis explained. His team will produce a prototype tracking system, building upon earlier research conducted on video analytics. Systems are supplied with this type of data to build a knowledgebase and distinguish the variety of visual imagery a computing system is asked to detect and track, most being complex scenes that are continually changing.