The system, based on ideas published in the October proceedings of the 2004 IEEE International Conference on Systems, Man and Cybernetics, allows users to capture images of a scene from multiple angles and automatically sort the images based on their three-dimensional content. By simply touching an image on a screen, the user can pinpoint a face, a suitcase or any other point of interest and explore it further from different vantage points.
The technology, developed by a team of researchers led by graduate student Sam Mavandadi and Professor Parham Aarabi of U of T's Edward S. Rogers Sr. Department of Electrical and Computer Engineering, could be used to track objects or individuals in airports, casinos and other large environments. "The system allows an operator to quickly and easily search and explore a large environment observed by numerous cameras," says Mavandadi. "It can also be used in movie and television studios to assist directors in selecting the best vantage point to film."
The system creates a three-dimensional model by assigning relevance values to every point within the user's selected area of interest. These values are then used to rank the cameras observing an object or individual and select the most relevant ones. A prototype has been built; the next step is to adapt the system for commercial use.
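The article does not specify the relevance function the researchers use, but the camera-ranking idea can be illustrated with a minimal sketch. Here, relevance is assumed to combine a camera's distance to the selected 3-D point with how directly the camera faces it; the function names, camera fields and weighting are hypothetical, not the actual U of T implementation.

```python
import math

def camera_relevance(camera_pos, camera_dir, point):
    """Score a camera for a 3-D point of interest (assumed metric):
    cameras that are closer and face the point more directly score higher."""
    dx = [p - c for p, c in zip(point, camera_pos)]
    dist = math.sqrt(sum(d * d for d in dx))
    if dist == 0:
        return float("inf")  # camera sits exactly on the point
    # Cosine of the angle between the camera's viewing direction and the
    # direction toward the point (1.0 means the camera faces it head-on).
    to_point = [d / dist for d in dx]
    facing = sum(a * b for a, b in zip(camera_dir, to_point))
    # Cameras facing away contribute nothing; nearer cameras score higher.
    return max(facing, 0.0) / (1.0 + dist)

def best_cameras(cameras, point, k=3):
    """Return the names of the k most relevant cameras for the point."""
    ranked = sorted(
        cameras,
        key=lambda c: camera_relevance(c["pos"], c["dir"], point),
        reverse=True,
    )
    return [c["name"] for c in ranked[:k]]

cameras = [
    {"name": "cam_A", "pos": (0, 0, 0), "dir": (1, 0, 0)},   # faces the point
    {"name": "cam_B", "pos": (10, 0, 0), "dir": (1, 0, 0)},  # faces away from it
    {"name": "cam_C", "pos": (5, 6, 0), "dir": (0, -1, 0)},  # side view, farther
]
point = (5, 0, 0)
print(best_cameras(cameras, point, k=2))  # → ['cam_A', 'cam_C']
```

In a deployment with many cameras, a ranking like this is what would let an operator touch a point on one image and immediately be shown the other views that see it best.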
The research was funded by the Canada Research Chair program.
CONTACT: Professor Parham Aarabi, Edward S. Rogers Sr. Department of Electrical and Computer Engineering, 416-946-7893, email@example.com or Karen Kelly, U of T public affairs, 416-978-6974, firstname.lastname@example.org