Details And Proposals For Object Detection

From SpaceElevatorWiki.com

Revision as of 15:39, 5 July 2008

Object Detection

Here we have the most complicated situation you can have: We want to detect moving objects in a video stream from moving cameras.

Usually, detecting moving objects is done with cameras that are not in motion. In that situation you can use successive frames to compute differential images: unchanged regions of the image become dark, and the remaining regions are the edges of objects in motion. So you can use the differential image as a mask on the current frame and extract the edges to detect. You still have to combine all the edges correctly to recover the real objects, but if you know what kind of object to expect (cars, for example), it is possible to do.
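The static-camera case described above can be sketched in a few lines. This is a minimal frame-differencing example on a toy 8x8 grayscale scene; the threshold value and the image contents are illustrative assumptions, not part of any real pipeline.

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, threshold=25):
    """Boolean mask of pixels that changed between two grayscale frames
    (simple frame differencing; threshold is an illustrative choice)."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def masked_frame(curr_frame, mask):
    """Keep only the changed regions of the current frame; the
    unchanged background goes dark (zero), as described in the text."""
    return np.where(mask, curr_frame, 0)

# Toy scene: a bright 2x2 'object' moves one pixel to the right.
prev = np.zeros((8, 8), dtype=np.uint8)
curr = np.zeros((8, 8), dtype=np.uint8)
prev[3:5, 2:4] = 200   # object at its old position
curr[3:5, 3:5] = 200   # object at its new position

mask = motion_mask(prev, curr)
edges = masked_frame(curr, mask)
```

Only the leading and trailing edges of the moved object survive in `edges`; the overlapping middle column cancels out, which is exactly why the edges must later be combined into whole objects.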

This simple situation no longer holds if the camera itself is in motion. In the real world it would be very complicated to build a system that works perfectly here. But we don't start in the real world!

To understand what to do, let's have a look at the World Viewer. Our simulation contains all objects of the fictional world. These objects are bodies whose visible outside is defined by textures mapped onto their surfaces. The World Viewer renders the images of all the bodies as seen from a given point.

To be able to detect moving objects from a moving camera's images, you have to:

  • Calculate the camera's estimated new position and rotation.
  • Calculate each detected object's estimated position and rotation with respect to the new image's coordinates.
  • Render the image as you expect to see it from the new position.
  • Then use the default procedure, just as with a camera that is not in motion.
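The four steps above can be laid out as a runnable skeleton. Every helper here is a trivial stand-in: in the real system the pose estimates would come from motion sensors and `render_from` would be the World Viewer, so treat the function names and the constant "rendering" as assumptions for illustration only.

```python
import numpy as np

def estimate_camera_pose(prev_pose, velocity, dt):
    """Step 1 stub: dead-reckon the camera's new position
    (translation only; rotation omitted for brevity)."""
    return prev_pose + velocity * dt

def predict_object_pose(obj_pose, cam_pose):
    """Step 2 stub: express a tracked object's position
    relative to the new camera pose."""
    return obj_pose - cam_pose

def render_from(pose, size=4):
    """Step 3 stub: 'render' a constant image regardless of pose.
    The real World Viewer would render the scene from `pose`."""
    return np.full((size, size), 7, dtype=np.uint8)

def difference(expected, actual, threshold=1):
    """Step 4: the same frame differencing used for a static camera."""
    return np.abs(expected.astype(int) - actual.astype(int)) > threshold

cam = estimate_camera_pose(np.array([0.0, 0.0]), np.array([1.0, 0.0]), dt=0.1)
obj = predict_object_pose(np.array([5.0, 2.0]), cam)
expected = render_from(cam)               # what we expect to see
mask = difference(expected, render_from(cam))  # identical here, so no motion
```

Since the stub renders are identical, the mask is empty; anything that does move would show up exactly as in the static-camera case.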

In the real world this is hard work, but in our situation we get it perfectly if we do nothing!

Why?

The image we want to calculate here is exactly the one the World Viewer renders, at least for the static part of the world! To get what we would have in reality (with a perfect system), we only have to use a trick: combine the static part of the world as seen from the new timestep's position with the image of the dynamic part of the world at the last timestep's positions, but seen from the new position.

This is what we would get in reality. Now we can compare it with the image of the whole world as seen from the new timestep's position. Again we can compute the differential images and find the moving objects.
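The "expected vs. actual" trick can be demonstrated on a toy world. Here the world is a flat 8x8 image and "rendering" just stamps 2x2 rectangles into it; since both renders come from the same (new) camera pose, the camera motion itself is already accounted for, which is the point of the trick. All names and object positions are made up for the sketch.

```python
import numpy as np

def render(static_objs, dynamic_objs, size=8):
    """Toy renderer: stamp 2x2 rectangles (row, col, brightness)
    into a blank grayscale image."""
    img = np.zeros((size, size), dtype=np.uint8)
    for r, c, v in static_objs + dynamic_objs:
        img[r:r + 2, c:c + 2] = v
    return img

static = [(0, 0, 50)]      # a static landmark in the world
car_old = [(3, 2, 200)]    # the dynamic object, last timestep's position
car_new = [(3, 4, 200)]    # the dynamic object, new timestep's position

# Expected image: static world from the new pose, plus the dynamic
# objects frozen at their last known positions.
expected = render(static, car_old)
# Actual image: the whole world as really seen at the new timestep.
actual = render(static, car_new)

# The difference is dark everywhere except where something moved.
diff_mask = expected != actual
```

The static landmark cancels out completely; only the old and new footprints of the moving object remain in `diff_mask`, ready for the usual edge-combining step.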

Simple, isn't it?

In contrast to real life, we start with the perfect system, and once all the stuff works, we can add uncertainty. We can simulate imperfect motion sensors, noise, lighting changes, etc., all the hard things we would face in reality. But we can do it in controlled steps to find out how to handle each of them well.
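Adding uncertainty in controlled steps could look like the following sketch: a hypothetical pose sensor is degraded with Gaussian noise, and the standard deviation is the single knob to turn up gradually. The pose layout and sigma values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed for reproducible experiments

def noisy_pose(true_pose, sigma):
    """Return the true (x, y, heading) pose perturbed by Gaussian
    noise with standard deviation `sigma` per component."""
    return true_pose + rng.normal(0.0, sigma, size=true_pose.shape)

true_pose = np.array([10.0, 5.0, 0.25])      # x, y, heading (radians)
perfect = noisy_pose(true_pose, sigma=0.0)   # step 0: the perfect system
degraded = noisy_pose(true_pose, sigma=0.1)  # step 1: a mildly noisy sensor
```

With `sigma=0.0` the pipeline behaves exactly as in the ideal case, so each increase of `sigma` isolates the effect of sensor error from everything else.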