Position Tracking


The position tracking feature lets you track the XY coordinates and other properties of multiple persons, which is useful in many stage and installation applications. In the following description we assume that the camera is mounted above the tracking area. You can, for example, map coordinates to MIDI events and sound parameters. In many applications this feature has been used to send the tracked positions to external programs like Max/MSP via MIDI or OSC.

Position tracking is not an easy task for a computer, which has no inherent understanding of a body. For the tracking function, EyeCon looks at the 'static difference image': it tries to find connected areas that differ from the stored background image, the so-called 'blobs'. The EyeCon tracking feature is predictive and makes intelligent assumptions. Once the software has identified a blob, it tries to follow it continuously by predicting its most likely position in subsequent video frames. This makes the tracking more robust. If the tracked object is lost for a short time due to lack of picture contrast or other circumstances, its last position is kept for a 'persistence' period.
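The prediction-plus-persistence idea can be sketched as follows. This is only an illustration, not EyeCon's actual code; the class name, the constant-velocity prediction, and the frame-based persistence counter are assumptions:

```python
# Sketch of predictive tracking with a persistence period.
# Assumption: constant-velocity prediction; EyeCon's real model may differ.

class TrackedBlob:
    def __init__(self, x, y, persistence_frames=25):
        self.x, self.y = x, y
        self.vx, self.vy = 0.0, 0.0
        self.missing = 0                      # frames without a matching blob
        self.persistence = persistence_frames

    def predict(self):
        # Most likely position in the next video frame.
        return self.x + self.vx, self.y + self.vy

    def update(self, x, y):
        # A matching blob was found: update position and velocity.
        self.vx, self.vy = x - self.x, y - self.y
        self.x, self.y = x, y
        self.missing = 0

    def coast(self):
        # No matching blob: keep the last position for the persistence period.
        self.missing += 1
        return self.missing <= self.persistence   # False -> drop the object
```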

Tracking multiple persons raises new problems. In dance performances (and other situations) the tracked objects sometimes get very close to each other. Technically this means that two or more blobs visually merge in the video image; the computer then sees just one object. If this happens in the central part of the tracked area, EyeCon assumes that both objects are still present, since persons usually don't simply disappear. But now there is a problem: what happens when those merged blobs drift apart again? The software doesn't know their identities, so we might lose track of who is who (which can also be fun).
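The merge rule can be sketched roughly like this. The function and field names are assumptions, and the rectangle test for the central area is a simplification of whatever EyeCon actually does:

```python
# Sketch: several tracked objects fall on a single merged blob.
# Assumption: the "central area" is an axis-aligned rectangle.

def assign_merged(objects, blob, central_area):
    """If the merge happens inside the central area, keep all objects
    alive and let them share the merged blob's position. When the blob
    later separates, the identities may come out swapped."""
    x, y = blob["pos"]
    x0, y0, x1, y1 = central_area
    if x0 <= x <= x1 and y0 <= y <= y1:
        for obj in objects:
            obj["pos"] = blob["pos"]   # both persons assumed still present
    return objects
```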

Creating a position tracker

Creating a tracking object is as simple as creating a touchline or a dynamic field: just click the tracker creation button in the element editor. As a visual representation of the tracker, an elliptical object is created in the upper left corner of the video image. If you want to track more than one person, simply create as many trackers as you need. They are distinguished by an ID number that ranges from 1 to the number of objects you want to track.

What happens when EyeCon is tracking people? While it searches for blobs, it maintains a list of tracked objects. First, EyeCon tries to assign the currently found blobs to the persons it already knows; their position information is then updated. If there are unassigned blobs remaining, there might be a new object to track. EyeCon checks whether those blobs are big enough to form a new object and whether they lie inside the accepted creation area. This is usually the whole image, but in some cases we may want to limit the space where new objects can be detected. New objects are added to the list of tracked objects. To keep a unique identifier, the list stores the creation time of each object; this timestamp is a valuable tag for identifying objects in the list. The last step in the list management is the deletion of entries that are no longer present: if an entry in the object list is not supported by any detected blob for more than a certain time, the object has most likely left the tracked area and is therefore deleted from the list.
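One update step of this list management could be sketched as below. This is an illustration, not EyeCon's implementation: the data layout, the nearest-blob assignment, and the frame-count threshold are all assumptions made for the example.

```python
import math
import time

def manage_tracked_objects(objects, blobs, min_size, creation_area, max_missing):
    """One update step of the tracked-objects list (illustrative sketch).

    objects: list of dicts with 'pos', 'created' (timestamp tag), 'missing'
    blobs:   list of dicts with 'pos' and 'size' found in the current frame
    creation_area: (x0, y0, x1, y1) rectangle where new objects may appear
    """
    unassigned = list(blobs)
    matched = set()

    # 1. Assign found blobs to already known objects (nearest blob wins).
    for i, obj in enumerate(objects):
        if not unassigned:
            break
        nearest = min(unassigned, key=lambda b: math.dist(b["pos"], obj["pos"]))
        obj["pos"] = nearest["pos"]
        obj["missing"] = 0
        matched.add(i)
        unassigned.remove(nearest)

    # 2. Objects not supported by any blob in this frame accumulate absence.
    for i, obj in enumerate(objects):
        if i not in matched:
            obj["missing"] += 1

    # 3. Remaining blobs may form new objects: check size and creation area.
    for blob in unassigned:
        x, y = blob["pos"]
        x0, y0, x1, y1 = creation_area
        if blob["size"] >= min_size and x0 <= x <= x1 and y0 <= y <= y1:
            objects.append({"pos": blob["pos"],
                            "created": time.time(),   # unique identifying tag
                            "missing": 0})

    # 4. Delete entries that have been unsupported for too long.
    objects[:] = [o for o in objects if o["missing"] <= max_missing]
    return objects
```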

Now it is easy to understand the meaning of the ID in the EyeCon tracker element properties: it simply refers to the position in the tracked objects list. The first tracker object you create is by default related to the first entry in the list, the second tracker object to the second entry, and so on.


How does a tracking object fit into EyeCon's simple mapping scheme? Remember, EyeCon basically tries to generate a logical yes/no value and a continuous number, preferably in the value range 0..1. The yes/no decision is obvious: if a tracked object is visible, that is a yes. But what is the continuous number? At the moment there are the following modes:

X and Y coordinate: gives the x or y position. Due to EyeCon's straightforward mapping you have to choose one coordinate as the main controlling parameter for that element (for example, for the volume of a sound). This is a bit disappointing in a three-dimensional world already reduced to the two dimensions of the video image. But you can create two trackers and set them to the same ID if you want to use at least both Cartesian coordinates simultaneously. [By the way: you can choose the origin of the Cartesian coordinate system by moving the graphical representations of the tracker objects to different quadrants. If you move them to the right half of the video image, the origin of the x coordinate is moved to the right side of the tracked area. Not really functional yet, though.]

Distance: a better approach. The distance between the person and the graphical representation of the tracker object is measured. Move the graphical object and make it a hotspot; the volume of a sound can then easily be mapped to the closeness to that hotspot.

Size: another choice: the size of the tracked object is used as the control parameter.
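The coordinate and distance modes can be illustrated with two small functions. These are sketches under assumed names; the linear distance falloff and the axis mirroring for the origin choice are example formulas, not necessarily what EyeCon computes internally:

```python
import math

def normalized_x(x_pixel, image_width, origin_right=False):
    """Map a pixel x coordinate into the preferred 0..1 control range.

    origin_right=True mirrors the axis, as when the tracker's graphical
    representation is moved to the right half of the video image.
    """
    value = x_pixel / image_width
    return 1.0 - value if origin_right else value

def hotspot_volume(person, hotspot, max_dist):
    """Map closeness to a hotspot to a 0..1 value, e.g. a sound volume.

    person, hotspot: (x, y) positions; max_dist: the distance at which
    the value reaches zero. Closer to the hotspot -> larger value.
    """
    d = math.dist(person, hotspot)
    return max(0.0, 1.0 - d / max_dist)
```

A person standing halfway across the image yields 0.5 in coordinate mode; a person standing on the hotspot yields the maximum value 1.0 in distance mode.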

Tracking options

The tracking process has a number of global settings, which are made in the tracking window. This window can be opened from the menu (Window/Tracking), by pressing F8, or with Ctrl-T.

Minimum size: the minimum size a blob must have to be detected as a valid object. Represented by the size of a circle in the tracking window.

Image filtering: the amount of image filtering done before the blobs are analysed. This filter tries to smooth the edges of the blobs; see the filter view to check the effect.

Persistence: determines how long an object is kept in the list of tracked objects while it is temporarily not visible.

Range: determines how far from the center of each tracked object EyeCon looks when analysing the next video image.

Creation area: the area where new objects can be detected. Represented by a red rectangle, which by default covers almost the whole camera image.

Fallback mode: decides what happens when objects with low IDs disappear before objects with higher IDs. The IDs can either be kept or be renumbered (the latter is called fallback here).
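The renumbering variant can be sketched in a few lines. The function name and the dictionary return value are assumptions made for this illustration:

```python
def apply_fallback(ids_present):
    """Renumber surviving tracker IDs so they fall back to 1..n (sketch).

    ids_present: the IDs still alive, e.g. [2, 4] after the objects
    with IDs 1 and 3 disappeared. With fallback enabled the survivors
    become 1 and 2; without fallback they would keep their original IDs.
    Returns a mapping from old ID to new ID.
    """
    return {old: new for new, old in enumerate(sorted(ids_present), start=1)}
```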