
Improving Nose-Tail Tracking in EthoVision XT


How Nose-Tail Tracking Works

EthoVision XT can use two different methods to track the nose and tail:

  • Contour-based detection looks at the outline of the area detected as the animal to find the correct shape of the nose and tail.
  • Deep learning uses a trained neural network model to identify the nose and tail of a mouse or rat. This method, available in EthoVision XT 16 and up, uses the image of the animal and can accurately place the nose and tail even outside the yellow detected area. Deep learning requires a suitable NVIDIA GPU; without one, you must use contour-based detection. When deep learning is available and you are working with rodents, it will generally give the best results.

Contour-Based Detection

When using contour-based detection, it's important to adjust the detection settings so that the yellow area (which indicates everything detected as the animal) corresponds as closely as possible to the real outline of the animal, since the outline is being used to identify the points.
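
EthoVision's own contour algorithms are not documented in detail, but the basic idea of reading body points from an outline can be illustrated with a small, hypothetical Python sketch using OpenCV. The file name and threshold below are placeholder assumptions, and taking the two farthest-apart outline points is only a crude stand-in for the real nose/tail logic:

    import cv2
    import numpy as np

    # Load one video frame in grayscale (path and threshold are placeholders).
    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

    # Threshold so the animal becomes a white blob on a black background,
    # comparable to the yellow "detected area" in EthoVision (a dark animal
    # on a light background is assumed here).
    _, mask = cv2.threshold(frame, 60, 255, cv2.THRESH_BINARY_INV)

    # Take the largest contour as the animal's outline.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    outline = max(contours, key=cv2.contourArea).reshape(-1, 2)

    # Crude stand-in for nose/tail detection: the two outline points that lie
    # farthest apart. This only works if the full outline, including the tail
    # tip, is part of the detected area.
    dists = np.linalg.norm(outline[:, None, :] - outline[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmax(dists), dists.shape)
    print("endpoint 1:", outline[i], "endpoint 2:", outline[j])

If the detected area stops short of the nose or tail, such extreme points land somewhere on the body instead; that is the failure mode the detection settings are meant to avoid.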

In the Detection Settings, below the detection method, there is a pulldown menu for selecting the algorithm that identifies the nose and tail points. The options are:

  • Any species / Default: This is a general method that works for many animals. If tracking rodents, make sure the tail is detected. (In EthoVision XT 14 and earlier, this was called "Shape-Based / Default".)
  • Rodents / Default: This method fits a model rodent shape to the detected outline. It is more robust for rodents than the above method because it does not require the nose and tail to be visible. It's a good choice when tracking a single rodent without occlusions or difficult tracking conditions, for example in an open field, water maze, or novel object test. (In EthoVision XT 14 and earlier, this was called "Simple Model / Rodents".)
  • Rodents / For Occlusions: This method works better than the default method if parts of the animal will be occluded from view, e.g., by bars in a cage or by another animal. It is more computationally demanding, however. It works best if the animal's tail is not tracked; you can use the Erosion and Dilation settings to adjust this (a short sketch after this list illustrates the idea). (In EthoVision XT 14 and earlier, this was called "Advanced Model / Rodents".)
  • Adult Fish / For Occlusions: This method is best for adult zebrafish and fish of similar shape, when viewed from above. Make sure the whole body of the fish is detected, including the tail. (In EthoVision XT 14 and earlier, this was called "Advanced Model / Adult Fish".)
  • Other Species / For Occlusions: This method may work better for other types of animals, or fish from the side. (In EthoVision XT 14 and earlier, this was called "Advanced Model / Other".)
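
The Erosion and Dilation settings mentioned for the "Rodents / For Occlusions" method are standard morphological image operations. The following sketch (not EthoVision code; the mask file and kernel size are assumptions) shows why eroding and then dilating a detection mask removes the tail while leaving the body largely intact:

    import cv2
    import numpy as np

    # Binary mask of the detected animal (white = animal); the path is a placeholder.
    mask = cv2.imread("detected_animal_mask.png", cv2.IMREAD_GRAYSCALE)

    # Erosion shrinks the white area inward from its border. Thin structures
    # such as a rodent's tail disappear first.
    kernel = np.ones((5, 5), np.uint8)   # larger kernel = stronger shrinkage
    body_only = cv2.erode(mask, kernel, iterations=2)

    # Dilating afterwards grows the body back to roughly its original size,
    # but the tail, having vanished completely, does not return.
    body_restored = cv2.dilate(body_only, kernel, iterations=2)

    cv2.imwrite("body_without_tail.png", body_restored)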

Optimizing the "Rodents / For Occlusion" method

When using this method (called "Advanced Model / Rodents" in EthoVision XT 14 and earlier), you can improve the results by telling EthoVision the size of your animals. To do so:

  • In the Detection Settings, first optimize the detection of the animal's body so that the detected area (in yellow) matches the real outline of the animal as closely as possible.
  • Click the Advanced button next to Subject Size.
  • In the middle panel of the window that opens, labeled "Modeled Subject Size", the size of the currently-detected animal will be displayed under Current Pixels. You can either copy this value to the Average Pixels by clicking the Grab button, or click on that value and edit it directly. You want the Average Pixels to be a reasonable estimate of the typical size of the animal. (The actual size will vary from frame to frame based on the posture of the animal, but try to set it close to the average.) 
  • Then, check the checkbox under Fix.

This will often improve the accuracy of nose-tail tracking when using this method.

If you are tracking multiple animals in the same arena, and EthoVision sometimes identifies them as a single large animal when they are in contact, reducing the Average Pixels value may help.
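
If you want a rough, independent estimate of the animal's typical size in pixels before entering a value under Average Pixels, a small script along these lines can average the detected blob area over a recording. This is only a hypothetical cross-check outside EthoVision; the video path and threshold are assumptions, and the simple global threshold stands in for your actual detection settings:

    import cv2
    import numpy as np

    # Average the area of the largest detected blob over all frames of a recording.
    cap = cv2.VideoCapture("openfield_recording.mp4")
    areas = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # A dark animal on a light background is assumed here.
        _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            areas.append(cv2.contourArea(max(contours, key=cv2.contourArea)))
    cap.release()

    if areas:
        print(f"typical detected size: {np.mean(areas):.0f} px "
              f"(range {np.min(areas):.0f}-{np.max(areas):.0f} px across frames)")

The mean is a sensible starting point for Average Pixels; the range gives an idea of how much the detected size varies with posture.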

Deep Learning

Deep learning will work best when the outline of the animal is visible everywhere in the arena. As always, you will get best results when the lighting is relatively uniform. The animal should be at least 30 pixels in length to provide enough detail.

When using deep learning, there are only two new settings in the Detection Settings. There is a checkbox for "Hooded rats", which should be checked if your animals are a mixture of black and white rather than a uniform color. There is also a Define button, which allows you to specify the Cutout size. Clicking the Define button should show an image of your animal with a yellow box around it. The yellow box should extend 0.5-1 body lengths beyond the animal in all directions (not including the tail). You can click the Automated button to set it automatically, or adjust it manually with a slider. This tells the model how big to expect your animal to be.
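
The two size-related requirements above (an animal of at least about 30 pixels in length, and a cutout extending 0.5-1 body lengths beyond the animal, tail excluded) are easy to sanity-check with a little arithmetic before you start tracking. The numbers below are made-up example values, not recommendations:

    # Quick sanity check of the deep-learning size requirements described above.
    # All values are example assumptions; substitute your own setup.
    arena_width_cm = 45.0    # real width of the area covered by the camera image
    image_width_px = 1280    # horizontal camera resolution over that width
    body_length_cm = 9.0     # nose to tail base, tail excluded

    px_per_cm = image_width_px / arena_width_cm
    body_length_px = body_length_cm * px_per_cm
    print(f"animal length: {body_length_px:.0f} px "
          f"({'OK' if body_length_px >= 30 else 'too small'}; minimum is about 30 px)")

    # Cutout extending 0.5-1 body lengths beyond the animal on every side:
    for margin in (0.5, 1.0):
        cutout_px = body_length_px * (1 + 2 * margin)
        print(f"cutout with a {margin} body-length margin: about {cutout_px:.0f} px square")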

General Advice

If the center of the animal is tracked accurately, and the nose and tail are tracked accurately most of the time, but there are quite a few nose-tail swaps, the following suggestions can significantly improve tracking of the nose and tail:

  • When you draw the arena, make sure it is big enough to include the rat when it rears up against the sides, but not so large that it takes in external features, such as dark or light edges, that the system may mistake for your animal. If the rat can stick its nose outside the arena, EthoVision will not be able to detect its nose.
  • If there are reflections on the side walls that the system mistakes for your animal (or that merge the animal and its reflection into one big shape), one option is to lightly sand the walls of the arena so they are less shiny.
  • The contrast setting should be set such that the entire animal is detected. You may find that a higher contrast setting accurately finds the center of the animal but misses the edges; in that case EthoVision cannot see the outline of the animal and will have trouble identifying the nose and tail from it. Set the contrast to the lowest setting that still correctly identifies the animal without much noise in the arena. (A sketch at the end of this list illustrates the effect.)
  • For best results the lighting should be even, indirect and diffuse. Please see the manual for details about this. The diagram in the manual about placing lights in relation to a water maze is also applicable here. If the lighting is uneven, the appropriate contrast levels may vary between different areas of the arena. Also, uneven lighting will create shadows which will affect the outline of the animal.
  • You do not need bright lighting. Opening up the aperture (diaphragm) of your camera a little may improve the contrast without changing the lighting.
  • For nose-tail tracking it is often best to track at the maximum sample rate (25-30 samples per second), so that the tracker has as much information as possible.
  • Sometimes, especially if the load on the computer is high, you may get better results tracking from a video file. If you find that is the case, you can use 2-stage tracking (first acquire the video with EthoVision, then track from that video). When you track from video, always use the Detection Determines Speed setting in acquisition.
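
The contrast point in the list above can be made concrete with a small comparison sketch. This is hypothetical and runs outside EthoVision; the image path and threshold values are placeholders, and a simple global threshold only stands in for the contrast setting:

    import cv2

    gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

    # Compare how much of the animal survives at two detection "strictness"
    # levels; a dark animal on a light background is assumed.
    for name, thresh in [("strict", 40), ("relaxed", 80)]:
        _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            print(f"{name}: nothing detected")
            continue
        animal = max(contours, key=cv2.contourArea)
        perimeter = cv2.arcLength(animal, True)   # length of the detected outline
        print(f"{name}: area {cv2.contourArea(animal):.0f} px, outline {perimeter:.0f} px")

If the strict setting reports a noticeably smaller area and shorter outline than the relaxed one, the nose and tail are probably among the pixels being lost.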