This guide covers PyPTV’s splitter mode functionality for stereo camera systems using beam splitters.
Splitter mode is designed for stereo PTV systems in which a single camera's field of view is divided by a beam splitter, producing two views of the same region. This technique is commonly used to achieve stereo vision with a single camera sensor.
PyPTV uses a modern conda environment (environment.yml) and separates tests into headless (tests/) and GUI (tests_gui/) categories. See the README for details.
Use splitter mode when a single camera sensor records multiple views of the same region through a beam splitter.
Enable splitter mode in your YAML configuration:
```yaml
num_cams: 4                        # Even though it is one physical camera
ptv:
  splitter: true
  imx: 512                         # Half width - width of each split view
  imy: 512                         # Half height - height of each split view
  img_name: img/unsplitted_%d.tif
cal_ori:
  cal_splitter: true
  img_cal_name:
    - cal/unsplitted.tif           # Unsplit calibration image
plugins:
  selected_tracking: ext_tracker_splitter
  available_tracking:
    - default
    - ext_tracker_splitter
```
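As a quick sanity check, the relevant flags can be read back with PyYAML. This is a minimal sketch, assuming the settings above are stored in a file named parameters.yaml; the actual file name and layout in your project may differ.

```python
# Minimal sketch: verify the splitter-related flags in the YAML configuration.
# The file name parameters.yaml is an assumption; adjust it to your project.
import yaml

with open("parameters.yaml") as f:
    params = yaml.safe_load(f)

assert params["num_cams"] == 4
assert params["ptv"]["splitter"] is True
assert params["cal_ori"]["cal_splitter"] is True
print(f"Splitter mode enabled for {params['num_cams']} logical cameras")
```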
In splitter mode, PyPTV automatically splits each incoming frame into its separate views and treats them as individual logical cameras. So far the number of views is fixed at 4, but it can probably work for 2 as well. Provide the unsplit image and enable the splitter option in the GUI; the splitting is then handled automatically.
If needed, manually split calibration images:
The plugins/ folder contains an example of manual splitting, although that approach is now obsolete.
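If you do need to split a calibration image by hand, a minimal sketch along the following lines should work, assuming the four views are arranged as quadrants of the unsplit frame and that imageio is available; the quadrant layout and output file names are assumptions, not PyPTV defaults.

```python
# Minimal sketch: split an unsplit calibration image into four quadrant views.
# The quadrant layout and the output names are assumptions, not PyPTV defaults.
import imageio.v3 as iio

image = iio.imread("cal/unsplitted.tif")
h, w = image.shape[:2]
h2, w2 = h // 2, w // 2

views = {
    "cal/cal_split_0.tif": image[:h2, :w2],   # top-left view
    "cal/cal_split_1.tif": image[:h2, w2:],   # top-right view
    "cal/cal_split_2.tif": image[h2:, :w2],   # bottom-left view
    "cal/cal_split_3.tif": image[h2:, w2:],   # bottom-right view
}
for name, view in views.items():
    iio.imwrite(name, view)
```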
Configure image sequence for splitter processing:
```yaml
sequence:
  base_name:
    - img/splitter.%d      # Single image file per frame
  first: 1
  last: 100

# Or for pre-split images:
sequence:
  base_name:
    - img/left.%d          # Left view sequence
    - img/right.%d         # Right view sequence
  first: 1
  last: 100
```
Tune detection for each split view:
```yaml
detect_plate:
  gvth_1: 40       # Threshold for left view
  gvth_2: 45       # Threshold for right view (may differ)
  min_npix: 20
  max_npix: 200
```
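One quick way to check whether the two thresholds are balanced is to count how many pixels exceed each candidate value in a pair of split views. This is only a rough sketch; the image paths below are placeholders for any pair of split views.

```python
# Rough check: compare how many pixels exceed each candidate gray-value
# threshold in the two views. Paths are placeholders, not PyPTV defaults.
import imageio.v3 as iio
import numpy as np

left = iio.imread("img/left_view.tif")
right = iio.imread("img/right_view.tif")

for name, view, gvth in [("left", left, 40), ("right", right, 45)]:
    bright = np.count_nonzero(view > gvth)
    print(f"{name} view: {bright} pixels above threshold {gvth}")
```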
Configure stereo correspondence:
```yaml
criteria:
  corrmin: 50.0    # Higher threshold for stereo matching
  cn: 0.01         # Tighter correspondence tolerance
  eps0: 0.1        # Smaller search window
```
The ext_tracker_splitter plugin provides specialized functionality:
```python
# Example plugin functionality (simplified)
class SplitterTracker:
    def process_frame(self, image):
        # Split the combined image into left and right views
        left_view, right_view = self.split_image(image)

        # Detect particles in each view
        left_particles = self.detect_particles(left_view)
        right_particles = self.detect_particles(right_view)

        # Perform stereo matching
        matched_pairs = self.stereo_match(left_particles, right_particles)

        # Reconstruct 3D positions
        positions_3d = self.reconstruct_3d(matched_pairs)
        return positions_3d
```
Create custom plugins for specialized splitter setups:
```python
# plugins/my_splitter_plugin.py
def my_splitter_sequence(frame_data):
    """Custom sequence processing for a specific splitter setup."""
    # Custom splitting logic
    left_view = extract_left_view(frame_data)
    right_view = extract_right_view(frame_data)

    # Apply custom preprocessing
    left_processed = preprocess_view(left_view)
    right_processed = preprocess_view(right_view)

    return [left_processed, right_processed]
```
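One possible implementation of these helpers is sketched below, assuming the beam splitter places the two views side by side in a single frame. The function names match the sketch above, but the splitting geometry and the preprocessing are assumptions, not part of PyPTV.

```python
# Hypothetical helpers for a side-by-side splitter layout; adapt the slicing
# to your optical arrangement.
import numpy as np

def extract_left_view(frame_data: np.ndarray) -> np.ndarray:
    """Return the left half of the combined frame."""
    return frame_data[:, : frame_data.shape[1] // 2]

def extract_right_view(frame_data: np.ndarray) -> np.ndarray:
    """Return the right half of the combined frame."""
    return frame_data[:, frame_data.shape[1] // 2 :]

def preprocess_view(view: np.ndarray) -> np.ndarray:
    """Crude background removal: subtract the view's median gray value."""
    background = int(np.median(view))
    return np.clip(view.astype(np.int32) - background, 0, None).astype(view.dtype)
```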
Issue: Poor stereo matching between split views
Solution: Revisit the correspondence criteria above (corrmin, cn, eps0) and verify that both views are calibrated consistently.

Issue: Inconsistent detection between views
Solution: Tune the per-view thresholds (gvth_1, gvth_2) until both views yield comparable particle counts.

Issue: Calibration residuals too high
Solution: Check that the calibration image is split the same way as the sequence images and repeat the calibration for each view.
Test your splitter setup:
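Below is a minimal sanity check, assuming a 2x2 quadrant layout, even image dimensions, and an example unsplit frame at img/unsplitted_1.tif (following the img_name pattern above); this sketch is not part of the PyPTV test suite.

```python
# Sketch of a sanity test for the splitter geometry; run it standalone or adapt
# it into a headless test under tests/.
import imageio.v3 as iio

def test_splitter_views():
    image = iio.imread("img/unsplitted_1.tif")   # example frame, path is an assumption
    h, w = image.shape[:2]
    views = [
        image[: h // 2, : w // 2],   # top-left
        image[: h // 2, w // 2 :],   # top-right
        image[h // 2 :, : w // 2],   # bottom-left
        image[h // 2 :, w // 2 :],   # bottom-right
    ]
    # With even dimensions each view is exactly half the frame in both directions
    # (512 x 512 for the configuration shown earlier).
    for view in views:
        assert view.shape[:2] == (h // 2, w // 2)

test_splitter_views()
```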
For time-resolved measurements:
```yaml
sequence:
  base_name:
    - img/splitter_early.%d
    - img/splitter_late.%d   # Different timing
  first: 1
  last: 100
```
Combine splitter mode with multi-camera setups:
```yaml
num_cams: 4                  # 2 physical cameras, each with a splitter
ptv:
  splitter: true

# Configure as 4 logical cameras
sequence:
  base_name:
    - img/cam1_left.%d
    - img/cam1_right.%d
    - img/cam2_left.%d
    - img/cam2_right.%d
```