Detector, track pointer, and perceiver classes for hand tracking.
Follows the structure of the Puzzle Scene and Glove perceivers, which package everything into a single module file; Python supports selective imports, and keeping the classes in one uniform location simplifies things.
The code here is copied from the Glove tracker classes, but removes the color components and presumes that only depth information is available for identifying what lies above the work surface. Since the color segmentation step is removed, this method permits augmentation by a binary mask that provides additional context for which "hovering" pixels might be associated with a hand. The reason is that depth is not very precise, and finger/hand regions close to the surface do not register as "hovering."
When provided as an input, the optional mask is applied sequentially and cannot exploit parallel computation. Alternatively, a higher-level perceiver could generate the mask itself and run the two steps in pseudo-parallel.
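As a minimal sketch of the mask augmentation described above (the function name, signature, and threshold parameter are hypothetical, not the module's actual API), the union of the depth-based hover test with an externally supplied hand mask recovers hand pixels that sit too close to the surface to register as hovering:

```python
import numpy as np

def hover_mask(depth, surface_depth, tol=0.01, hand_mask=None):
    """Label pixels closer to the camera than the work surface as hovering.

    depth         : (H, W) depth image, in meters.
    surface_depth : (H, W) calibrated depth of the empty work surface.
    tol           : hover threshold in meters; depth noise means fingers
                    resting near the surface may fall below this.
    hand_mask     : optional (H, W) boolean mask from a separate hand
                    segmenter; the union recovers hand pixels that the
                    depth test alone misses.
    """
    hovering = (surface_depth - depth) > tol
    if hand_mask is not None:
        hovering = np.logical_or(hovering, hand_mask)
    return hovering
```

The mask is a pure post-detection union here, which matches the sequential application noted above; a parallel variant would compute `hand_mask` concurrently and merge afterwards.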
This file should contain:
- Hand layer detector from RGBD input.
- Hand trackpointer based on layered detector output.
- Perceiver that combines detector + trackpointer.
- A calibration scheme for the entire process, with saving to YAML and HDF5 files.
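The detector/trackpointer/perceiver composition listed above can be sketched as follows. All class and method names here are illustrative assumptions, not the module's actual interfaces; the calibration/YAML/HDF5 machinery is omitted:

```python
import numpy as np

class HandDetector:
    """Depth-layer detector sketch: flags pixels above the work surface."""
    def __init__(self, surface_depth, tol=0.01):
        self.surface_depth = surface_depth  # calibrated empty-surface depth
        self.tol = tol                      # hover threshold (meters)

    def detect(self, depth, hand_mask=None):
        layer = (self.surface_depth - depth) > self.tol
        if hand_mask is not None:           # optional segmentation mask
            layer = np.logical_or(layer, hand_mask)
        return layer

class HandTrackpointer:
    """Track-pointer sketch: reduces the detected layer to a centroid."""
    def measure(self, layer):
        ys, xs = np.nonzero(layer)
        if xs.size == 0:
            return None                     # no hand detected
        return (float(xs.mean()), float(ys.mean()))

class HandPerceiver:
    """Perceiver sketch: runs the detector, then the trackpointer."""
    def __init__(self, detector, tracker):
        self.detector = detector
        self.tracker = tracker

    def process(self, depth, hand_mask=None):
        layer = self.detector.detect(depth, hand_mask)
        return self.tracker.measure(layer)
```

Usage would be along the lines of `HandPerceiver(HandDetector(surface), HandTrackpointer()).process(depth)`, returning an (x, y) track point or None.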
Todo:
- Add a separate option to apply the mask after detection but before tracking?
- The code was rushed to completion; it needs a follow-up review and revision, especially regarding integration of hand segmentation/detection.