def __init__(self, tModel_SG tModel=tModel_SG(), Params params=Params())
    The targetSG constructor.

def calibrate(self, mode="ImgDiff", *args)

def loadMod(self, filename)

def measure(self, I)
    Generate detection measurements from image input.

def saveMod(self, filename)

def update_th(self, val)

def __init__(self, appMod, fgIm)

def displayForeground_cv(self, ratio=None, window_name="Foreground")

def emptyState(self)
    Return empty state.

def getBackground(self)

def getForeGround(self)

def getState(self)
    Return current/latest state.

def __init__(self, processor=None)

def info(self)
    Provide information about the current class implementation.

def __init__(self)
    Instantiate a detector Base activity class object.

def adapt(self)
    Adapt any internal parameters based on activity state, signal, and any other historical information.

def correct(self)
    Reconcile prediction and measurement as fitting.

def detect(self, signal)
    Run detection only processing pipeline (no adaptation).

def emptyDebug(self)
    Return empty debug state information.

def getDebug(self)
    Return current/latest debug state information.

def predict(self)
    Predict next state from current state.

def process(self, signal)
    Process the new incoming signal on full detection pipeline.

def save(self, fileName)
    Outer method for saving to a file given as a string.

def saveTo(self, fPtr)
    Empty method for saving internal information to HDF5 file.
Single-Gaussian based target detection with full covariance.
Interface for the target detection module based on single-Gaussian RGB color modeling of the target. Target object pixels are assumed to have similar RGB values, modeled by a single Gaussian distribution with full covariance. Test images are transformed, per the model, into a Gaussian with uncorrelated components by diagonalizing the covariance matrix. Foreground detection is done by thresholding each component independently in the transformed color space (tilt):

    |color_tilt_i - mu_tilt_i| < tau * cov_tilt_i,  for all i = 1, 2, 3

where tau is a threshold parameter and cov_tilt_i is the i-th diagonal entry of the diagonalized covariance.
The interface is adapted from the fgmodel/targetNeon.
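To make the thresholding rule concrete, here is a minimal NumPy sketch of per-pixel classification under the model above. The function name, argument names, and the explicit eigen-decomposition step are illustrative assumptions for this sketch, not attributes or methods of targetSG itself.

    import numpy as np

    def single_gaussian_foreground(img, mu, Sigma, tau):
        """Per-pixel target test in the decorrelated (tilt) color space.

        img   : (H, W, 3) float image.
        mu    : (3,) mean RGB of the target model.
        Sigma : (3, 3) full covariance of the target model.
        tau   : scalar threshold parameter.
        """
        # Diagonalize the covariance: Sigma = V @ diag(lam) @ V.T.
        lam, V = np.linalg.eigh(Sigma)

        # Map colors into the space where the model components are uncorrelated.
        diff_tilt = (img.reshape(-1, 3) - mu) @ V              # (H*W, 3)

        # Documented rule: |color_tilt_i - mu_tilt_i| < tau * cov_tilt_i for all i,
        # where cov_tilt_i is the i-th eigenvalue of the covariance.
        inside = np.abs(diff_tilt) < tau * lam
        return np.all(inside, axis=1).reshape(img.shape[:2])   # boolean foreground mask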
def get_fg_imgDiff(bgImg, fgImg, th)   [static]
Use the image difference to get the foreground mask.

Args:
    bgImg (np.ndarray, (H, W, 3)): The background image.
    fgImg (np.ndarray, (H, W, 3)): The foreground image.
    th (int): The threshold for the image difference map.

Returns:
    mask (np.ndarray, (H, W)): The foreground mask.
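As a point of reference, a minimal sketch of such an image-difference test is shown below. Collapsing the 3-channel difference with a per-pixel maximum is an assumption of this sketch; the actual get_fg_imgDiff may combine channels differently.

    import numpy as np

    def fg_by_image_difference(bgImg, fgImg, th):
        """Sketch of foreground masking by background/foreground image difference."""
        # Signed difference in a wider integer type to avoid uint8 wrap-around.
        diff = np.abs(fgImg.astype(np.int16) - bgImg.astype(np.int16))

        # Collapse the 3 channels to one difference value per pixel (assumed max).
        diff_map = diff.max(axis=2)

        # Pixels whose difference exceeds the threshold are foreground.
        return diff_map > th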
Generate detection measurements from image input.
The base method does not compute anything itself, but will apply image processing if an image processor is defined. In this manner, simple detection schemes may be implemented by passing the input image through the image processor.
Reimplemented from inImage.
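A minimal sketch of the pass-through behavior described above, assuming the processor is stored in a `processor` attribute exposing an `apply()` method and the result is kept in `Ip`; all three names are illustrative assumptions, not the library's documented API.

    class MeasureSketch:
        """Stand-in illustrating the base-class measure() behavior described above."""

        def __init__(self, processor=None):
            self.processor = processor   # optional image processor (assumed interface)
            self.Ip = None               # latest processed image / measurement

        def measure(self, I):
            # No real computation: just run the image through the processor if one
            # is defined, which is how simple detection schemes can be realized.
            self.Ip = self.processor.apply(I) if self.processor is not None else I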