processors#
Individual transformation operations for video frames.
To be used with the perceptivo.video.pupil.PupilExtractor subclasses.
- class perceptivo.video.processors.Processor(**kwargs)#
Bases:
perceptivo.root.Perceptivo_Object
Individual processing stage; stages can be added together via the __add__ method (based on autopilot.transform.transforms.Transform) to make a processing chain.
- property parent: Optional[perceptivo.video.processors.Processor]#
If this Transform is in a chain of transforms, the transform that precedes it.
- Returns
Transform, or None if no parent.
- abstract process(input: Union[perceptivo.types.video.Frame, perceptivo.types.units.Ellipse]) Union[perceptivo.types.video.Frame, perceptivo.types.units.Ellipse] #
Process a frame!
Typically you want a chain of processors to end up outputting an Ellipse, but this is not enforced.
- __add__(other)#
Add another Transformation to the chain to make a processing pipeline.
- Parameters
other (Transformation) – The transformation to be chained
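The chaining behavior described above can be sketched in plain Python. The Chain helper and the Double/AddOne stages below are hypothetical stand-ins for illustration, not perceptivo or autopilot classes:

```python
class Processor:
    """Single processing stage; stages combine with `+` into a pipeline."""

    def process(self, value):
        raise NotImplementedError

    def __add__(self, other):
        # Combining two processors yields a chain that runs them in order
        return Chain(self, other)


class Chain(Processor):
    def __init__(self, *stages):
        self.stages = stages

    def process(self, value):
        # Feed the output of each stage into the next
        for stage in self.stages:
            value = stage.process(value)
        return value


class Double(Processor):
    def process(self, value):
        return value * 2


class AddOne(Processor):
    def process(self, value):
        return value + 1


pipeline = Double() + AddOne()
print(pipeline.process(10))  # -> 21
```

Because Chain is itself a Processor, longer pipelines like `Double() + AddOne() + Double()` compose the same way.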
- class perceptivo.video.processors.Canny(blur_sigma: float = 1, low_thresh: float = 0.2, high_thresh: float = 0.5)#
Bases:
perceptivo.video.processors.Processor
Canny edge detection.
Slight modification of skimage.feature.canny(), but using OpenCV and a Scharr kernel rather than Sobel for better orientation invariance, and using the eigenvectors of the structure tensor rather than the simple hypotenuse. The source for this class is really blippy because it is optimized for speed!
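The two ideas named above, Scharr gradients and the structure tensor's eigenvectors, can be illustrated with a numpy-only sketch; this is an illustration of the technique, not the class's optimized implementation, and the helper names are assumptions:

```python
import numpy as np

# Scharr kernels: better rotational symmetry than Sobel
SCHARR_X = np.array([[-3, 0, 3],
                     [-10, 0, 10],
                     [-3, 0, 3]], dtype=float)
SCHARR_Y = SCHARR_X.T


def convolve2d(img, kernel):
    """Tiny 'valid'-mode 2D convolution, enough for a demo."""
    kh, kw = kernel.shape
    flipped = kernel[::-1, ::-1]
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * flipped)
    return out


def structure_tensor_orientation(img):
    """Dominant gradient direction from the structure tensor's eigenvectors."""
    gx = convolve2d(img, SCHARR_X)
    gy = convolve2d(img, SCHARR_Y)
    # Structure tensor components, pooled over the whole patch
    tensor = np.array([[(gx * gx).sum(), (gx * gy).sum()],
                       [(gx * gy).sum(), (gy * gy).sum()]])
    vals, vecs = np.linalg.eigh(tensor)
    # Eigenvector of the largest eigenvalue = dominant gradient orientation
    return vecs[:, np.argmax(vals)]
```

For a vertical step edge the returned eigenvector points along x, which is the orientation information the hypotenuse (plain gradient magnitude) discards.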
- process(frame: perceptivo.types.video.Frame) perceptivo.types.video.Frame #
Process a frame!
Typically you want a chain of processors to end up outputting an Ellipse, but this is not enforced.
- class perceptivo.video.processors.Haar_Tracker(tracker_type: str = 'eye', min_neighbors: int = 20, adaptive_neighbors: bool = True, **kwargs)#
Bases:
perceptivo.video.processors.Processor
Download and use a Haar cascade to track.
Many trained cascade .xml files are available at https://github.com/opencv/opencv/tree/master/data/haarcascades
- XML_URLS = {'eye': 'https://raw.githubusercontent.com/opencv/opencv/415a42f327104653604fc99314eb215cd938d6d7/data/haarcascades/haarcascade_eye.xml', 'face_default': 'https://raw.githubusercontent.com/opencv/opencv/415a42f327104653604fc99314eb215cd938d6d7/data/haarcascades/haarcascade_frontalface_default.xml'}#
- property filename: pathlib.Path#
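A sketch of how the filename property might derive a local cache path from the XML_URLS entries above, using only the stdlib; the cache directory and the cascade_path helper are assumptions for illustration, not the class's actual behavior:

```python
from pathlib import Path
from urllib.parse import urlparse

# Same URLs as Haar_Tracker.XML_URLS above
XML_URLS = {
    'eye': 'https://raw.githubusercontent.com/opencv/opencv/415a42f327104653604fc99314eb215cd938d6d7/data/haarcascades/haarcascade_eye.xml',
    'face_default': 'https://raw.githubusercontent.com/opencv/opencv/415a42f327104653604fc99314eb215cd938d6d7/data/haarcascades/haarcascade_frontalface_default.xml',
}


def cascade_path(tracker_type: str,
                 cache_dir: Path = Path('/tmp/haarcascades')) -> Path:
    """Local path where the cascade xml would live after download."""
    url = XML_URLS[tracker_type]
    # Take the filename component of the URL path
    return cache_dir / Path(urlparse(url).path).name

# Once downloaded (e.g. urllib.request.urlretrieve(url, path)), the file
# can be loaded with cv2.CascadeClassifier(str(path)) and applied per
# frame with detectMultiScale(gray_frame, minNeighbors=20).
```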
- class perceptivo.video.processors.Filtered_Hough(radii=(15, 30, 100), max_considered=3, peaks_kwargs: Optional[dict] = None)#
Bases:
perceptivo.video.processors.Processor
A Hough transform to detect circles, returning the one that bounds the darkest area in the image.
- process(edges: numpy.ndarray)#
Frame to process, along with edges from Canny edge detection.
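The "darkest area" selection step can be illustrated with plain numpy; the darkest_circle function and its (cx, cy, r) circle representation are assumptions for the sketch, not the class's API:

```python
import numpy as np


def darkest_circle(image, circles):
    """Return the (cx, cy, r) candidate whose interior has the lowest
    mean intensity -- a pupil is typically the darkest region."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    best, best_mean = None, np.inf
    for cx, cy, r in circles:
        # Boolean mask of pixels inside this candidate circle
        mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
        if not mask.any():
            continue
        mean = image[mask].mean()
        if mean < best_mean:
            best, best_mean = (cx, cy, r), mean
    return best
```

In the real pipeline the candidates would come from Hough circle peaks over the Canny edge map; here they are passed in directly.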
- class perceptivo.video.processors.Filter_Circles(prior_bias=0.5)#
Bases:
perceptivo.video.processors.Processor
Filter Circles!
- Parameters
prior_bias (float) – how strongly to weight the similarity to the prior circle, if given
- __init__(prior_bias=0.5)#
- Parameters
prior_bias (float) – how strongly to weight the similarity to the prior circle, if given
- process(frame: perceptivo.types.video.Frame, circles, prev_eye=None)#
Process a frame!
Typically you want a chain of processors to end up outputting an Ellipse, but this is not enforced.
- perceptivo.video.processors.circle_to_mask(frame, ix, iy, rad)#
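The signature suggests a helper that converts a circle (center ix, iy; radius rad) into a boolean mask over the frame. A minimal sketch under that assumption:

```python
import numpy as np


def circle_to_mask(frame, ix, iy, rad):
    """Boolean mask, True for pixels inside the circle centered at (ix, iy)."""
    yy, xx = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
    # Pixel is inside if its squared distance to the center is <= rad**2
    return (xx - ix) ** 2 + (yy - iy) ** 2 <= rad ** 2
```

Such a mask can then index the frame directly, e.g. `frame[mask].mean()` for the mean intensity inside the circle.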