pupil#

Process video frames and extract pupil estimates

Sketch of default strategy:

  • Track to find approximate position of eyes with processors.Haar_Tracker

  • Mask image around both eyes, split processing in parallel L/R (if present)

  • Use white of eyes to mask cornea and pupil

  • Sigmoid filter images to separate cornea and pupil

  • Blob detection to find center mass of pupil

  • Compare blob vs. edge detection of pupil.

  • Use Kalman filter on perceptivo.types.units.Ellipse properties to smooth estimates and avoid jumps
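
The sigmoid-filtering step in the sketch above can be illustrated with plain NumPy. `sigmoid_filter` is a hypothetical helper, not part of perceptivo — a logistic curve that pushes dark pupil pixels toward 0 and bright sclera toward 1:

```python
import numpy as np

def sigmoid_filter(img: np.ndarray, cutoff: float = 0.5, gain: float = 10.0) -> np.ndarray:
    """Logistic contrast curve: separate dark pupil/cornea from bright sclera."""
    img = img.astype(float)
    rng = img.max() - img.min()
    img = (img - img.min()) / (rng if rng else 1.0)  # normalize to [0, 1]
    return 1.0 / (1.0 + np.exp(gain * (cutoff - img)))

# Dark pixels (pupil) stay near 0, bright pixels (sclera) saturate toward 1
frame = np.array([[0.1, 0.9], [0.2, 0.8]])
filtered = sigmoid_filter(frame)
```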

class perceptivo.video.pupil.PupilExtractor(preprocessor: Optional[perceptivo.video.pupil.Preprocessor] = None, filter: Optional[perceptivo.video.pupil.PupilFilter] = None, **kwargs)#

Bases: perceptivo.root.Perceptivo_Object

Base class for pupil extraction strategies.

process(frame: perceptivo.types.video.Frame) Optional[perceptivo.types.pupil.Pupil]#

Call preprocess() and then _process(), returning a Pupil estimate.

Parameters

frame (types.video.Frame) – Frame to process

Returns

types.pupil.Pupil – Pupil estimate

abstract _process(frame: perceptivo.types.video.Frame) perceptivo.types.pupil.Pupil#

Given a frame, extract a pupil estimate

Parameters

frame (types.video.Frame) – Frame to process

Returns

Estimated pupil

Return type

types.pupil.Pupil
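
The preprocess-then-extract-then-filter flow of process() can be sketched as a template method. This is a simplified stand-in, not the actual implementation; the attribute names mirror the documented ones:

```python
class PupilExtractorSketch:
    """Simplified stand-in for PupilExtractor's process() flow."""

    def __init__(self, preprocessor=None, filter=None):
        self.preprocessor = preprocessor
        self.filter = filter

    def process(self, frame):
        # Optional preprocessing before strategy-specific extraction
        if self.preprocessor is not None:
            frame = self.preprocessor.process(frame)
        pupil = self._process(frame)
        # The filter (e.g. a Kalman filter) is applied last
        if self.filter is not None and pupil is not None:
            pupil = self.filter.process(pupil)
        return pupil

    def _process(self, frame):
        # Subclasses implement the actual extraction strategy
        raise NotImplementedError
```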

class perceptivo.video.pupil.PupilFilter(**kwargs)#

Bases: perceptivo.root.Perceptivo_Object

Base class for filtering pupil tracks – for example, by using a Kalman filter to remove erroneous pupil detections from a PupilExtractor.

PupilFilters should be given to the PupilExtractor as its filter argument, and should be called last in the PupilExtractor.process() method.

Each subclass should implement a _process method that takes and returns a Pupil object.

process(pupil: perceptivo.types.pupil.Pupil) perceptivo.types.pupil.Pupil#
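
A minimal subclass sketch, assuming a hypothetical Pupil stand-in with only a diameter attribute, and using exponential smoothing in place of a full Kalman filter:

```python
from dataclasses import dataclass

@dataclass
class PupilSketch:
    """Hypothetical stand-in for perceptivo.types.pupil.Pupil."""
    diameter: float

class SmoothingFilter:
    """Example PupilFilter-style subclass: exponentially smooth the diameter."""

    def __init__(self, alpha: float = 0.5):
        self.alpha = alpha          # weight of the new observation
        self.last_pupil = None      # most recent filtered pupil

    def _process(self, pupil: PupilSketch) -> PupilSketch:
        if self.last_pupil is not None:
            pupil.diameter = (self.alpha * pupil.diameter
                              + (1 - self.alpha) * self.last_pupil.diameter)
        self.last_pupil = pupil
        return pupil
```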
class perceptivo.video.pupil.Preprocessor(**kwargs)#

Bases: perceptivo.root.Perceptivo_Object

Base class for preprocessing images before they reach the main PupilExtractor.process() method.

Each subclass should implement a process method that takes and returns a Frame object.

abstract process(frame: perceptivo.types.video.Frame) perceptivo.types.video.Frame#
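
A minimal Preprocessor-style subclass sketch; it operates on a bare NumPy array here, whereas the real class takes and returns a Frame object:

```python
import numpy as np

class ContrastStretch:
    """Example Preprocessor-style subclass: rescale pixels to the full [0, 255] range."""

    def process(self, frame: np.ndarray) -> np.ndarray:
        frame = frame.astype(float)
        lo, hi = frame.min(), frame.max()
        if hi == lo:
            # Flat image: nothing to stretch
            return np.zeros_like(frame)
        return (frame - lo) / (hi - lo) * 255.0
```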
pydantic model perceptivo.video.pupil.EllipseExtractor_Params#

Bases: pydantic.main.BaseModel

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

Show JSON schema
{
   "title": "EllipseExtractor_Params",
   "type": "object",
   "properties": {
      "footprint_size": {
         "title": "Footprint Size",
         "default": 5,
         "type": "integer"
      },
      "search_scale": {
         "title": "Search Scale",
         "default": 1.5,
         "type": "number"
      }
   }
}

Fields
field footprint_size: int = 5#
field search_scale: float = 1.5#
class perceptivo.video.pupil.EllipseExtractor(footprint_size: int = 5, search_scale: float = 1.5, **kwargs)#

Bases: perceptivo.video.pupil.PupilExtractor

Very simple extractor that estimates an ellipse from the edges of a pupil. This extractor assumes that the Frame given to it has very high contrast, i.e. that the rest of the face is essentially white and the pupil and cornea are the only colored things in the image.


Parameters
  • footprint_size (int) – Diameter of the footprint (a skimage.morphology.disk()) used in the median filter skimage.filters.rank.median(). This should be roughly the size of the pupil.

  • search_scale (float) – If present, how much to scale the PupilExtractor.filter.last_pupil to select edges before fitting ellipses. E.g., 1.5 enlarges the last ellipse by 1.5 and rejects all edges outside of that radius.

  • **kwargs ()

__init__(footprint_size: int = 5, search_scale: float = 1.5, **kwargs)#
Parameters
  • footprint_size (int) – Diameter of the footprint (a skimage.morphology.disk()) used in the median filter skimage.filters.rank.median(). This should be roughly the size of the pupil.

  • search_scale (float) – If present, how much to scale the PupilExtractor.filter.last_pupil to select edges before fitting ellipses. E.g., 1.5 enlarges the last ellipse by 1.5 and rejects all edges outside of that radius.

  • **kwargs ()
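
The ellipse fit at the heart of this extractor can be sketched with skimage.measure.EllipseModel on synthetic edge points; the data here is illustrative only, not perceptivo code:

```python
import numpy as np
from skimage.measure import EllipseModel

# Synthetic "pupil edge": 100 points on a circle of radius 20 centered at (50, 50)
theta = np.linspace(0, 2 * np.pi, 100)
points = np.column_stack([50 + 20 * np.cos(theta), 50 + 20 * np.sin(theta)])

model = EllipseModel()
ok = model.estimate(points)       # least-squares ellipse fit; True on success
xc, yc, a, b, rot = model.params  # center, semi-axes, rotation
```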

property footprint_size: int#

Diameter of the footprint, as described in the attribute docstring. Setting a new footprint size remakes the footprint.

Returns

int

filter_edges(edges: numpy.ndarray) numpy.ndarray#

Set all edges outside of a search radius, given our previous ellipse, to zero
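
A hypothetical sketch of that masking step, taking the previous pupil's center and radius explicitly rather than reading them from the filter's last_pupil:

```python
import numpy as np

def filter_edges(edges: np.ndarray, center: tuple, radius: float,
                 scale: float = 1.5) -> np.ndarray:
    """Zero out edge pixels farther than scale * radius from the last pupil center."""
    yy, xx = np.indices(edges.shape)
    dist = np.sqrt((yy - center[0]) ** 2 + (xx - center[1]) ** 2)
    out = edges.copy()
    out[dist > radius * scale] = 0
    return out
```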

choose_ellipse(edges: numpy.ndarray, frame: numpy.ndarray) Optional[skimage.measure.fit.EllipseModel]#

Given an array of edge labels (from skimage.morphology.label()), usually from _process(), choose the candidate that is most likely the pupil and return it as a skimage.measure.EllipseModel.

Parameters
  • edges () – An array of image labels, i.e. an array of ints where the background == 0, the first edge == 1, and so on.

  • frame () – The original or filtered image frame (the array, not the Frame object)

Returns

the most pupil-like ellipse

Return type

skimage.measure.EllipseModel
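
The exact "most pupil-like" criterion is not specified here; given the high-contrast assumption, one plausible heuristic is to pick the labeled region whose pixels are darkest on average. A hypothetical sketch:

```python
import numpy as np

def darkest_region(frame: np.ndarray, labels: np.ndarray) -> int:
    """Return the label whose pixels are darkest on average — a proxy for 'most pupil-like'."""
    candidates = [l for l in np.unique(labels) if l != 0]  # skip background
    means = {l: frame[labels == l].mean() for l in candidates}
    return min(means, key=means.get)
```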

class perceptivo.video.pupil.EnsembleExtractor_NonIR(sigmoid=(0.5, 5), canny_kwargs: Optional[dict] = None, hough_kwargs: Optional[dict] = None, *args, **kwargs)#

Bases: perceptivo.video.pupil.PupilExtractor

Extractor that uses an ensemble of techniques to track a pupil.

Written before realizing the tracking problem was much easier using IR. Kept to mine for parts before being discontinued.

  • Track to find approximate position of eyes with processors.Haar_Tracker

  • Mask image around both eyes, split processing in parallel L/R (if present)

  • Use white of eyes to mask cornea and pupil

  • Sigmoid filter images to separate cornea and pupil

  • Blob detection to find center mass of pupil

  • Compare blob vs. edge detection of pupil.

  • Use Kalman filter on perceptivo.types.units.Ellipse properties to smooth estimates and avoid jumps

preprocess(frame: perceptivo.types.video.Frame) perceptivo.types.video.Frame#
_bbox_from_circle(circle: List[int])#

Convert OpenCV’s circles to a bounding box (top, bottom, left, right).
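
OpenCV's HoughCircles represents circles as (x, y, r); a hypothetical sketch of the conversion:

```python
def bbox_from_circle(circle):
    """(x, y, r) circle -> (top, bottom, left, right) bounding box."""
    x, y, r = circle
    return (y - r, y + r, x - r, x + r)
```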

class perceptivo.video.pupil.Pupil_Extractors(value)#

Bases: enum.Enum

An enumeration.

simple = <class 'perceptivo.video.pupil.EllipseExtractor'>#
perceptivo.video.pupil.get_extractor(extractor=<enum 'Pupil_Extractors'>) Type[perceptivo.video.pupil.EllipseExtractor]#
Parameters

extractor (str, Pupil_Extractors) – str corresponding to one of the entries in Pupil_Extractors, e.g. 'simple'

Returns

The extractor class corresponding to the given entry, e.g. EllipseExtractor for Pupil_Extractors.simple
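
A sketch of the name-or-enum lookup this function performs, using a stand-in enum; the real Pupil_Extractors maps names to extractor classes rather than strings:

```python
from enum import Enum

class ExtractorsSketch(Enum):
    """Stand-in for Pupil_Extractors; the real enum stores classes as values."""
    simple = "EllipseExtractor"

def get_extractor_sketch(extractor):
    # Accept either an enum member or its string name
    if isinstance(extractor, str):
        extractor = ExtractorsSketch[extractor]
    return extractor.value
```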