exam#

Types for controlling the administration of the examination

class perceptivo.types.exam.Completion_Metric(*, log_likelihood: float = -70, n_trials: int = None, duration: float = None, use: str = 'any')#

Bases: perceptivo.types.root.PerceptivoType

A criterion for deciding whether the exam is complete

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

log_likelihood: Optional[float]#

End the exam when the log likelihood of the model falls below this value

n_trials: Optional[int]#

End exam after n trials

duration: Optional[float]#

End exam after this many minutes

use: str#

Name of the (non-None) metric to use. The default, 'any', ends the exam when any of the criteria are met
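The selection behavior described above can be sketched in plain Python. This is an illustrative helper, not part of the perceptivo API; the threshold comparisons (log likelihood strictly below the cutoff, trial and minute counts reaching theirs) are assumptions based on the field docstrings:

```python
from typing import Optional

def exam_complete(*, current_ll: float, trials_done: int, minutes_elapsed: float,
                  log_likelihood: Optional[float] = -70,
                  n_trials: Optional[int] = None,
                  duration: Optional[float] = None,
                  use: str = 'any') -> bool:
    """Decide completion as Completion_Metric describes: evaluate only the
    non-None criteria, and with use='any' finish when any one is met."""
    checks = {}
    if log_likelihood is not None:
        checks['log_likelihood'] = current_ll < log_likelihood
    if n_trials is not None:
        checks['n_trials'] = trials_done >= n_trials
    if duration is not None:
        checks['duration'] = minutes_elapsed >= duration
    if use == 'any':
        return any(checks.values())
    # otherwise consult only the named metric
    return checks.get(use, False)
```

With the defaults, only the log-likelihood criterion is active, so `exam_complete(current_ll=-80.0, trials_done=0, minutes_elapsed=0.0)` ends the exam while `current_ll=-50.0` does not.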

class perceptivo.types.exam.Exam_Params(*, frequencies: Tuple[float], amplitudes: Tuple[float], iti: float, iti_jitter: float = 0.1, completion_metric: perceptivo.types.exam.Completion_Metric = Completion_Metric(log_likelihood=-70, n_trials=None, duration=None, use='any'), allow_repeats: bool = False)#

Bases: perceptivo.types.root.PerceptivoType

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

frequencies: Tuple[float]#

Frequencies (Hz) to test in exam

amplitudes: Tuple[float]#

Amplitudes (dB SPL) to test in exam

iti: float#

Seconds between each trial

iti_jitter: float#

Amount to jitter trials, as a proportion of the ITI (e.g., 0.1 with an ITI of 5 s gives a maximum of 0.5 s of jitter)

completion_metric: perceptivo.types.exam.Completion_Metric#

Metric that decides when the exam is over.

allow_repeats: bool#

Allow repeated sounds
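Putting the two classes together, exam parameters might be constructed as below. This is a sketch assuming perceptivo is importable; the field values are illustrative, not recommended settings (the guarded import keeps the snippet self-contained when the package is absent):

```python
try:
    from perceptivo.types.exam import Completion_Metric, Exam_Params
except ImportError:  # perceptivo not installed; skip the demonstration
    Completion_Metric = Exam_Params = None

if Exam_Params is not None:
    # End after 100 trials rather than on the log-likelihood criterion
    params = Exam_Params(
        frequencies=(1000.0,),
        amplitudes=(60.0,),
        iti=5.0,
        iti_jitter=0.1,
        completion_metric=Completion_Metric(n_trials=100, use='n_trials'),
        allow_repeats=False,
    )
```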