exam

Types for controlling the administration of the examination

pydantic model perceptivo.types.exam.Completion_Metric

Bases: perceptivo.types.root.PerceptivoType

A means of deciding whether the exam is completed or not

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

JSON schema:
{
   "title": "Completion_Metric",
   "description": "A means of deciding whether the exam is completed or not",
   "type": "object",
   "properties": {
      "log_likelihood": {
         "title": "Log Likelihood",
         "default": -70,
         "type": "number"
      },
      "n_trials": {
         "title": "N Trials",
         "type": "integer"
      },
      "duration": {
         "title": "Duration",
         "type": "number"
      },
      "use": {
         "title": "Use",
         "default": "any",
         "type": "string"
      }
   }
}

Config
  • json_encoders: dict = {numpy.ndarray: pack_array, datetime.datetime: PerceptivoType.Config.<lambda>}

  • underscore_attrs_are_private: bool = True

Fields
field log_likelihood: Optional[float] = -70

End exam when log likelihood of model is below this value

field n_trials: Optional[int] = None

End exam after n trials

field duration: Optional[float] = None

End exam after this many minutes

field use: str = 'any'

Name of which (non-None) metric to use. The default, 'any', ends the exam if any of the criteria are met
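
As a minimal usage sketch (the surrounding exam loop is assumed and not part of this module), the model is constructed with keyword arguments like any pydantic model, and input that cannot be coerced to the declared types raises ValidationError:

from pydantic import ValidationError

from perceptivo.types.exam import Completion_Metric

# End the exam after 50 trials or 10 minutes, whichever is reached first
metric = Completion_Metric(n_trials=50, duration=10.0, use='any')
print(metric.log_likelihood)  # -70, the default

# Values that cannot be coerced to the declared types raise ValidationError
try:
    Completion_Metric(n_trials='not a number')
except ValidationError as e:
    print(e)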

pydantic model perceptivo.types.exam.Exam_Params

Bases: perceptivo.types.root.PerceptivoType

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

JSON schema:
{
   "title": "Exam_Params",
   "type": "object",
   "properties": {
      "frequencies": {
         "title": "Frequencies",
         "type": "array",
         "minItems": 1,
         "maxItems": 1,
         "items": [
            {
               "type": "number"
            }
         ]
      },
      "amplitudes": {
         "title": "Amplitudes",
         "type": "array",
         "minItems": 1,
         "maxItems": 1,
         "items": [
            {
               "type": "number"
            }
         ]
      },
      "iti": {
         "title": "Iti",
         "type": "number"
      },
      "iti_jitter": {
         "title": "Iti Jitter",
         "default": 0.1,
         "type": "number"
      },
      "completion_metric": {
         "title": "Completion Metric",
         "default": {
            "log_likelihood": -70,
            "n_trials": null,
            "duration": null,
            "use": "any"
         },
         "allOf": [
            {
               "$ref": "#/definitions/Completion_Metric"
            }
         ]
      },
      "allow_repeats": {
         "title": "Allow Repeats",
         "default": false,
         "type": "boolean"
      }
   },
   "required": [
      "frequencies",
      "amplitudes",
      "iti"
   ],
   "definitions": {
      "Completion_Metric": {
         "title": "Completion_Metric",
         "description": "A means of deciding whether the exam is completed or not",
         "type": "object",
         "properties": {
            "log_likelihood": {
               "title": "Log Likelihood",
               "default": -70,
               "type": "number"
            },
            "n_trials": {
               "title": "N Trials",
               "type": "integer"
            },
            "duration": {
               "title": "Duration",
               "type": "number"
            },
            "use": {
               "title": "Use",
               "default": "any",
               "type": "string"
            }
         }
      }
   }
}

Config
  • json_encoders: dict = {numpy.ndarray: pack_array, datetime.datetime: PerceptivoType.Config.<lambda>}

  • underscore_attrs_are_private: bool = True

Fields
field frequencies: Tuple[float] [Required]

Frequencies (Hz) to test in exam

field amplitudes: Tuple[float] [Required]

Amplitudes (dB SPL) to test in exam

field iti: float [Required]

Seconds between each trial

field iti_jitter: float = 0.1

Amount to jitter trials as a proportion of the ITI (e.g. 0.1 with an ITI of 5s gives a maximum of 0.5s of jitter)

field completion_metric: perceptivo.types.exam.Completion_Metric = Completion_Metric(log_likelihood=-70, n_trials=None, duration=None, use='any')

Metric that decides when the exam is over.

field allow_repeats: bool = False

Allow repeated sounds
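
As a construction sketch (the values are illustrative, not recommended test parameters): note that the Tuple[float] annotations validate as single-element tuples, matching the minItems/maxItems of 1 in the schema above, and the optional fields fall back to their defaults:

from perceptivo.types.exam import Exam_Params

params = Exam_Params(
    frequencies=(1000.0,),  # Hz; Tuple[float] accepts exactly one element
    amplitudes=(60.0,),     # dB SPL
    iti=5.0,                # seconds between trials
)

# iti_jitter defaults to 0.1: maximum jitter = 5.0 * 0.1 = 0.5 s
# completion_metric falls back to its default instance
print(params.completion_metric.use)  # 'any'
print(params.allow_repeats)          # False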