patient#
Entrypoint for the patient interface.
- class perceptivo.runtimes.patient.Patient(audio_config: perceptivo.types.sound.Audio_Config = Audio_Config(fs=44100), audiogram_model: Optional[perceptivo.types.psychophys.Psychoacoustic_Model] = None, picamera_params: Optional[perceptivo.types.video.Picamera_Params] = None, oracle: Optional[callable] = None, pupil_extractor: Optional[perceptivo.video.pupil.Pupil_Extractors] = None, pupil_extractor_params: Optional[perceptivo.video.pupil.EllipseExtractor_Params] = None, networking: Optional[perceptivo.types.networking.Patient_Networking] = None, prefs_file: pathlib.Path = PosixPath('/home/docs/.perceptivo/prefs.json'), **kwargs)#
Bases:
perceptivo.runtimes.runtime.Runtime
Runtime agent for the patient-facing Pi (see SoftwareOverview).
Runs the:

- Sound Server
- PiCamera
- Processing stages, including pupil extraction, the psychoacoustic model, and the stimulus manager
On initialization, boot the sound server and rehydrate the psychoacoustic model from the parameterization passed in audiogram_model. The patient runtime is parameterized by the perceptivo.prefs.Patient_Prefs object, which creates and reads from a prefs.json file (located at perceptivo.prefs.Directories.prefs_file). The basic operation of the Patient runtime is encapsulated in the trial() method; see that for further documentation.

- Parameters
audio_config (Jackd_Config) – Configuration used to boot the jackd server

audiogram_model (Psychoacoustic_Model) – Model parameterization used to model the audiogram as well as generate optimal stimuli to sample

oracle (callable) – Optional; if present, use an oracle to generate responses to stimuli rather than getting them from the pupil extraction method. Mostly for testing. Takes a function that accepts a Sound object and returns a boolean response, typically generated by functions in oracle like reference_audiogram()
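The oracle parameter can be exercised without real pupil data. A minimal sketch, using a stand-in dataclass in place of perceptivo.types.sound.Sound (the field names here are illustrative assumptions, not the real class definition):

```python
from dataclasses import dataclass

# Stand-in for perceptivo.types.sound.Sound; frequency/amplitude are
# assumed fields for illustration only.
@dataclass
class Sound:
    frequency: float  # Hz
    amplitude: float  # dB

def threshold_oracle(sound: Sound) -> bool:
    """Toy oracle: report 'heard' for any sound louder than 20 dB,
    standing in for the pupil-derived response during testing."""
    return sound.amplitude > 20
```

Passed as Patient(oracle=threshold_oracle, ...), trial responses would then come from this function instead of the pupil extraction pipeline.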
- prefs_class#
alias of perceptivo.prefs.Patient_Prefs
- trial() perceptivo.types.psychophys.Sample #
One complete loop through a probe cycle. In order:

- Check if a previous trial is still running using the _trial_active event; if so, return, logging an exception
- Clear the lists that collect pupil samples: _frames and _pupils
- Call next_sound() to parameterize the next sound, returning a types.sound.Sound object based on the output of the Audiogram_Model.next() method
- Call probe() to deliver the sound and collect the response. Within the probe method:
  - the Picamera_Process.collecting flag is set to indicate that it should dump frames into its queue
  - the sound is played with play_sound()
  - the await_response() method spawns a _collecting_thread, which calls _collect_frames() to pull frames from Picamera_Process.q and process them with pupil_extractor until the queue is empty. types.video.Frame and types.pupil.Pupil objects are appended to the _frames and _pupils collectors
  - once the thread finishes, the picamera’s collection event is cleared, and the types.pupil.Pupil_Params, which set the threshold of dilation that constitutes a positive response to the sound, are updated with _update_pupil_params()
  - the Pupil_Params, Sound, and list of Pupil objects are collected into a Dilation object and returned
  - the probe() method then combines the Sound and Dilation objects into a Sample object, which is appended to the samples attribute
- Finally, the model is updated with update_model()

Stores the Samples in samples, which also include the parameterizations and timestamps of the presented sounds
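The trial loop described above can be condensed into a simplified control-flow sketch. Everything below is a stand-in (dict placeholders for Sound/Sample, an oracle in place of pupil extraction); only the Event-guarded structure mirrors the documented behavior, not the real implementation:

```python
import threading

class TrialSketch:
    """Simplified sketch of Patient.trial(); not perceptivo's real code."""

    def __init__(self, oracle):
        self._trial_active = threading.Event()
        self._frames = []   # would hold types.video.Frame objects
        self._pupils = []   # would hold types.pupil.Pupil objects
        self.samples = []
        self.oracle = oracle

    def next_sound(self):
        # placeholder for the Audiogram_Model.next()-driven parameterization
        return {"frequency": 1000.0, "amplitude": 30.0}

    def probe(self, sound):
        # the real probe() plays the sound, collects frames, and builds a
        # Dilation; here the oracle stands in for the pupil response
        return {"sound": sound, "response": self.oracle(sound)}

    def trial(self):
        if self._trial_active.is_set():
            return None            # a previous trial is still running
        self._trial_active.set()
        try:
            self._frames.clear()   # reset the pupil-sample collectors
            self._pupils.clear()
            sound = self.next_sound()
            sample = self.probe(sound)
            self.samples.append(sample)   # update_model() would run here
            return sample
        finally:
            self._trial_active.clear()
```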
- next_sound() perceptivo.types.sound.Sound #
Generate the next sound using the psychoacoustic model
- Returns
Sound to play
- probe(sound: perceptivo.types.sound.Sound) Optional[perceptivo.types.psychophys.Sample] #
One loop of:

- Presenting a sound stimulus
- Signaling to the other Pi to present a visual stimulus
- Estimating the Pupil Response
- play_sound(sound: perceptivo.types.sound.Sound) perceptivo.types.sound.Sound #
Play a parameterized sound
- Parameters
sound ()
- Returns
- await_response(sound: perceptivo.types.sound.Sound) Optional[perceptivo.types.pupil.Dilation] #
Wait until we are given a pupil from the picamera process
- Returns
Dilation if a pupil response was collected, otherwise None
- _collect_frames(start_time: datetime.datetime)#
Collect frames from the picamera for one sample
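The queue-draining loop that _collect_frames() performs can be illustrated with stdlib pieces. The queue, extractor function, and string frames below are placeholders for Picamera_Process.q, the pupil_extractor, and real camera frames:

```python
import queue
import threading

def collect_frames(q, extractor, frames, pupils, timeout=0.1):
    """Pull frames from the queue and extract pupils until it is empty,
    mirroring the documented _collect_frames() behavior (a sketch only)."""
    while True:
        try:
            frame = q.get(timeout=timeout)
        except queue.Empty:
            break                      # queue drained: stop collecting
        frames.append(frame)
        pupils.append(extractor(frame))

q = queue.Queue()
for i in range(3):
    q.put(f"frame-{i}")                # stand-ins for camera frames

frames, pupils = [], []
thread = threading.Thread(
    target=collect_frames,
    args=(q, lambda f: f.replace("frame", "pupil"), frames, pupils),
)
thread.start()
thread.join()
```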
- handle_message(message)#
Handle a message by calling some method according to its key attribute
- Parameters
message (bytes) – a serialized networking.messages.Message object
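Key-based dispatch of this kind can be sketched as below. The cb_-prefix lookup is an assumption inferred from the cb_start/cb_stop/cb_control method names, and the plain dict stands in for a deserialized networking.messages.Message:

```python
class MessageDispatcher:
    """Sketch of handle_message-style dispatch; not the real Runtime class."""

    def cb_start(self, value):
        return f"exam started with {value}"

    def cb_stop(self, value=None):
        return "exam stopped"

    def handle_message(self, message: dict):
        # real messages arrive serialized; here a dict with a 'key' field
        # stands in, and the handler name is assumed to be 'cb_' + key
        handler = getattr(self, f"cb_{message['key']}", None)
        if handler is None:
            raise ValueError(f"no handler for key {message['key']!r}")
        return handler(message.get("value"))
```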
- cb_control(control: Union[perceptivo.types.gui.GUI_Control, Dict[str, perceptivo.types.gui.GUI_Control]])#
Handle GUI Control messages.
- Parameters
control ()
Returns:
- cb_start(params: Union[perceptivo.types.exam.Exam_Params, Dict[str, perceptivo.types.exam.Exam_Params]])#
Start the exam!
- Parameters
params (types.exam.Exam_Params) – Parameters to run the exam!
- cb_stop(value=None)#
Stop the exam
- Parameters
value ()
Returns:
- _update_pupil_params(pupils: List[perceptivo.types.pupil.Pupil]) perceptivo.types.pupil.Pupil_Params #
- Parameters
pupils ()
Returns:
- _init_audio() Union[autopilot.stim.sound.jackclient.JackClient, soundcard.pulseaudio._Speaker] #
Start the jackd process, connect a client to it!
- Returns
autopilot.stim.sound.jackclient.JackClient – A booted jack client!
- perceptivo.runtimes.patient.patient_parser(manual_args: Optional[List[str]] = None) argparse.Namespace #
- perceptivo.runtimes.patient.main()#
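patient_parser accepts manual_args, so it can be driven programmatically as well as from the command line. A hedged sketch of that pattern matching the documented signature; the --prefs flag is hypothetical, since the real argument list is not documented here:

```python
import argparse
from typing import List, Optional

def patient_parser(manual_args: Optional[List[str]] = None) -> argparse.Namespace:
    """Sketch of an argparse entry point with patient_parser's signature.
    The arguments themselves are illustrative, not perceptivo's real flags."""
    parser = argparse.ArgumentParser(description="Run the patient runtime")
    parser.add_argument("--prefs", default=None,
                        help="path to a prefs.json file (hypothetical flag)")
    # passing manual_args lets tests drive the parser without sys.argv
    return parser.parse_args(manual_args)
```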