
Mouse Visual Operant Task


This task uses Psychophysics Toolbox (a MATLAB package) to deliver visual grating stimuli, and the OM1 shield on an Arduino to measure the animal's response to each stimulus and deliver rewards or punishments.


The Mouse Visual Operant task uses the OM1 shield, which provides the following functions:

- 4-channel H-bridge to control up to 4 solenoid valves for delivering multiple liquid stimuli/rewards, or airpuff stimuli.

- inputs for either a lever press or two different types of lick sensors, supporting single-port or two-alternative forced choice tasks.

- output for a servo motor, to control access to the lever or lick ports.

- speaker connection for delivering an associated conditioned stimulus (CS), e.g. reward or punishment tones.


MATLAB, running on a PC laptop or desktop, runs Psychophysics Toolbox, which delivers a visual grating stimulus with variable orientation and contrast. The Arduino/OM1 shield is connected to the same computer via USB. After the visual stimulus on each trial, MATLAB sends a byte over the USB serial connection to tell the Arduino to start watching for an animal response (lick or lever press). After the choice and reward/punishment epoch completes, the Arduino sends a byte back to MATLAB reporting the trial outcome; MATLAB records the outcome in a text file and then cues the visual stimulus for the next trial.
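The trial handshake above can be sketched in a few lines. This is a hypothetical illustration, not the task code itself: the byte values (`'S'`, `'H'`, `'M'`), function names, and log format are all placeholders, and a stand-in serial class is used so the loop can be read without hardware (in practice, the PC side lives in MATLAB and would talk to a real serial port).

```python
# Hypothetical one-byte protocol between the PC and the Arduino/OM1 shield.
# The actual byte values used by the task code may differ; these are placeholders.
START_TRIAL = b'S'    # PC -> Arduino: stimulus done, open the response window
OUTCOME_HIT = b'H'    # Arduino -> PC: correct choice, reward delivered
OUTCOME_MISS = b'M'   # Arduino -> PC: incorrect choice or timeout, punishment

class FakeSerial:
    """Stand-in for a serial port so the loop runs without hardware."""
    def __init__(self, outcomes):
        self.outcomes = list(outcomes)
    def write(self, data):
        return len(data)                 # pretend the byte went out over USB
    def read(self, n=1):
        return self.outcomes.pop(0)      # pretend the Arduino replied

def run_trials(port, n_trials, logfile):
    """Run n_trials of the handshake, logging one outcome per line."""
    with open(logfile, 'w') as log:
        for trial in range(n_trials):
            # ...the visual grating stimulus would be presented here...
            port.write(START_TRIAL)      # cue the Arduino to watch for a response
            outcome = port.read(1)       # blocks until the choice epoch finishes
            log.write('trial %d: %s\n' % (trial, outcome.decode()))

port = FakeSerial([OUTCOME_HIT, OUTCOME_MISS, OUTCOME_HIT])
run_trials(port, 3, 'outcomes.txt')
```

Because each side only ever waits on a single byte, the PC and Arduino stay in lockstep without any shared clock, which is what makes this simple protocol robust.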


Here is the MATLAB code.


Here is the Arduino code.




NOTE on MATLAB/Psychophysics Toolbox:

As we have mentioned, although we love Psychophysics Toolbox, we don't love how much MATLAB costs, especially if you are trying to create an array of training boxes (one of the chief aims of our general approach). We are thus exploring the use of PsychoPy, a Python package that allows one to deliver visual stimuli much the way Psychophysics Toolbox does. Furthermore, in order to more easily scale up the number of training setups for large numbers of mice, we are also currently exploring the use of SoC (single-board) computers to deliver the visual stimuli. This would allow a researcher to have a small, self-contained behavioral setup for this type of task. Presently, the sticking point is running the full OpenGL stack that PsychoPy requires, which (as far as we can tell) is only possible on the UDOO board. We haven't gotten this to work yet, but we are exploring other options and configurations.
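For a sense of what the PsychoPy version of the stimulus might look like, here is a minimal sketch of a drifting-free grating with variable orientation and contrast, roughly analogous to the Psychophysics Toolbox stimulus described above. This is an assumption-laden sketch rather than our working code: the window size, spatial frequency, duration, and parameter values are placeholders, and it requires a machine with a display and working OpenGL.

```python
# Minimal PsychoPy grating sketch. All parameter values are placeholders;
# requires a display and working OpenGL, so it will not run headless.
from psychopy import visual, core

win = visual.Window(size=(800, 600), fullscr=False, units='pix', color='gray')
grating = visual.GratingStim(win, tex='sin', mask='gauss',
                             size=400, sf=0.02,      # cycles per pixel
                             ori=45, contrast=0.5)   # vary these per trial

for frame in range(120):   # draw for ~2 s at a 60 Hz refresh rate
    grating.draw()
    win.flip()

win.close()
core.quit()
```

In a real task loop, `ori` and `contrast` would be set per trial from the stimulus schedule before the drawing loop runs.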


If you have any thoughts on this please contact us at:




