
Quickstart


Active-learning-driven adaptive experimentation in psychophysics

Installation | Documentation | Examples | Contributing


Quick-start walkthrough — fit your first covariance ellipse

The snippet below shows the minimal end-to-end workflow: simulate a handful of oddity-task trials at a single reference point, fit the WPPM with MAP optimization, and visualize the result. No GPU needed — runs in under 2 min on CPU.

The complete runnable script is quick_start.py. A step-by-step explanation lives in the Quick-start example.

Imports

import jax  # needed for PRNG keys below

from psyphy.data import TrialData  # batched trial container
from psyphy.inference import MAPOptimizer  # fitter
from psyphy.model import (
    WPPM,
    GaussianNoise,
    OddityTask,
    OddityTaskConfig,
    Prior,
    WPPMCovarianceField,  # fast Σ(x) evaluation
)

Compute settings

MC_SAMPLES = 50  # MC samples per trial in the likelihood (full example: 500)
NUM_TRIALS = 100  # total simulated trials (full example: 4000 × 25)
NUM_STEPS = 200  # optimizer steps (full example: 2000)

learning_rate = 5e-4  # full example: 5e-5; the smaller the lr, the more steps are required

Ground-truth model + simulate data

Ground-truth model
task = OddityTask(config=OddityTaskConfig(num_samples=int(MC_SAMPLES)))
noise = GaussianNoise(sigma=0.1)

# Set all Wishart process hyperparameters in Prior
truth_prior = Prior()
truth_model = WPPM(
    prior=truth_prior,
    likelihood=task,
    noise=noise,
)

# Sample ground-truth Wishart process weights
truth_params = truth_model.init_params(jax.random.PRNGKey(123))
Simulate data
# Simulate observed responses using the likelihood implied by the task
# (refs, comparisons, and k_sim are defined earlier in the full quick_start.py script)
ys, p_correct = task.simulate(truth_params, refs, comparisons, truth_model, key=k_sim)
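For a self-contained run, the `refs` and `comparisons` arrays used above can be sketched as below. The shapes and the offset scale are illustrative assumptions, not the psyphy API; the real construction lives in quick_start.py.

```python
import jax
import jax.numpy as jnp

NUM_TRIALS = 100
key = jax.random.PRNGKey(0)
k_sim, k_comp = jax.random.split(key)

# single 2-D reference point, repeated once per trial
ref = jnp.array([0.5, 0.5])
refs = jnp.tile(ref, (NUM_TRIALS, 1))  # shape (NUM_TRIALS, 2)

# comparisons: the reference plus small random offsets, one per trial
offsets = 0.1 * jax.random.normal(k_comp, (NUM_TRIALS, 2))
comparisons = refs + offsets  # shape (NUM_TRIALS, 2)
```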

Build model and fit

Model definition
prior = Prior()

model = WPPM(
    prior=prior,
    likelihood=task,
    noise=noise,  # we use the same Gaussian noise as for the ground truth
)
Fit with MAPOptimizer
inference = MAPOptimizer(
    steps=NUM_STEPS,
    learning_rate=learning_rate,
    track_history=True,
    log_every=1,
)

# data (a TrialData batch) and init_params are built earlier in the full script
map_estimate = inference.fit(model, data, init_params=init_params, seed=4)
# fit() returns a ParameterPosterior; here it is a MAP point estimate

# Optional, for visualization:
map_cov_field = WPPMCovarianceField(model, map_estimate.params)
# output: covariance matrices of shape (N, 2, 2), ready for plotting
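To turn one of the fitted 2×2 covariance matrices into an ellipse for plotting, a standard eigendecomposition sketch works. This is plain NumPy, not a psyphy API; the `n_std` scaling is a plotting convention.

```python
import numpy as np

def covariance_ellipse(cov, center=(0.0, 0.0), n_std=1.0, num_points=100):
    """Return points on the n_std ellipse of a 2x2 covariance matrix.

    Generic helper for plotting the (N, 2, 2) output of a covariance field;
    axes are scaled by sqrt(eigenvalues) and rotated into the eigenbasis.
    """
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    theta = np.linspace(0.0, 2.0 * np.pi, num_points)
    circle = np.stack([np.cos(theta), np.sin(theta)])  # unit circle, (2, N)
    ellipse = eigvecs @ (n_std * np.sqrt(eigvals)[:, None] * circle)
    return np.asarray(center)[:, None] + ellipse  # (2, num_points)
```

Usage with matplotlib: `pts = covariance_ellipse(cov_matrices[i]); plt.plot(*pts)`.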

Results

Covariance ellipses at the single reference point: ground truth (black), prior sample (blue), and MAP fit (red).

Learning curve

Negative log-likelihood over optimizer steps.
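With `track_history=True`, the optimizer records per-step losses. Assuming those values are available as a plain list (the attribute name is not shown on this page, so placeholder values are used below), the learning curve can be rendered with matplotlib:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for scripted runs
import matplotlib.pyplot as plt

# placeholder NLL values; in practice, use the history tracked by MAPOptimizer
nll_history = [120.0, 80.0, 55.0, 41.0, 33.0]

fig, ax = plt.subplots()
ax.plot(range(len(nll_history)), nll_history)
ax.set_xlabel("optimizer step")
ax.set_ylabel("negative log-likelihood")
ax.set_title("Learning curve")
fig.savefig("learning_curve.png")
```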