psyphy.model¶
Model-layer API: everything model-related in one place.
Includes
- WPPM (core model)
- Priors (Prior)
- Tasks (TaskLikelihood base, OddityTask)
- Noise models (GaussianNoise, StudentTNoise)
All functions/classes use JAX arrays (jax.numpy as jnp) for autodiff and optimization with Optax.
Classes:

| Name | Description |
|---|---|
| CovarianceField | Protocol for spatially-varying covariance fields Σ(x). |
| GaussianNoise | Gaussian noise model. |
| Model | Abstract base class for psychophysical models. |
| OddityTask | Three-alternative forced-choice oddity task (MC-based only). |
| OddityTaskConfig | Configuration for OddityTask. |
| Prior | Prior distribution over WPPM parameters. |
| StudentTNoise | Student-t noise model. |
| TaskLikelihood | Abstract base class for task likelihoods. |
| WPPM | Wishart Process Psychophysical Model (WPPM). |
| WPPMCovarianceField | Covariance field for WPPM with Wishart process. |
CovarianceField
¶
Bases: Protocol
Protocol for spatially-varying covariance fields Σ(x).
A covariance field maps stimulus locations x ∈ R^d to covariance matrices Σ(x) ∈ R^{dxd}.
Methods:

| Name | Description |
|---|---|
| __call__ | Evaluate the field at one or more locations; supports single points and arbitrary batch dimensions. |
| cov | Evaluate Σ(x) at stimulus location x (deprecated; use __call__). |
| sqrt_cov | Evaluate U(x) such that Σ(x) = U(x) @ U(x)^T + λI. |
| cov_batch | Vectorized evaluation at multiple locations (deprecated; use __call__). |
Notes
This protocol enables polymorphic use of covariance fields from different sources (prior samples, fitted posteriors, custom parameterizations).
The field is callable for mathematical elegance and JAX compatibility: Sigma = field(x) # Single point or batch
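Since a protocol only constrains structure, any callable object with the right signature satisfies it. Below is a minimal sketch of the idea; the names `Field` and `ConstantField` are illustrative and not part of psyphy, and NumPy stands in for the library's `jax.numpy` to keep the sketch dependency-light.

```python
from typing import Protocol
import numpy as np  # psyphy itself uses jax.numpy; NumPy keeps this sketch self-contained

class Field(Protocol):
    def __call__(self, x: np.ndarray) -> np.ndarray: ...

class ConstantField:
    """Trivial covariance field: the same Sigma at every location."""
    def __init__(self, sigma: np.ndarray) -> None:
        self.sigma = sigma

    def __call__(self, x: np.ndarray) -> np.ndarray:
        # Single point (d,) -> (d, d); batched input (..., d) -> (..., d, d).
        if x.ndim == 1:
            return self.sigma
        return np.broadcast_to(self.sigma, x.shape[:-1] + self.sigma.shape)

field: Field = ConstantField(np.eye(2))
Sigma = field(np.array([0.5, 0.5]))    # shape (2, 2)
Sigmas = field(np.zeros((10, 2)))      # shape (10, 2, 2)
```

Because the protocol is purely structural, a prior sample, a fitted posterior, or a hand-rolled parameterization can all be consumed by the same downstream code.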
GaussianNoise
¶
GaussianNoise(sigma: float = 1.0)
Model
¶
Bases: ABC
Abstract base class for psychophysical models.
Subclasses must implement:

- init_params(key) -> sample initial parameters (from the Prior)
- log_likelihood_from_data(params, data) -> compute the log-likelihood
Methods:

| Name | Description |
|---|---|
| init_params | Sample initial parameters from prior. |
| log_likelihood_from_data | Compute log p(data \| params). |
init_params
¶
Sample initial parameters from prior.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| key | KeyArray | PRNG key | required |

Returns:

| Type | Description |
|---|---|
| dict | Parameter PyTree |
Source code in src/psyphy/model/base.py
log_likelihood_from_data
¶
log_likelihood_from_data(params: dict, data: ResponseData) -> ndarray
Compute log p(data | params).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Model parameters | required |
| data | ResponseData | Observed trials | required |

Returns:

| Type | Description |
|---|---|
| ndarray | Log-likelihood (scalar) |
Source code in src/psyphy/model/base.py
OddityTask
¶
OddityTask(config: OddityTaskConfig | None = None)
Bases: TaskLikelihood
Three-alternative forced-choice oddity task (MC-based only).
Implements the full 3-stimulus oddity task using Monte Carlo simulation:

- Samples three internal representations per trial (z0, z1, z2)
- Uses the proper oddity decision rule with three pairwise distances
- Suitable for complex covariance structures
Notes
MC simulation in loglik() (full 3-stimulus oddity):

1. Sample three internal representations: z_ref, z_refprime ~ N(ref, Σ_ref) and z_comparison ~ N(comparison, Σ_comparison)
2. Compute the average covariance: Σ_avg = (2/3) Σ_ref + (1/3) Σ_comparison
3. Compute three pairwise Mahalanobis distances:
    - d^2(z_ref, z_refprime): distance between the two reference samples
    - d^2(z_ref, z_comparison): distance from reference to comparison
    - d^2(z_refprime, z_comparison): distance from reference_prime to comparison
4. Apply the oddity decision rule: delta = min(d^2(z_ref, z_comparison), d^2(z_refprime, z_comparison)) - d^2(z_ref, z_refprime)
5. Logistic smoothing: P(correct) ≈ logistic.cdf(delta / bandwidth)
6. Average over samples
Examples:
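The six MC steps above can be sketched end to end. This is an illustrative NumPy re-implementation, not psyphy's own code: the function name, the fixed covariance inputs, and the default `num_samples`/`bandwidth` values are all assumptions for the sketch.

```python
import numpy as np

def oddity_p_correct(ref, comparison, sigma_ref, sigma_cmp,
                     num_samples=1000, bandwidth=0.1, seed=0):
    """MC estimate of P(correct) for the 3-stimulus oddity task (steps 1-6 above)."""
    rng = np.random.default_rng(seed)
    # 1. Sample the three internal representations per trial.
    z_ref = rng.multivariate_normal(ref, sigma_ref, size=num_samples)
    z_refprime = rng.multivariate_normal(ref, sigma_ref, size=num_samples)
    z_cmp = rng.multivariate_normal(comparison, sigma_cmp, size=num_samples)
    # 2. Average covariance used as the Mahalanobis metric.
    sigma_avg = (2.0 / 3.0) * sigma_ref + (1.0 / 3.0) * sigma_cmp
    inv = np.linalg.inv(sigma_avg)

    def d2(a, b):
        diff = a - b
        return np.einsum("ni,ij,nj->n", diff, inv, diff)

    # 3. Three pairwise squared Mahalanobis distances.
    d_rr = d2(z_ref, z_refprime)
    d_rc = d2(z_ref, z_cmp)
    d_pc = d2(z_refprime, z_cmp)
    # 4. Oddity decision rule.
    delta = np.minimum(d_rc, d_pc) - d_rr
    # 5.-6. Logistic smoothing, averaged over samples (clipped for numerical safety).
    z = np.clip(delta / bandwidth, -500.0, 500.0)
    return float(np.mean(1.0 / (1.0 + np.exp(-z))))

p = oddity_p_correct(np.zeros(2), np.array([1.0, 0.0]),
                     0.05 * np.eye(2), 0.05 * np.eye(2))
```

With a comparison far from the reference relative to the noise scale, the estimate lands near 1; as the comparison approaches the reference, it falls toward the 1/3 chance level.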
Methods:

| Name | Description |
|---|---|
| loglik | Compute Bernoulli log-likelihood over a batch of trials. |
| predict | Return p(correct) for a single (ref, comparison) trial via MC simulation. |
| simulate | Simulate observed binary responses for a batch of trials. |

Attributes:

| Name | Type | Description |
|---|---|---|
| config | | |
Source code in src/psyphy/model/likelihood.py
loglik
¶
Compute Bernoulli log-likelihood over a batch of trials.
This is a concrete base-class method: it vmaps predict over trials
then applies the Bernoulli log-likelihood formula. Subclasses only need
to implement predict.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | Any | Model parameters. | required |
| data | Any | Object with refs, comparisons, and responses arrays. | required |
| model | Any | Model instance. | required |
| key | KeyArray | PRNG key, passed as independent per-trial subkeys to predict. | None |

Returns:

| Type | Description |
|---|---|
| ndarray | Scalar sum of Bernoulli log-likelihoods over all trials. |
Source code in src/psyphy/model/likelihood.py
predict
¶
Return p(correct) for a single (ref, comparison) trial via MC simulation.
MC controls (num_samples, bandwidth) are read from OddityTaskConfig. Pass key to control randomness; when None, config.default_key_seed is used.
Source code in src/psyphy/model/likelihood.py
simulate
¶
simulate(params: Any, refs: ndarray, comparisons: ndarray, model: Any, *, key: Any) -> tuple[ndarray, ndarray]
Simulate observed binary responses for a batch of trials.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | Any | Model parameters. | required |
| refs | ndarray, shape (n_trials, input_dim) | Reference stimuli. | required |
| comparisons | ndarray, shape (n_trials, input_dim) | Comparison stimuli. | required |
| model | Any | Model instance. | required |
| key | KeyArray | PRNG key (required; split internally for prediction and sampling). | required |

Returns:

| Name | Type | Description |
|---|---|---|
| responses | jnp.ndarray, shape (n_trials,), dtype int32 | Simulated binary responses (1 = correct, 0 = incorrect). |
| p_correct | ndarray, shape (n_trials,) | Estimated P(correct) per trial, used to draw the responses. |
Source code in src/psyphy/model/likelihood.py
OddityTaskConfig
¶
Configuration for OddityTask.
This is the single source of truth for MC likelihood controls.
Attributes:

| Name | Type | Description |
|---|---|---|
| num_samples | int | Number of Monte Carlo samples per trial. |
| bandwidth | float | Logistic CDF smoothing bandwidth. |
| default_key_seed | int | Seed used when no key is provided (keeps behavior deterministic by default while allowing reproducibility control upstream). |
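Such a config is naturally expressed as a frozen dataclass. The field names below come from the attribute table above; the default values are illustrative assumptions, not psyphy's actual defaults.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OddityTaskConfig:
    """Single source of truth for MC likelihood controls (defaults illustrative)."""
    num_samples: int = 1000      # Monte Carlo samples per trial
    bandwidth: float = 0.1       # logistic CDF smoothing bandwidth
    default_key_seed: int = 0    # fallback seed when no PRNG key is supplied

config = OddityTaskConfig(num_samples=2000)
```

Freezing the dataclass keeps the MC controls immutable once a task is constructed, so likelihood evaluations stay reproducible.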
Prior
¶
Prior(input_dim: int = 2, basis_degree: int = 4, variance_scale: float = 0.004, decay_rate: float = 0.4, extra_embedding_dims: int = 1)
Prior distribution over WPPM parameters
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input_dim | int | Dimensionality of the model space (same as WPPM.input_dim). | 2 |
| basis_degree | int \| None | Degree of the Chebyshev basis for the Wishart process. If set, uses Wishart mode with W coefficients. | 4 |
| variance_scale | float | Prior variance for the degree-0 (constant) coefficient in Wishart mode. Controls the overall scale of covariances. | 0.004 |
| decay_rate | float | Geometric decay rate for prior variance over higher-degree coefficients: prior variance for the degree-d coefficient = variance_scale * decay_rate^d. Smaller decay_rate -> stronger smoothness prior. | 0.4 |
| extra_embedding_dims | int | Additional latent dimensions in U matrices beyond the input dimensions. Allows richer ellipsoid shapes in Wishart mode. | 1 |
Methods:

| Name | Description |
|---|---|
| log_prob | Compute log prior density (up to a constant). |
| sample_params | Sample initial parameters from the prior. |

Attributes:

| Name | Type | Description |
|---|---|---|
| basis_degree | int | |
| decay_rate | float | |
| extra_embedding_dims | int | |
| input_dim | int | |
| variance_scale | float | |
log_prob
¶
log_prob(params: Params) -> ndarray
Compute log prior density (up to a constant).

Gaussian prior on W with smoothness via decay_rate:

log p(W) = Σ_ij log N(W_ij | 0, σ_ij^2), where σ_ij^2 is the prior variance for coefficient (i, j).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Parameter dictionary | required |

Returns:

| Name | Type | Description |
|---|---|---|
| log_prob | float | Log prior probability (up to normalizing constant) |
Source code in src/psyphy/model/prior.py
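The degree-weighted Gaussian prior can be sketched directly from the formula above. This is an illustrative NumPy version, not psyphy's implementation; in particular it assumes the degree of coefficient (i, j) is i + j, a convention the actual `prior.py` may define differently.

```python
import numpy as np

def log_prob_W(W, variance_scale=0.004, decay_rate=0.4):
    """Gaussian log prior over Chebyshev coefficients, up to a constant.

    Assumes coefficient (i, j, ...) has degree i + j (illustrative convention).
    """
    deg_i, deg_j = W.shape[0], W.shape[1]
    i, j = np.meshgrid(np.arange(deg_i), np.arange(deg_j), indexing="ij")
    var = variance_scale * decay_rate ** (i + j)   # sigma_ij^2, geometric decay
    var = var[:, :, None, None]                    # broadcast over (input_dim, embedding_dim)
    # log N(W | 0, var), dropping the -0.5*log(2*pi) constant per entry
    return float(np.sum(-0.5 * np.log(var) - 0.5 * W ** 2 / var))

W = np.zeros((5, 5, 2, 3))
lp0 = log_prob_W(W)           # W = 0 is the mode of the zero-mean prior
lp1 = log_prob_W(W + 0.01)    # any deviation from 0 lowers the density
```

The decaying variances are what make high-degree (wiggly) coefficients expensive, which is the smoothness prior in action.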
sample_params
¶
Sample initial parameters from the prior.
Returns {"W": shape (degree+1, degree+1, input_dim, embedding_dim)} for 2D, where embedding_dim = input_dim + extra_embedding_dims.

Note: the 3rd dimension is input_dim (the output-space dimension). This matches the einsum in _compute_sqrt, U = einsum("ijde,ij->de", W, phi), where d indexes input_dim.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| key | KeyArray | JAX random key | required |

Returns:

| Name | Type | Description |
|---|---|---|
| params | dict | Parameter dictionary |
Source code in src/psyphy/model/prior.py
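The shape convention above is easy to verify with the einsum itself. This NumPy sketch uses random stand-ins for W and for the basis values phi_ij(x) (an assumption; in psyphy, phi comes from the Chebyshev basis evaluated at a stimulus location):

```python
import numpy as np

degree, input_dim, extra_dims = 4, 2, 1
embedding_dim = input_dim + extra_dims        # 3

rng = np.random.default_rng(0)
# W has the documented shape (degree+1, degree+1, input_dim, embedding_dim).
W = rng.normal(size=(degree + 1, degree + 1, input_dim, embedding_dim))
phi = rng.normal(size=(degree + 1, degree + 1))   # stand-in for basis values phi_ij(x)

# Contract the basis indices (i, j), leaving U(x) with shape (input_dim, embedding_dim):
U = np.einsum("ijde,ij->de", W, phi)
Sigma = U @ U.T + 1e-6 * np.eye(input_dim)        # positive-definite stimulus covariance
```

Because Σ = U U^T + λI with λ > 0, the result is positive-definite regardless of W, which is the point of the square-root parameterization.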
StudentTNoise
¶
TaskLikelihood
¶
Bases: ABC
Abstract base class for task likelihoods.
Subclasses must implement:
- predict(params, ref, comparison, model, *, key) → p(correct) for one trial
The base class provides concrete implementations of:
- loglik(params, data, model, *, key) → Bernoulli log-likelihood over a batch
- simulate(params, refs, comparisons, model, *, key) → simulated responses
The Bernoulli log-likelihood step is identical for all binary-response tasks, so it lives here rather than being re-implemented in every subclass.
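That shared Bernoulli step is just r*log(p) + (1-r)*log(1-p) summed over trials. A minimal NumPy sketch (the function name and the clipping epsilon are illustrative, not psyphy's):

```python
import numpy as np

def bernoulli_loglik(p_correct, responses):
    """Sum of Bernoulli log-likelihoods, with p clipped away from {0, 1} for stability."""
    p = np.clip(p_correct, 1e-6, 1 - 1e-6)
    return float(np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p)))

p = np.array([0.9, 0.8, 0.6])   # per-trial p(correct) from predict()
r = np.array([1, 1, 0])         # observed binary responses
ll = bernoulli_loglik(p, r)
```

Since only `predict` differs between tasks, vmapping it over trials and feeding the result into this one formula is all a subclass needs.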
Methods:

| Name | Description |
|---|---|
| loglik | Compute Bernoulli log-likelihood over a batch of trials. |
| predict | Return p(correct) for a single (ref, comparison) trial. |
| simulate | Simulate observed binary responses for a batch of trials. |
loglik
¶
Compute Bernoulli log-likelihood over a batch of trials.
This is a concrete base-class method: it vmaps predict over trials
then applies the Bernoulli log-likelihood formula. Subclasses only need
to implement predict.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | Any | Model parameters. | required |
| data | Any | Object with refs, comparisons, and responses arrays. | required |
| model | Any | Model instance. | required |
| key | KeyArray | PRNG key, passed as independent per-trial subkeys to predict. | None |

Returns:

| Type | Description |
|---|---|
| ndarray | Scalar sum of Bernoulli log-likelihoods over all trials. |
Source code in src/psyphy/model/likelihood.py
predict
¶
Return p(correct) for a single (ref, comparison) trial.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | Any | Model parameters. | required |
| ref | ndarray, shape (input_dim,) | Reference stimulus. | required |
| comparison | ndarray, shape (input_dim,) | Comparison stimulus. | required |
| model | Any | Model instance (provides covariance structure and noise model). | required |
| key | KeyArray | PRNG key for stochastic tasks. When None, the task falls back to its default_key_seed. | None |

Returns:

| Type | Description |
|---|---|
| ndarray | Scalar p(correct) in (0, 1). |
Source code in src/psyphy/model/likelihood.py
simulate
¶
simulate(params: Any, refs: ndarray, comparisons: ndarray, model: Any, *, key: Any) -> tuple[ndarray, ndarray]
Simulate observed binary responses for a batch of trials.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | Any | Model parameters. | required |
| refs | ndarray, shape (n_trials, input_dim) | Reference stimuli. | required |
| comparisons | ndarray, shape (n_trials, input_dim) | Comparison stimuli. | required |
| model | Any | Model instance. | required |
| key | KeyArray | PRNG key (required; split internally for prediction and sampling). | required |

Returns:

| Name | Type | Description |
|---|---|---|
| responses | jnp.ndarray, shape (n_trials,), dtype int32 | Simulated binary responses (1 = correct, 0 = incorrect). |
| p_correct | ndarray, shape (n_trials,) | Estimated P(correct) per trial, used to draw the responses. |
Source code in src/psyphy/model/likelihood.py
WPPM
¶
WPPM(prior: Prior, likelihood: TaskLikelihood, noise: Any | None = None, *, input_dim: int = 2, extra_dims: int = 1, variance_scale: float = 0.004, decay_rate: float = 0.4, diag_term: float = 1e-06, **model_kwargs: Any)
Bases: Model
Wishart Process Psychophysical Model (WPPM).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prior | Prior | Prior distribution over model parameters. Controls basis_degree (the basis expansion); WPPM delegates to prior.basis_degree to keep parameter sampling and basis evaluation consistent. | required |
| likelihood | TaskLikelihood | Psychophysical task mapping (e.g., OddityTask) that defines how discriminability translates to p(correct) and how the log-likelihood of responses is computed. | required |
| noise | Any | Noise model describing internal representation noise (e.g., GaussianNoise). | None |
| input_dim | int | Dimensionality of the input stimulus space (e.g., 2 for an isoluminant plane, 3 for RGB). Both reference and comparison live in R^{input_dim}. | 2 |
| extra_dims | int | Additional embedding dimensions for basis expansions (beyond input_dim); embedding_dim = input_dim + extra_dims. | 1 |
| variance_scale | float | Global scaling factor for covariance magnitude. | 0.004 |
| decay_rate | float | Smoothness/length-scale for spatial covariance variation. | 0.4 |
| diag_term | float | Small positive value added to the covariance diagonal for numerical stability. | 1e-06 |
| model_kwargs | Any | Reserved for future keyword arguments accepted by the base Model.__init__. Do not pass WPPM math knobs or task/likelihood knobs here. | {} |
Methods:

| Name | Description |
|---|---|
| init_params | Sample initial parameters from the prior. |
| local_covariance | Return local covariance Σ(x) at stimulus location x. |
| log_likelihood_from_data | Compute log-likelihood directly from a batched data object. |
| log_posterior_from_data | Compute log posterior from data. |
| predict_prob | Predict probability of a correct response for a single stimulus. |

Attributes:

| Name | Type | Description |
|---|---|---|
| basis_degree | int \| None | Chebyshev polynomial degree for Wishart process basis expansion. |
| decay_rate | | |
| diag_term | | |
| embedding_dim | int | Dimension of the embedding space. |
| extra_dims | | |
| input_dim | | |
| likelihood | | |
| noise | | |
| prior | | |
| variance_scale | | |
Source code in src/psyphy/model/wppm.py
basis_degree
¶
basis_degree: int | None
Chebyshev polynomial degree for Wishart process basis expansion.
This property delegates to self.prior.basis_degree to ensure consistency between parameter sampling and basis evaluation.
Returns:

| Type | Description |
|---|---|
| int \| None | Degree of Chebyshev polynomial basis (0 = constant, 1 = linear, etc.) |
Notes
WPPM gets its basis_degree parameter from Prior.basis_degree.
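For intuition, the tensor-product Chebyshev basis of a given degree can be evaluated with the standard three-term recurrence. This sketch is illustrative (the function names are not psyphy's) and assumes coordinates already mapped to [-1, 1]; since psyphy stimuli live in [0, 1]^d, a rescale such as 2*x - 1 would precede this, which is an assumption about the library's convention.

```python
import numpy as np

def cheb_T(z: float, degree: int) -> np.ndarray:
    """Chebyshev polynomials T_0..T_degree at z via T_n = 2 z T_{n-1} - T_{n-2}."""
    T = np.empty(degree + 1)
    T[0] = 1.0
    if degree >= 1:
        T[1] = z
    for n in range(2, degree + 1):
        T[n] = 2.0 * z * T[n - 1] - T[n - 2]
    return T

def chebyshev_basis_2d(x: np.ndarray, degree: int) -> np.ndarray:
    """Tensor-product basis phi_ij(x) = T_i(x0) * T_j(x1), shape (degree+1, degree+1)."""
    return np.outer(cheb_T(x[0], degree), cheb_T(x[1], degree))

phi = chebyshev_basis_2d(np.array([0.3, -0.7]), degree=4)   # shape (5, 5)
```

Degree 0 gives a constant field; each added degree lets Σ(x) vary with one more polynomial order in each stimulus coordinate.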
embedding_dim
¶
embedding_dim: int
Dimension of the embedding space.
embedding_dim = input_dim + extra_dims. This represents the full perceptual space: the first input_dim dimensions correspond to observable stimulus features, and the remaining extra_dims are latent dimensions.
Returns:

| Type | Description |
|---|---|
| int | input_dim + extra_dims |
Notes
This is a computed property, not a constructor parameter.
init_params
¶
init_params(key: Array) -> Params
local_covariance
¶
local_covariance(params: Params, x: ndarray) -> ndarray
Return local covariance Σ(x) at stimulus location x.
Wishart mode (basis_degree set): Σ(x) = U(x) @ U(x)^T + diag_term * I, where U(x) is rectangular, shape (input_dim, embedding_dim), when extra_dims > 0.

- Varies smoothly with x
- Guaranteed positive-definite
- Returns the stimulus covariance directly, shape (input_dim, input_dim)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Model parameters; for WPPM, {"W": (degree+1, ..., input_dim, embedding_dim)}. | required |
| x | ndarray, shape (input_dim,) | Stimulus location | required |

Returns:

| Type | Description |
|---|---|
| jnp.ndarray, shape (input_dim, input_dim) | Covariance matrix Σ(x) in stimulus space. |
Source code in src/psyphy/model/wppm.py
log_likelihood_from_data
¶
Compute log-likelihood directly from a batched data object.
Why delegate to the likelihood?

- The likelihood knows the decision rule (oddity, 2AFC, ...).
- The likelihood can use the model (this WPPM) to fetch discriminabilities.
- The likelihood can use the noise model if it needs MC simulation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Model parameters. | required |
| data | TrialData (or any object with refs/comparisons/responses arrays) | Collected trial data. | required |
| key | Array \| None | JAX random key for MC likelihood evaluation. When provided, a fresh noise realization is drawn every call, as required for correct stochastic gradient estimates during optimization. When None, the task falls back to its deterministic default_key_seed. | None |

Returns:

| Name | Type | Description |
|---|---|---|
| loglik | ndarray | Scalar log-likelihood (task-only; add prior outside if needed). |
Source code in src/psyphy/model/wppm.py
log_posterior_from_data
¶
Compute log posterior from data.
This simply adds the prior log-probability to the task log-likelihood. Inference engines (e.g., MAP optimizer) typically optimize this quantity.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Model parameters. | required |
| data | TrialData | Collected trial data. | required |
| key | Array \| None | JAX random key for the MC likelihood. Must be provided during optimization so each gradient step uses a fresh noise realization. When None, falls back to the task's default_key_seed. | None |

Returns:

| Type | Description |
|---|---|
| ndarray | Scalar log posterior = loglik(params \| data) + log_prior(params). |
Source code in src/psyphy/model/wppm.py
predict_prob
¶
Predict probability of a correct response for a single stimulus.
Design choice: WPPM computes discriminability & covariance; the LIKELIHOOD defines how that translates to performance. We therefore delegate to: likelihood.predict(params, stimulus, model=self, noise=self.noise)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | | required |
| stimulus | tuple[ndarray, ndarray] | (reference, comparison) pair in model space. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| p_correct | ndarray | |
Source code in src/psyphy/model/wppm.py
WPPMCovarianceField
¶
WPPMCovarianceField(model, params: dict)
Covariance field for WPPM with a Wishart process. Encapsulates model + parameters to provide a clean evaluation interface for Σ(x) and U(x).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | WPPM | Model providing evaluation logic (local_covariance, _compute_sqrt) | required |
| params | dict | Model parameters. MVP: {"log_diag": (input_dim,)}. Wishart: {"W": (degree+1, degree+1, input_dim, embedding_dim)}, where embedding_dim = input_dim + extra_embedding_dims. Note: the 3rd dimension is input_dim (output/stimulus space), not embedding_dim; this matches the einsum in _compute_sqrt, where U(x) has shape (input_dim, embedding_dim). | required |

Attributes:

| Name | Type | Description |
|---|---|---|
| model | WPPM | Associated model instance |
| params | dict | Parameter dictionary |
Notes
Implements the CovarianceField protocol for polymorphic use.
Construct covariance field from model and parameters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | WPPM | Model providing evaluation logic | required |
| params | dict | Parameter dictionary | required |

Methods:

| Name | Description |
|---|---|
| cov | Evaluate covariance matrix Σ(x) at stimulus location x. |
| cov_batch | Evaluate covariance at multiple locations (vectorized). |
| from_params | Create field from arbitrary parameters. |
| from_posterior | Create covariance field from fitted posterior. |
| from_prior | Sample a covariance field from the prior. |
| sqrt_cov | Evaluate U(x) such that Σ(x) = U(x) @ U(x)^T + diag_term*I. |
| sqrt_cov_batch | Vectorized evaluation of U(x) at multiple locations. |
Source code in src/psyphy/model/covariance_field.py
cov
¶
Evaluate covariance matrix Σ(x) at stimulus location x.
.. deprecated::
Use field(x) instead for unified single/batch API.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | ndarray, shape (input_dim,) | Stimulus location in [0, 1]^d | required |

Returns:

| Type | Description |
|---|---|
| ndarray, shape (input_dim, input_dim) | Covariance matrix Σ(x) in stimulus space |
Notes
With the rectangular U design, this always returns stimulus-space covariance (input_dim, input_dim), regardless of extra_dims.
Source code in src/psyphy/model/covariance_field.py
cov_batch
¶
Evaluate covariance at multiple locations (vectorized).
.. deprecated::
Use field(X) instead for unified single/batch API.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| X | ndarray, shape (n_points, input_dim) | Multiple stimulus locations | required |

Returns:

| Type | Description |
|---|---|
| ndarray, shape (n_points, dim, dim) | Covariance matrices at each location |
Source code in src/psyphy/model/covariance_field.py
from_params
¶
from_params(model, params: dict) -> WPPMCovarianceField
Create field from arbitrary parameters.
Useful for:

- Custom initialization
- Posterior samples
- Intermediate optimization checkpoints
- Testing
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | WPPM | Model providing evaluation logic | required |
| params | dict | Parameter dictionary | required |

Returns:

| Type | Description |
|---|---|
| WPPMCovarianceField | |
Source code in src/psyphy/model/covariance_field.py
from_posterior
¶
from_posterior(posterior) -> WPPMCovarianceField
Create covariance field from fitted posterior.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| posterior | ParameterPosterior | Fitted posterior (e.g., from model.fit()) | required |

Returns:

| Type | Description |
|---|---|
| WPPMCovarianceField | Field representing posterior estimate of Σ(x) |
Notes
For MAP posteriors, uses θ_MAP. For variational posteriors, could use posterior mean or sample.
Source code in src/psyphy/model/covariance_field.py
from_prior
¶
from_prior(model, key: KeyArray) -> WPPMCovarianceField
Sample a covariance field from the prior.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | WPPM | Model defining prior distribution | required |
| key | KeyArray | PRNG key for sampling | required |

Returns:

| Type | Description |
|---|---|
| WPPMCovarianceField | Field sampled from p(Σ(x)) |
Source code in src/psyphy/model/covariance_field.py
sqrt_cov
¶
Evaluate U(x) such that Σ(x) = U(x) @ U(x)^T + diag_term*I.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | ndarray, shape (input_dim,) | Stimulus location | required |

Returns:

| Type | Description |
|---|---|
| ndarray, shape (input_dim, embedding_dim) | Rectangular square-root matrix U(x); embedding_dim = input_dim + extra_dims |
Notes
Only available in Wishart mode. MVP mode uses diagonal parameterization without explicit U matrices.
In the rectangular design (Hong et al.), U is (input_dim, embedding_dim).
Source code in src/psyphy/model/covariance_field.py
sqrt_cov_batch
¶
Vectorized evaluation of U(x) at multiple locations.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| X | ndarray, shape (n_points, input_dim) | Multiple stimulus locations | required |

Returns:

| Type | Description |
|---|---|
| ndarray, shape (n_points, input_dim, embedding_dim) | Rectangular square-root matrices at each location; embedding_dim = input_dim + extra_dims |

Raises:

| Type | Description |
|---|---|
| ValueError | If in MVP mode. |
Notes
In the rectangular design (Hong et al.), U is (input_dim, embedding_dim).
Source code in src/psyphy/model/covariance_field.py
Wishart Process Psychophysical Model (WPPM)¶
wppm
¶
wppm.py
Wishart Process Psychophysical Model (WPPM)
Goals

Wishart Process Psychophysical Model (WPPM):

- Expose the hyperparameters needed to reproduce, for example, the model configuration used in Hong et al.:
    - extra_dims: embedding size for basis expansions
    - variance_scale: global covariance scale
    - decay_rate: smoothness/length-scale for the covariance field
    - diag_term: numerical stabilizer added to covariance diagonals
All numerics use JAX (jax.numpy as jnp) to support autodiff and optax optimizers
Classes:

| Name | Description |
|---|---|
| WPPM | Wishart Process Psychophysical Model (WPPM). |

Attributes:

| Name | Type | Description |
|---|---|---|
| Params | | |
| Stimulus | | |
WPPM
¶
WPPM(prior: Prior, likelihood: TaskLikelihood, noise: Any | None = None, *, input_dim: int = 2, extra_dims: int = 1, variance_scale: float = 0.004, decay_rate: float = 0.4, diag_term: float = 1e-06, **model_kwargs: Any)
Bases: Model
Wishart Process Psychophysical Model (WPPM).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
input_dim
|
int
|
Dimensionality of the input stimulus space (e.g., 2 for isoluminant plane, 3 for RGB). Both reference and comparison live in R^{input_dim}. |
2
|
prior
|
Prior
|
Prior distribution over model parameters. Controls basis_degree in WPPM (basis expansion). The WPPM delegates to prior.basis_degree to ensure consistency between parameter sampling and basis evaluation. |
required |
likelihood
|
TaskLikelihood
|
Psychophysical task mapping that defines how discriminability translates to p(correct) and how log-likelihood of responses is computed. (e.g., OddityTask) |
required |
noise
|
Any
|
Noise model describing internal representation noise (e.g., GaussianNoise). |
None
|
hyperparameters
extra_dims : int, default=0 Additional embedding dimensions for basis expansions (beyond input_dim). embedding_dim = input_dim + extra_dims. variance_scale : float, default=1.0 Global scaling factor for covariance magnitude decay_rate : float, default=1.0 Smoothness/length-scale for spatial covariance variation diag_term : float, default=1e-6 Small positive value added to the covariance diagonal for numerical stability.
model_kwargs : Any
Reserved for future keyword arguments accepted by the base Model.__init__.
Do not pass WPPM math knobs or task/likelihood knobs here.
Methods:
| Name | Description |
|---|---|
init_params |
Sample initial parameters from the prior. |
local_covariance |
Return local covariance Σ(x) at stimulus location x. |
log_likelihood_from_data |
Compute log-likelihood directly from a batched data object. |
log_posterior_from_data |
Compute log posterior from data. |
predict_prob |
Predict probability of a correct response for a single stimulus. |
Attributes:
| Name | Type | Description |
|---|---|---|
basis_degree |
int | None
|
Chebyshev polynomial degree for Wishart process basis expansion. |
decay_rate |
|
|
diag_term |
|
|
embedding_dim |
int
|
Dimension of the embedding space. |
extra_dims |
|
|
input_dim |
|
|
likelihood |
|
|
noise |
|
|
prior |
|
|
variance_scale |
|
Source code in src/psyphy/model/wppm.py
basis_degree
¶
basis_degree: int | None
Chebyshev polynomial degree for Wishart process basis expansion.
This property delegates to self.prior.basis_degree to ensure consistency between parameter sampling and basis evaluation.
Returns:
| Type | Description |
|---|---|
int | None
|
Degree of Chebyshev polynomial basis (0 = constant, 1 = linear, etc.) |
Notes
WPPM gets its basis_degree parameter from Prior.basis_degree.
embedding_dim
¶
embedding_dim: int
Dimension of the embedding space.
embedding_dim = input_dim + extra_dims. this represents the full perceptual space where: - First input_dim dimensions correspond to observable stimulus features - Remaining extra_dims are latent dimensions
Returns:
| Type | Description |
|---|---|
int
|
input_dim + extra_dims |
Notes
This is a computed property, not a constructor parameter.
init_params
¶
init_params(key: Array) -> Params
local_covariance
¶
local_covariance(params: Params, x: ndarray) -> ndarray
Return local covariance Σ(x) at stimulus location x.
Wishart mode (basis_degree set): Σ(x) = U(x) @ U(x)^T + diag_term * I where U(x) is rectangular (input_dim, embedding_dim) if extra_dims > 0. - Varies smoothly with x - Guaranteed positive-definite - Returns stimulus covariance directly (input_dim, input_dim)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
params
|
dict
|
Model parameters: - WPPM: {"W": (degree+1, ..., input_dim, embedding_dim)} |
required |
x
|
(ndarray, shape(input_dim))
|
Stimulus location |
required |
Returns:
| Type | Description |
|---|---|
Σ : jnp.ndarray, shape (input_dim, input_dim)
|
Covariance matrix in stimulus space. |
Source code in src/psyphy/model/wppm.py
log_likelihood_from_data
¶
Compute log-likelihood directly from a batched data object.
Why delegate to the likelihood? - The likelihood knows the decision rule (oddity, 2AFC, ...). - The likelihood can use the model (this WPPM) to fetch discriminabilities. - The likelihood can use the noise model if it needs MC simulation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
params
|
dict
|
Model parameters. |
required |
data
|
TrialData (or any object with refs/comparisons/responses arrays)
|
Collected trial data. |
required |
key
|
Array | None
|
JAX random key for MC likelihood evaluation. When provided, a fresh
noise realization is drawn every call — required for correct stochastic
gradient estimates during optimization. When None, the task falls back
to |
None
|
Returns:
| Name | Type | Description |
|---|---|---|
loglik |
ndarray
|
Scalar log-likelihood (task-only; add prior outside if needed). |
Source code in src/psyphy/model/wppm.py
log_posterior_from_data
¶
Compute log posterior from data.
This simply adds the prior log-probability to the task log-likelihood. Inference engines (e.g., MAP optimizer) typically optimize this quantity.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
params
|
dict
|
Model parameters. |
required |
data
|
TrialData
|
Collected trial data. |
required |
key
|
Array | None
|
JAX random key for the MC likelihood. Must be provided during
optimization so each gradient step uses a fresh noise realization.
When None, falls back to |
None
|
Returns:
| Type | Description |
|---|---|
ndarray
|
Scalar log posterior = loglik(params | data) + log_prior(params). |
Source code in src/psyphy/model/wppm.py
predict_prob
¶
Predict probability of a correct response for a single stimulus.
Design choice: WPPM computes discriminability & covariance; the LIKELIHOOD defines how that translates to performance. We therefore delegate to: likelihood.predict(params, stimulus, model=self, noise=self.noise)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
params
|
dict
|
|
required |
stimulus
|
tuple[ndarray, ndarray]
|
(reference, comparison) pair in model space. |
required |
Returns:
| Name | Type | Description |
|---|---|---|
p_correct |
ndarray
|
|
Source code in src/psyphy/model/wppm.py
Priors¶
prior
¶
prior.py
Prior distributions for WPPM parameters
Hyperparameters:

- variance_scale: global scaling factor for covariance magnitude
- decay_rate: smoothness controlling spatial variation
- extra_embedding_dims: embedding dimension for basis expansions
Connections
- WPPM calls Prior.sample_params() to initialize model parameters
- WPPM adds Prior.log_prob(params) to task log-likelihoods to form the log posterior
- Prior will generate structured parameters for basis expansions and decay_rate-controlled smooth covariance fields
Classes:

| Name | Description |
|---|---|
| Prior | Prior distribution over WPPM parameters. |

Attributes:

| Name | Type | Description |
|---|---|---|
| Params | | |
Prior
¶
Prior(input_dim: int = 2, basis_degree: int = 4, variance_scale: float = 0.004, decay_rate: float = 0.4, extra_embedding_dims: int = 1)
Prior distribution over WPPM parameters
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| input_dim | int | Dimensionality of the model space (same as WPPM.input_dim). | 2 |
| basis_degree | int \| None | Degree of the Chebyshev basis for the Wishart process. If set, uses Wishart mode with W coefficients. | None |
| variance_scale | float | Prior variance for the degree-0 (constant) coefficient in Wishart mode. Controls the overall scale of covariances. | 1.0 |
| decay_rate | float | Geometric decay rate for prior variance over higher-degree coefficients: prior variance for a degree-d coefficient = variance_scale * decay_rate^d. Smaller decay_rate -> stronger smoothness prior. | 0.5 |
| extra_embedding_dims | int | Additional latent dimensions in U matrices beyond input dimensions. Allows richer ellipsoid shapes in Wishart mode. | 0 |
Methods:
| Name | Description |
|---|---|
| log_prob | Compute log prior density (up to a constant). |
| sample_params | Sample initial parameters from the prior. |
Attributes:
| Name | Type | Description |
|---|---|---|
| basis_degree | int | |
| decay_rate | float | |
| extra_embedding_dims | int | |
| input_dim | int | |
| variance_scale | float | |
log_prob
¶
log_prob(params: Params) -> ndarray
Compute log prior density (up to a constant)
Gaussian prior on W with smoothness via decay_rate:

log p(W) = Σ_ij log N(W_ij | 0, σ_ij²), where σ_ij² is the prior variance for coefficient W_ij.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Parameter dictionary. | required |
Returns:
| Name | Type | Description |
|---|---|---|
| log_prob | float | Log prior probability (up to a normalizing constant). |
Source code in src/psyphy/model/prior.py
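The density above can be sketched numerically. The shapes and the total-degree (i + j) decay indexing below are assumptions for illustration (the constructor signature's `variance_scale=0.004`, `decay_rate=0.4` defaults are used); the library's exact indexing may differ:

```python
import jax.numpy as jnp
from jax.scipy.stats import norm

# Per-coefficient prior std derived from variance_scale and decay_rate.
# ASSUMPTION: variance decays with total degree (i + j) of the coefficient.
degree, input_dim, embed = 4, 2, 3
variance_scale, decay_rate = 0.004, 0.4
i = jnp.arange(degree + 1)
var_ij = variance_scale * decay_rate ** (i[:, None] + i[None, :])
sigma_ij = jnp.sqrt(var_ij)[..., None, None]   # broadcast over (d, e) axes

# log p(W) = sum of independent zero-mean Gaussian log-densities
W = jnp.zeros((degree + 1, degree + 1, input_dim, embed))
log_prob = jnp.sum(norm.logpdf(W, loc=0.0, scale=sigma_ij))
```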
sample_params
¶
Sample initial parameters from the prior.
Returns {"W": shape (degree+1, degree+1, input_dim, embedding_dim)} for 2D, where embedding_dim = input_dim + extra_embedding_dims
Note: The 3rd dimension is input_dim (output space dimension). This matches the einsum in _compute_sqrt: U = einsum("ijde,ij->de", W, phi) where d indexes input_dim.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| key | | JAX random key. | required |
Returns:
| Name | Type | Description |
|---|---|---|
| params | dict | Parameter dictionary. |
Source code in src/psyphy/model/prior.py
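The shape convention and einsum noted above can be verified with placeholder values (the all-ones `phi` stands in for evaluated Chebyshev basis products and is not the library's basis):

```python
import jax
import jax.numpy as jnp

degree, input_dim, extra = 4, 2, 1
embed = input_dim + extra                  # embedding_dim = input_dim + extra_embedding_dims
key = jax.random.PRNGKey(0)
# W: (degree+1, degree+1, input_dim, embedding_dim), as returned for 2D inputs
W = jax.random.normal(key, (degree + 1, degree + 1, input_dim, embed))
# phi: basis functions evaluated at one x (placeholder values here)
phi = jnp.ones((degree + 1, degree + 1))
U = jnp.einsum("ijde,ij->de", W, phi)      # U(x): (input_dim, embedding_dim)
Sigma = U @ U.T                            # Σ(x) = U U^T (WPPM adds a λI ridge)
```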
Noise¶
noise
¶
Classes:
| Name | Description |
|---|---|
| GaussianNoise | |
| StudentTNoise | |
GaussianNoise
¶
GaussianNoise(sigma: float = 1.0)
Likelihood (defined by Tasks)¶
likelihood
¶
psyphy.model.likelihood
Task likelihoods for psychophysical experiments.
This module defines task-specific mappings from a model (e.g., WPPM) and stimuli to response likelihoods.
Current direction
OddityTask: the log-likelihood is computed via Monte Carlo observer
simulation of the full 3-stimulus oddity decision rule (two identical references,
one comparison).
The public API is:

- TaskLikelihood.predict(params, stimuli, model, noise): optional fast predictor for p(correct). For MC-only tasks this may be unimplemented.
- TaskLikelihood.loglik(params, data, model, noise, **kwargs): compute the log-likelihood of observed responses under this task.
Connections
- WPPM delegates to the task to compute likelihood.
- Noise models are passed through so likelihoods can simulate observer responses.
Classes:
| Name | Description |
|---|---|
| OddityTask | Three-alternative forced-choice oddity task (MC-based only). |
| OddityTaskConfig | Configuration for OddityTask. |
| TaskLikelihood | Abstract base class for task likelihoods. |
OddityTask
¶
OddityTask(config: OddityTaskConfig | None = None)
Bases: TaskLikelihood
Three-alternative forced-choice oddity task (MC-based only).
Implements the full 3-stimulus oddity task using Monte Carlo simulation:

- Samples three internal representations per trial (z0, z1, z2)
- Uses the proper oddity decision rule with three pairwise distances
- Suitable for complex covariance structures
Notes
MC simulation in loglik() (full 3-stimulus oddity):

1. Sample three internal representations: z_ref, z_refprime ~ N(ref, Σ_ref), z_comparison ~ N(comparison, Σ_comparison)
2. Compute the average covariance: Σ_avg = (2/3) Σ_ref + (1/3) Σ_comparison
3. Compute three pairwise Mahalanobis distances:
   - d²(z_ref, z_refprime): distance between the two reference samples
   - d²(z_ref, z_comparison): distance from ref to comparison
   - d²(z_refprime, z_comparison): distance from reference_prime to comparison
4. Apply the oddity decision rule: delta = min(d²(z_ref, z_comparison), d²(z_refprime, z_comparison)) - d²(z_ref, z_refprime)
5. Logistic smoothing: P(correct) ≈ logistic.cdf(delta / bandwidth)
6. Average over samples
Examples:
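A runnable sketch of the MC procedure described in the notes above, assuming fixed covariances at the two stimuli. The function name, argument order, and the `num_samples`/`bandwidth` values are illustrative, not the library's API:

```python
import jax
import jax.numpy as jnp
from jax.scipy.stats import logistic

def oddity_p_correct(key, ref, comparison, Sigma_ref, Sigma_cmp,
                     num_samples=2000, bandwidth=0.1):
    """MC estimate of P(correct) for the 3-stimulus oddity rule (sketch)."""
    k0, k1, k2 = jax.random.split(key, 3)
    L_ref = jnp.linalg.cholesky(Sigma_ref)
    L_cmp = jnp.linalg.cholesky(Sigma_cmp)
    d = ref.shape[0]
    # Step 1: three internal representations per MC sample
    z0 = ref + jax.random.normal(k0, (num_samples, d)) @ L_ref.T
    z1 = ref + jax.random.normal(k1, (num_samples, d)) @ L_ref.T
    z2 = comparison + jax.random.normal(k2, (num_samples, d)) @ L_cmp.T
    # Step 2: average covariance defines the Mahalanobis metric
    Sigma_avg = (2.0 / 3.0) * Sigma_ref + (1.0 / 3.0) * Sigma_cmp
    P = jnp.linalg.inv(Sigma_avg)
    m2 = lambda a, b: jnp.einsum("nd,de,ne->n", a - b, P, a - b)
    # Steps 3-4: pairwise distances and the oddity decision rule
    delta = jnp.minimum(m2(z0, z2), m2(z1, z2)) - m2(z0, z1)
    # Steps 5-6: logistic smoothing, then average over samples
    return jnp.mean(logistic.cdf(delta / bandwidth))
```

With identical reference and comparison the estimate sits near chance (1/3); with well-separated stimuli it approaches 1.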
Methods:
| Name | Description |
|---|---|
| loglik | Compute Bernoulli log-likelihood over a batch of trials. |
| predict | Return p(correct) for a single (ref, comparison) trial via MC simulation. |
| simulate | Simulate observed binary responses for a batch of trials. |
Attributes:
| Name | Type | Description |
|---|---|---|
| config | | |
Source code in src/psyphy/model/likelihood.py
loglik
¶
Compute Bernoulli log-likelihood over a batch of trials.
This is a concrete base-class method: it vmaps predict over trials
then applies the Bernoulli log-likelihood formula. Subclasses only need
to implement predict.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| params | Any | Model parameters. | required |
| data | Any | Trial data container (e.g., TrialData) with per-trial stimuli and responses. | required |
| model | Any | Model instance. | required |
| key | KeyArray | PRNG key. Passed as independent per-trial subkeys to predict. | None |
Returns:
| Type | Description |
|---|---|
| ndarray | Scalar sum of Bernoulli log-likelihoods over all trials. |
Source code in src/psyphy/model/likelihood.py
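The vmap-then-Bernoulli pattern described above can be sketched as follows. The `predict` stand-in (an exponential-saturation formula) and the flat argument list are illustrative assumptions; in the library, `predict` is the task's MC predictor and the trial arrays live on a data object:

```python
import jax
import jax.numpy as jnp

# Stand-in predict(): p(correct) for one (ref, comparison) pair,
# rising from chance (1/3) toward 1 with separation. Illustrative only.
def predict(params, ref, comparison, key):
    d2 = jnp.sum((comparison - ref) ** 2) / params["sigma"] ** 2
    return 1.0 / 3.0 + (2.0 / 3.0) * (1.0 - jnp.exp(-d2))

def loglik(params, refs, comparisons, responses, key):
    keys = jax.random.split(key, refs.shape[0])      # one subkey per trial
    p = jax.vmap(predict, in_axes=(None, 0, 0, 0))(params, refs, comparisons, keys)
    p = jnp.clip(p, 1e-6, 1.0 - 1e-6)                # numerical safety
    # Bernoulli log-likelihood, summed over trials
    return jnp.sum(responses * jnp.log(p) + (1.0 - responses) * jnp.log1p(-p))
```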
predict
¶
Return p(correct) for a single (ref, comparison) trial via MC simulation.
MC controls (num_samples, bandwidth) are read from OddityTaskConfig. Pass key to control randomness; when None, config.default_key_seed is used.
Source code in src/psyphy/model/likelihood.py
simulate
¶
simulate(params: Any, refs: ndarray, comparisons: ndarray, model: Any, *, key: Any) -> tuple[ndarray, ndarray]
Simulate observed binary responses for a batch of trials.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| params | Any | Model parameters. | required |
| refs | ndarray, shape (n_trials, input_dim) | Reference stimuli. | required |
| comparisons | ndarray, shape (n_trials, input_dim) | Comparison stimuli. | required |
| model | Any | Model instance. | required |
| key | KeyArray | PRNG key (required; split internally for prediction and sampling). | required |
Returns:
| Name | Type | Description |
|---|---|---|
| responses | ndarray, shape (n_trials,), dtype int32 | Simulated binary responses (1 = correct, 0 = incorrect). |
| p_correct | ndarray, shape (n_trials,) | Estimated P(correct) per trial used to draw the responses. |
Source code in src/psyphy/model/likelihood.py
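The split-then-sample pattern ("split internally for prediction and sampling") can be sketched as below; `predict_batch` is a hypothetical stand-in for a vmapped task predictor, not a library function:

```python
import jax
import jax.numpy as jnp

def simulate(params, refs, comparisons, predict_batch, *, key):
    """Sketch of simulate(): split the key, estimate P(correct), draw responses."""
    key_pred, key_sample = jax.random.split(key)     # prediction vs. sampling keys
    p_correct = predict_batch(params, refs, comparisons, key_pred)
    responses = jax.random.bernoulli(key_sample, p_correct).astype(jnp.int32)
    return responses, p_correct
```

Splitting the key keeps the MC prediction noise independent of the Bernoulli response draws.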
OddityTaskConfig
¶
Configuration for OddityTask.
This is the single source of truth for MC likelihood controls.
Attributes:
| Name | Type | Description |
|---|---|---|
| num_samples | int | Number of Monte Carlo samples per trial. |
| bandwidth | float | Logistic CDF smoothing bandwidth. |
| default_key_seed | int | Seed used when no key is provided (keeps behavior deterministic by default while allowing reproducibility control upstream). |
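The "single source of truth" pattern can be sketched as a frozen dataclass mirroring the attributes above; the default values here are placeholders, not the library's actual defaults:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OddityTaskConfig:
    """Sketch of the MC-control config (placeholder defaults)."""
    num_samples: int = 1000        # Monte Carlo samples per trial
    bandwidth: float = 0.1         # logistic CDF smoothing bandwidth
    default_key_seed: int = 0      # seed used when no key is passed

# Override one control; the rest keep their defaults
cfg = OddityTaskConfig(num_samples=500)
```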
TaskLikelihood
¶
Bases: ABC
Abstract base class for task likelihoods.
Subclasses must implement:
- predict(params, ref, comparison, model, *, key) → p(correct) for one trial
The base class provides concrete implementations of:
- loglik(params, data, model, *, key) → Bernoulli log-likelihood over a batch
- simulate(params, refs, comparisons, model, *, key) → simulated responses
The Bernoulli log-likelihood step is identical for all binary-response tasks, so it lives here rather than being re-implemented in every subclass.
Methods:
| Name | Description |
|---|---|
| loglik | Compute Bernoulli log-likelihood over a batch of trials. |
| predict | Return p(correct) for a single (ref, comparison) trial. |
| simulate | Simulate observed binary responses for a batch of trials. |
loglik
¶
Compute Bernoulli log-likelihood over a batch of trials.
This is a concrete base-class method: it vmaps predict over trials
then applies the Bernoulli log-likelihood formula. Subclasses only need
to implement predict.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| params | Any | Model parameters. | required |
| data | Any | Trial data container (e.g., TrialData) with per-trial stimuli and responses. | required |
| model | Any | Model instance. | required |
| key | KeyArray | PRNG key. Passed as independent per-trial subkeys to predict. | None |
Returns:
| Type | Description |
|---|---|
| ndarray | Scalar sum of Bernoulli log-likelihoods over all trials. |
Source code in src/psyphy/model/likelihood.py
predict
¶
Return p(correct) for a single (ref, comparison) trial.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| params | Any | Model parameters. | required |
| ref | ndarray, shape (input_dim,) | Reference stimulus. | required |
| comparison | ndarray, shape (input_dim,) | Comparison stimulus. | required |
| model | Any | Model instance (provides the covariance structure and noise model). | required |
| key | KeyArray | PRNG key for stochastic tasks. When None, the task falls back to its config's default_key_seed. | None |
Returns:
| Type | Description |
|---|---|
| ndarray | Scalar p(correct) in (0, 1). |
Source code in src/psyphy/model/likelihood.py
simulate
¶
simulate(params: Any, refs: ndarray, comparisons: ndarray, model: Any, *, key: Any) -> tuple[ndarray, ndarray]
Simulate observed binary responses for a batch of trials.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| params | Any | Model parameters. | required |
| refs | ndarray, shape (n_trials, input_dim) | Reference stimuli. | required |
| comparisons | ndarray, shape (n_trials, input_dim) | Comparison stimuli. | required |
| model | Any | Model instance. | required |
| key | KeyArray | PRNG key (required; split internally for prediction and sampling). | required |
Returns:
| Name | Type | Description |
|---|---|---|
| responses | ndarray, shape (n_trials,), dtype int32 | Simulated binary responses (1 = correct, 0 = incorrect). |
| p_correct | ndarray, shape (n_trials,) | Estimated P(correct) per trial used to draw the responses. |