psyphy
Psychophysical modeling and adaptive trial placement.
This package implements the Wishart Process Psychophysical Model (WPPM) with modular components for priors, task likelihoods, and noise models. The model can be fitted to incoming subject data and used to adaptively select the next trials to present to the subject, which makes it possible to estimate psychophysical parameters (e.g., threshold contours) efficiently, with minimal trials.
Core design
- WPPM (model/wppm.py):
  - Structural definition of the psychophysical model.
  - Maintains parameterization of local covariance fields.
  - Computes discriminability between stimuli.
  - Delegates trial likelihoods and predictions to the task.
- Prior (model/prior.py):
  - Defines the distribution over model parameters.
  - WPPM: structured prior over basis weights and decay_rate-controlled covariance fields.
- TaskLikelihood (model/likelihood.py):
  - Encodes the psychophysical decision rule.
  - WPPM: loglik and predict implemented via Monte Carlo observer simulations, using the noise model explicitly.
- NoiseModel (model/noise.py):
  - Defines the distribution of internal representation noise.
  - WPPM: GaussianNoise or StudentTNoise option.
Unified import style
Top-level (core models + session):

```python
from psyphy import WPPM, Prior, OddityTask, GaussianNoise, MAPOptimizer
from psyphy import ExperimentSession, ResponseData, TrialBatch
```

Subpackages:

```python
from psyphy.model import WPPM, Prior, OddityTask, GaussianNoise, StudentTNoise
from psyphy.inference import MAPOptimizer, LangevinSampler, LaplaceApproximation
from psyphy.acquisition import expected_improvement, upper_confidence_bound, mutual_information
from psyphy.acquisition import optimize_acqf, optimize_acqf_discrete, optimize_acqf_random
from psyphy.trial_placement import GridPlacement, SobolPlacement
from psyphy.utils import grid_candidates, sobol_candidates, custom_candidates, chebyshev_basis
```
Data flow
- A ResponseData object (psyphy.data) contains trial stimuli and responses.
- WPPM.init_params(prior) samples parameter initialization.
- Inference engines optimize the log posterior: `log_posterior = task.loglik(params, data, model=WPPM, noise=NoiseModel) + prior.log_prob(params)`
- Posterior predictions (p(correct), threshold ellipses) are always obtained through WPPM delegating to TaskLikelihood.
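The log-posterior objective that inference engines optimize can be sketched end to end with a toy stand-in. Everything below (the one-parameter sigmoid model, the Gaussian prior) is illustrative, not psyphy's actual API:

```python
import numpy as np

def log_posterior(params, responses, p_correct_fn, prior_var=1.0):
    """Sketch of the quantity inference engines optimize: the Bernoulli
    log-likelihood of observed responses plus a Gaussian log-prior over
    the parameters (up to a constant)."""
    p = np.clip(p_correct_fn(params), 1e-6, 1 - 1e-6)   # predicted P(correct) per trial
    loglik = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    log_prior = -0.5 * np.sum(params ** 2) / prior_var  # N(0, prior_var) on each parameter
    return loglik + log_prior

# Hypothetical model: one parameter setting a flat P(correct) via a sigmoid.
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
responses = np.array([1, 1, 0, 1])
lp = log_posterior(np.array([0.5]), responses, lambda w: np.full(4, sigmoid(w[0])))
```

In the real package the likelihood term comes from `task.loglik` and the prior term from `prior.log_prob`, but the additive structure is the same.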
Extensibility
- To add a new task: subclass TaskLikelihood, implement predict/loglik.
- To add a new noise model: subclass NoiseModel, implement logpdf/sample.
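The subclassing pattern can be sketched with a stand-in base class. `TaskLikelihoodSketch` and `TwoAFCSketch` are illustrative names, and the distance-based decision rule is a placeholder, not the package's actual base class or decision model:

```python
from abc import ABC, abstractmethod
import numpy as np

class TaskLikelihoodSketch(ABC):
    """Illustrative stand-in for a TaskLikelihood-style base class:
    a concrete loglik built on top of an abstract predict."""

    @abstractmethod
    def predict(self, params, ref, comparison):
        """Return P(correct) for one (ref, comparison) trial."""

    def loglik(self, params, refs, comparisons, responses):
        # Bernoulli log-likelihood over trials, given per-trial predictions.
        p = np.array([self.predict(params, r, c) for r, c in zip(refs, comparisons)])
        p = np.clip(p, 1e-6, 1 - 1e-6)
        return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

class TwoAFCSketch(TaskLikelihoodSketch):
    """Hypothetical 2AFC task: P(correct) from a simple distance rule."""

    def predict(self, params, ref, comparison):
        d = np.linalg.norm(np.asarray(comparison) - np.asarray(ref)) / params["sigma"]
        return 1.0 / (1.0 + np.exp(-d))  # logistic link as a placeholder

task = TwoAFCSketch()
ll = task.loglik({"sigma": 1.0}, refs=[[0, 0]], comparisons=[[1, 0]], responses=np.array([1]))
```

A new task only overrides `predict`; the shared Bernoulli `loglik` comes for free, mirroring the division of labor described above.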
Classes:

| Name | Description |
|---|---|
| GaussianNoise | |
| LangevinSampler | Langevin sampler (stub). |
| LaplaceApproximation | Laplace approximation around MAP estimate. |
| MAPOptimizer | MAP (Maximum A Posteriori) optimizer. |
| OddityTask | Three-alternative forced-choice oddity task (MC-based only). |
| OddityTaskConfig | Configuration for OddityTask. |
| Prior | Prior distribution over WPPM parameters. |
| ResponseData | Python-friendly incremental trial log. |
| StudentTNoise | |
| TrialBatch | Container for a proposed batch of trials. |
| WPPM | Wishart Process Psychophysical Model (WPPM). |
GaussianNoise

`GaussianNoise(sigma: float = 1.0)`
LangevinSampler

Bases: InferenceEngine

Langevin sampler (stub).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| steps | int | Number of Langevin steps. | 1000 |
| step_size | float | Integration step size. | 1e-3 |
| temperature | float | Noise scale (temperature). | 1.0 |

Methods:

| Name | Description |
|---|---|
| fit | Fit model parameters with Langevin dynamics (stub). |

Attributes:

| Name | Type | Description |
|---|---|---|
| step_size | | |
| steps | | |
| temperature | | |

Source code in src/psyphy/inference/langevin.py
fit

`fit(model, data) -> MAPPosterior`

Fit model parameters with Langevin dynamics (stub).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | WPPM | Model instance. | required |
| data | ResponseData | Observed trials. | required |

Source code in src/psyphy/inference/langevin.py
LaplaceApproximation

Bases: InferenceEngine

Laplace approximation around MAP estimate.

Methods:

| Name | Description |
|---|---|
| from_map | Construct a Gaussian approximation centered at MAP. |

fit

Fit model parameters to data.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | WPPM | Psychophysical model to fit. | required |
| data | ResponseData | Observed trials. | required |

Returns:

| Type | Description |
|---|---|
| Posterior | Posterior object wrapping fitted params and model reference. |

Source code in src/psyphy/inference/base.py
from_map

`from_map(map_posterior: MAPPosterior) -> MAPPosterior`

Return posterior approximation from MAP.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| map_posterior | Posterior | Posterior object from MAP optimization. | required |

Returns:

| Type | Description |
|---|---|
| MAPPosterior | Posterior distribution containing Laplace approximation. |

Source code in src/psyphy/inference/laplace.py
MAPOptimizer

`MAPOptimizer(steps: int = 500, learning_rate: float = 5e-05, momentum: float = 0.9, optimizer: GradientTransformation | None = None, *, track_history: bool = True, log_every: int = 1, progress_every: int = 10, show_progress: bool = False, max_grad_norm: float | None = 1.0)`

Bases: InferenceEngine

MAP (Maximum A Posteriori) optimizer.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| steps | int | Number of optimization steps. | 500 |
| optimizer | GradientTransformation | Optax optimizer to use. Default: SGD with momentum. | None |

Notes

- Loss function = negative log posterior.
- Gradients computed with jax.grad.

Create a MAP optimizer.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| steps | int | Number of optimization steps. | 500 |
| optimizer | GradientTransformation \| None | Optax optimizer to use. | None |
| learning_rate | float | Learning rate for the default optimizer (SGD with momentum). | 5e-05 |
| momentum | float | Momentum for the default optimizer (SGD with momentum). | 0.9 |
| track_history | bool | When True, record loss history during fitting for plotting. | True |
| log_every | int | Record every N steps (also records the last step). | 1 |
| progress_every | int | Update the progress-bar loss display every N steps (and the last step) when show_progress=True. Kept separate from log_every so you can record loss at high frequency for plotting (e.g., log_every=1) without forcing a device-to-host sync for the progress UI at every step. | 10 |
| show_progress | bool | When True, display a tqdm progress bar during fitting. This is a UI feature: if tqdm is not installed, fitting proceeds without a progress bar. | False |
| max_grad_norm | float \| None | If set, clip gradients by global norm to this value before applying optimizer updates. This stabilizes optimization when gradients blow up. | 1.0 |

Methods:

| Name | Description |
|---|---|
| fit | Fit model parameters with MAP optimization. |
| get_history | Return (steps, losses) recorded during the last fit when tracking was enabled. |

Attributes:

| Name | Type | Description |
|---|---|---|
| log_every | | |
| loss_history | list[float] | |
| loss_steps | list[int] | |
| max_grad_norm | | |
| optimizer | | |
| progress_every | | |
| show_progress | | |
| steps | | |
| track_history | | |

Source code in src/psyphy/inference/map_optimizer.py
fit

`fit(model, data, init_params: dict | None = None, seed: int | None = None) -> MAPPosterior`

Fit model parameters with MAP optimization.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | WPPM | Model instance. | required |
| data | ResponseData | Observed trials. | required |
| init_params | dict \| None | Initial parameter PyTree to start optimization from. If provided, this takes precedence over the seed. | None |
| seed | int \| None | PRNG seed used to draw initial parameters from the model's prior when init_params is not provided, and as the base key for the MC likelihood random stream during optimization. If None, defaults to 0. | None |

Returns:

| Type | Description |
|---|---|
| MAPPosterior | Posterior wrapper around MAP params and model. |

Source code in src/psyphy/inference/map_optimizer.py
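The optimizer's loss and gradient-clipping behavior can be mimicked on a toy one-parameter problem. This numpy sketch is illustrative only: psyphy computes gradients with jax.grad and applies Optax updates, whereas here the gradient is finite-differenced and the update is plain SGD:

```python
import numpy as np

def neg_log_posterior(w, responses):
    """Toy loss mirroring MAPOptimizer's objective: negative Bernoulli
    log-likelihood plus a standard-normal log-prior penalty."""
    p = np.clip(1.0 / (1.0 + np.exp(-w)), 1e-6, 1 - 1e-6)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p)) + 0.5 * w ** 2

def grad(w, responses, eps=1e-6):
    # Central finite difference as a stand-in for jax.grad.
    return (neg_log_posterior(w + eps, responses) - neg_log_posterior(w - eps, responses)) / (2 * eps)

responses = np.array([1, 1, 1, 0])
w, lr, max_grad_norm = 0.0, 0.1, 1.0
losses = []
for step in range(200):
    g = grad(w, responses)
    norm = abs(g)
    if norm > max_grad_norm:        # global-norm clipping, as max_grad_norm does
        g *= max_grad_norm / norm
    w -= lr * g                     # gradient step (psyphy's default is SGD with momentum)
    losses.append(neg_log_posterior(w, responses))
```

With track_history enabled, get_history() returns exactly this kind of (steps, losses) record for plotting convergence.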
get_history

Return (steps, losses) recorded during the last fit when tracking was enabled.
OddityTask

`OddityTask(config: OddityTaskConfig | None = None)`

Bases: TaskLikelihood

Three-alternative forced-choice oddity task (MC-based only).

Implements the full 3-stimulus oddity task using Monte Carlo simulation:

- Samples three internal representations per trial (z0, z1, z2)
- Uses the proper oddity decision rule with three pairwise distances
- Suitable for complex covariance structures

Notes

MC simulation in loglik() (full 3-stimulus oddity):

1. Sample three internal representations: z_ref, z_refprime ~ N(ref, Σ_ref), z_comparison ~ N(comparison, Σ_comparison)
2. Compute the average covariance: Σ_avg = (2/3) Σ_ref + (1/3) Σ_comparison
3. Compute three pairwise Mahalanobis distances:
   - d²(z_ref, z_refprime): distance between the two reference samples
   - d²(z_ref, z_comparison): distance from ref to comparison
   - d²(z_refprime, z_comparison): distance from reference_prime to comparison
4. Apply the oddity decision rule: delta = min(d²(z_ref, z_comparison), d²(z_refprime, z_comparison)) - d²(z_ref, z_refprime)
5. Logistic smoothing: P(correct) ≈ logistic.cdf(delta / bandwidth)
6. Average over samples
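The six steps above can be sketched directly in numpy (a stand-in for the package's JAX implementation; the bandwidth and sample count used here are arbitrary, not the package defaults):

```python
import numpy as np

def oddity_p_correct(ref, comparison, cov_ref, cov_cmp, bandwidth=0.1,
                     num_samples=2000, seed=0):
    """MC sketch of the six-step oddity rule described above."""
    rng = np.random.default_rng(seed)
    # Step 1: three internal representations per trial.
    z_ref = rng.multivariate_normal(ref, cov_ref, size=num_samples)
    z_refprime = rng.multivariate_normal(ref, cov_ref, size=num_samples)
    z_cmp = rng.multivariate_normal(comparison, cov_cmp, size=num_samples)

    cov_avg = (2.0 / 3.0) * cov_ref + (1.0 / 3.0) * cov_cmp   # step 2
    prec = np.linalg.inv(cov_avg)
    d2 = lambda a, b: np.einsum("ni,ij,nj->n", a - b, prec, a - b)  # squared Mahalanobis

    d_rr = d2(z_ref, z_refprime)                                # step 3
    d_rc = d2(z_ref, z_cmp)
    d_pc = d2(z_refprime, z_cmp)
    delta = np.minimum(d_rc, d_pc) - d_rr                       # step 4
    p = 1.0 / (1.0 + np.exp(-delta / bandwidth))                # step 5: logistic smoothing
    return float(np.mean(p))                                    # step 6: average over samples

I2 = np.eye(2)
p_far = oddity_p_correct([0.0, 0.0], [3.0, 0.0], I2, I2)   # well-separated stimuli
p_same = oddity_p_correct([0.0, 0.0], [0.0, 0.0], I2, I2)  # identical stimuli: chance ≈ 1/3
```

When the comparison equals the reference, all three samples are exchangeable and P(correct) sits near the 1/3 chance level; separated stimuli push it toward 1.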
Methods:

| Name | Description |
|---|---|
| loglik | Compute Bernoulli log-likelihood over a batch of trials. |
| predict | Return p(correct) for a single (ref, comparison) trial via MC simulation. |
| simulate | Simulate observed binary responses for a batch of trials. |

Attributes:

| Name | Type | Description |
|---|---|---|
| config | | |

Source code in src/psyphy/model/likelihood.py
loglik

Compute Bernoulli log-likelihood over a batch of trials.

This is a concrete base-class method: it vmaps predict over trials and then applies the Bernoulli log-likelihood formula. Subclasses only need to implement predict.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | Any | Model parameters. | required |
| data | Any | Object with refs/comparisons/responses arrays. | required |
| model | Any | Model instance. | required |
| key | KeyArray | PRNG key. Passed as independent per-trial subkeys to predict. | None |

Returns:

| Type | Description |
|---|---|
| ndarray | Scalar sum of Bernoulli log-likelihoods over all trials. |

Source code in src/psyphy/model/likelihood.py
predict

Return p(correct) for a single (ref, comparison) trial via MC simulation.

MC controls (num_samples, bandwidth) are read from OddityTaskConfig. Pass key to control randomness; when None, config.default_key_seed is used.

Source code in src/psyphy/model/likelihood.py
simulate

`simulate(params: Any, refs: ndarray, comparisons: ndarray, model: Any, *, key: Any) -> tuple[ndarray, ndarray]`

Simulate observed binary responses for a batch of trials.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | Any | Model parameters. | required |
| refs | ndarray, shape (n_trials, input_dim) | Reference stimuli. | required |
| comparisons | ndarray, shape (n_trials, input_dim) | Comparison stimuli. | required |
| model | Any | Model instance. | required |
| key | KeyArray | PRNG key (required; split internally for prediction and sampling). | required |

Returns:

| Name | Type | Description |
|---|---|---|
| responses | jnp.ndarray, shape (n_trials,), dtype int32 | Simulated binary responses (1 = correct, 0 = incorrect). |
| p_correct | ndarray, shape (n_trials,) | Estimated P(correct) per trial used to draw the responses. |

Source code in src/psyphy/model/likelihood.py
OddityTaskConfig

Configuration for OddityTask.

This is the single source of truth for MC likelihood controls.

Attributes:

| Name | Type | Description |
|---|---|---|
| num_samples | int | Number of Monte Carlo samples per trial. |
| bandwidth | float | Logistic CDF smoothing bandwidth. |
| default_key_seed | int | Seed used when no key is provided (keeps behavior deterministic by default while allowing reproducibility control upstream). |
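A minimal sketch of such a config object as a frozen dataclass (the field names come from the table above; the default values here are placeholders, not the package's actual defaults):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OddityTaskConfigSketch:
    """Illustrative mirror of the documented MC likelihood controls."""
    num_samples: int = 2000      # Monte Carlo samples per trial
    bandwidth: float = 0.1       # logistic CDF smoothing bandwidth
    default_key_seed: int = 0    # seed used when no PRNG key is passed

cfg = OddityTaskConfigSketch(num_samples=500)
```

Keeping these knobs in one frozen object is what makes the config a "single source of truth": the task reads them rather than accepting ad hoc keyword arguments.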
Prior

`Prior(input_dim: int = 2, basis_degree: int = 4, variance_scale: float = 0.004, decay_rate: float = 0.4, extra_embedding_dims: int = 1)`

Prior distribution over WPPM parameters.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input_dim | int | Dimensionality of the model space (same as WPPM.input_dim). | 2 |
| basis_degree | int \| None | Degree of Chebyshev basis for Wishart process. If set, uses Wishart mode with W coefficients. | None |
| variance_scale | float | Prior variance for the degree-0 (constant) coefficient in Wishart mode. Controls the overall scale of covariances. | 1.0 |
| decay_rate | float | Geometric decay rate for prior variance over higher-degree coefficients. Prior variance for a degree-d coefficient = variance_scale * decay_rate^d. Smaller decay_rate -> stronger smoothness prior. | 0.5 |
| extra_embedding_dims | int | Additional latent dimensions in U matrices beyond input dimensions. Allows richer ellipsoid shapes in Wishart mode. | 0 |

Methods:

| Name | Description |
|---|---|
| log_prob | Compute log prior density (up to a constant). |
| sample_params | Sample initial parameters from the prior. |

Attributes:

| Name | Type | Description |
|---|---|---|
| basis_degree | int | |
| decay_rate | float | |
| extra_embedding_dims | int | |
| input_dim | int | |
| variance_scale | float | |
log_prob

`log_prob(params: Params) -> ndarray`

Compute log prior density (up to a constant).

Gaussian prior on W with smoothness via decay_rate: log p(W) = Σ_ij log N(W_ij | 0, σ_ij²), where σ_ij² is the prior variance.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Parameter dictionary. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| log_prob | float | Log prior probability (up to a normalizing constant). |

Source code in src/psyphy/model/prior.py
sample_params

Sample initial parameters from the prior.

Returns {"W": shape (degree+1, degree+1, input_dim, embedding_dim)} for 2D, where embedding_dim = input_dim + extra_embedding_dims.

Note: the 3rd dimension is input_dim (the output-space dimension). This matches the einsum in _compute_sqrt: U = einsum("ijde,ij->de", W, phi), where d indexes input_dim.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| key | JAX random key | | required |

Returns:

| Name | Type | Description |
|---|---|---|
| params | dict | Parameter dictionary. |

Source code in src/psyphy/model/prior.py
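The documented shape and geometric-decay structure can be sketched in numpy. One assumption is made explicit: that the degree of coefficient W[i, j] is i + j (the actual index-to-degree convention may differ):

```python
import numpy as np

def sample_w(seed, input_dim=2, basis_degree=4, variance_scale=0.004,
             decay_rate=0.4, extra_embedding_dims=1):
    """Sketch of the documented prior: W has shape
    (degree+1, degree+1, input_dim, embedding_dim), with the variance of
    each coefficient decaying geometrically in its degree d:
    sigma2 = variance_scale * decay_rate**d."""
    rng = np.random.default_rng(seed)
    embedding_dim = input_dim + extra_embedding_dims
    idx = np.arange(basis_degree + 1)
    degree = idx[:, None] + idx[None, :]            # assumed degree of W[i, j]
    sigma2 = variance_scale * decay_rate ** degree  # per-coefficient prior variance
    W = rng.standard_normal((basis_degree + 1, basis_degree + 1,
                             input_dim, embedding_dim))
    return W * np.sqrt(sigma2)[:, :, None, None]

W = sample_w(0)
```

Smaller decay_rate shrinks high-degree coefficients harder, which is why it acts as a smoothness prior on the covariance field.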
ResponseData

Python-friendly incremental trial log.

This container is convenient for adaptive trial placement and I/O (e.g., CSV), but it is not a compute-efficient representation for JAX. Use TrialData for model fitting and likelihood evaluation.

Methods:

| Name | Description |
|---|---|
| add_batch | Append responses for a batch of trials. |
| add_trial | Append a single trial. |
| copy | Create a deep copy of this dataset. |
| from_arrays | Construct ResponseData from arrays. |
| from_trial_data | Build a ResponseData log from a TrialData batch. |
| merge | Merge another dataset into this one (in-place). |
| tail | Return the last n trials as a new ResponseData. |
| to_numpy | Return refs, comparisons, responses as NumPy arrays. |
| to_trial_data | Convert this log into the canonical JAX batch (TrialData). |

Attributes:

| Name | Type | Description |
|---|---|---|
| comparisons | list[Any] | |
| refs | list[Any] | |
| responses | list[int] | |
| trials | list[tuple[Any, Any, int]] | Return list of (ref, comparison, response) tuples. |

Source code in src/psyphy/data/dataset.py
trials
add_batch

`add_batch(responses: list[int], trial_batch: TrialBatch) -> None`

Append responses for a batch of trials.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| responses | List[int] | Responses corresponding to each (ref, comparison) in the trial batch. | required |
| trial_batch | TrialBatch | The batch of proposed trials. | required |

Source code in src/psyphy/data/dataset.py
add_trial

Append a single trial.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| ref | Any | Reference stimulus (numpy array, list, etc.). | required |
| comparison | Any | Probe stimulus. | required |
| resp | int | Subject response (binary or categorical). | required |

Source code in src/psyphy/data/dataset.py
copy

`copy() -> ResponseData`

Create a deep copy of this dataset.

Returns:

| Type | Description |
|---|---|
| ResponseData | New dataset with copied data. |

Source code in src/psyphy/data/dataset.py
from_arrays

`from_arrays(X: ndarray, y: ndarray, *, comparisons: ndarray | None = None) -> ResponseData`

Construct ResponseData from arrays.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| X | array, shape (n_trials, 2, input_dim) or (n_trials, input_dim) | Stimuli. If 3D, the second axis is [reference, comparison]. If 2D, comparisons must be provided separately. | required |
| y | array, shape (n_trials,) | Responses. | required |
| comparisons | array, shape (n_trials, input_dim) | Probe stimuli. Only needed if X is 2D. | None |

Returns:

| Type | Description |
|---|---|
| ResponseData | Data container. |

Source code in src/psyphy/data/dataset.py
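The documented shape convention for the stacked 3D form can be illustrated with plain numpy (this shows only the slicing that the 3D path implies, not psyphy's internal code):

```python
import numpy as np

# Stacked form: for each trial, X[t, 0] is the reference and X[t, 1] the comparison.
X = np.array([[[0.0, 0.0], [0.1, 0.0]],
              [[0.0, 0.0], [0.0, 0.2]]])   # shape (n_trials, 2, input_dim)
y = np.array([1, 0])                        # one response per trial

refs, comparisons = X[:, 0], X[:, 1]        # what the 3D path recovers
```

Equivalently, the 2D form passes refs as X and supplies `comparisons=` separately; both end up as parallel (n_trials, input_dim) arrays.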
from_trial_data

`from_trial_data(data: TrialData) -> ResponseData`

Build a ResponseData log from a TrialData batch.

Source code in src/psyphy/data/dataset.py
merge

`merge(other: ResponseData) -> None`

Merge another dataset into this one (in-place).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| other | ResponseData | Dataset to merge. | required |

Source code in src/psyphy/data/dataset.py
tail

`tail(n: int) -> ResponseData`

Return the last n trials as a new ResponseData.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| n | int | Number of trials to keep. | required |

Returns:

| Type | Description |
|---|---|
| ResponseData | New dataset with the last n trials. |

Source code in src/psyphy/data/dataset.py
to_numpy

`to_numpy() -> tuple[ndarray, ndarray, ndarray]`

Return refs, comparisons, responses as NumPy arrays.
StudentTNoise
TrialBatch

Container for a proposed batch of trials.

Attributes:

| Name | Type | Description |
|---|---|---|
| stimuli | List[Tuple[Any, Any]] | Each trial is a (reference, comparison) tuple. |

Methods:

| Name | Description |
|---|---|
| from_stimuli | Construct a TrialBatch from a list of (ref, comparison) stimulus pairs. |

Source code in src/psyphy/data/dataset.py
WPPM

`WPPM(prior: Prior, likelihood: TaskLikelihood, noise: Any | None = None, *, input_dim: int = 2, extra_dims: int = 1, variance_scale: float = 0.004, decay_rate: float = 0.4, diag_term: float = 1e-06, **model_kwargs: Any)`

Bases: Model

Wishart Process Psychophysical Model (WPPM).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input_dim | int | Dimensionality of the input stimulus space (e.g., 2 for the isoluminant plane, 3 for RGB). Both reference and comparison live in R^{input_dim}. | 2 |
| prior | Prior | Prior distribution over model parameters. Controls basis_degree in WPPM (basis expansion). The WPPM delegates to prior.basis_degree to ensure consistency between parameter sampling and basis evaluation. | required |
| likelihood | TaskLikelihood | Psychophysical task mapping that defines how discriminability translates to p(correct) and how the log-likelihood of responses is computed (e.g., OddityTask). | required |
| noise | Any | Noise model describing internal representation noise (e.g., GaussianNoise). | None |

Hyperparameters:

- extra_dims (int, default=0): Additional embedding dimensions for basis expansions (beyond input_dim); embedding_dim = input_dim + extra_dims.
- variance_scale (float, default=1.0): Global scaling factor for covariance magnitude.
- decay_rate (float, default=1.0): Smoothness/length-scale for spatial covariance variation.
- diag_term (float, default=1e-6): Small positive value added to the covariance diagonal for numerical stability.
- model_kwargs (Any): Reserved for future keyword arguments accepted by the base Model.__init__. Do not pass WPPM math knobs or task/likelihood knobs here.

Methods:

| Name | Description |
|---|---|
| init_params | Sample initial parameters from the prior. |
| local_covariance | Return local covariance Σ(x) at stimulus location x. |
| log_likelihood_from_data | Compute log-likelihood directly from a batched data object. |
| log_posterior_from_data | Compute log posterior from data. |
| predict_prob | Predict probability of a correct response for a single stimulus. |

Attributes:

| Name | Type | Description |
|---|---|---|
| basis_degree | int \| None | Chebyshev polynomial degree for Wishart process basis expansion. |
| decay_rate | | |
| diag_term | | |
| embedding_dim | int | Dimension of the embedding space. |
| extra_dims | | |
| input_dim | | |
| likelihood | | |
| noise | | |
| prior | | |
| variance_scale | | |

Source code in src/psyphy/model/wppm.py
basis_degree

`basis_degree: int | None`

Chebyshev polynomial degree for Wishart process basis expansion.

This property delegates to self.prior.basis_degree to ensure consistency between parameter sampling and basis evaluation.

Returns:

| Type | Description |
|---|---|
| int \| None | Degree of the Chebyshev polynomial basis (0 = constant, 1 = linear, etc.). |

Notes

WPPM gets its basis_degree parameter from Prior.basis_degree.
embedding_dim

`embedding_dim: int`

Dimension of the embedding space.

embedding_dim = input_dim + extra_dims. This represents the full perceptual space: the first input_dim dimensions correspond to observable stimulus features, and the remaining extra_dims are latent dimensions.

Returns:

| Type | Description |
|---|---|
| int | input_dim + extra_dims |

Notes

This is a computed property, not a constructor parameter.
init_params

`init_params(key: Array) -> Params`
local_covariance

`local_covariance(params: Params, x: ndarray) -> ndarray`

Return local covariance Σ(x) at stimulus location x.

Wishart mode (basis_degree set): Σ(x) = U(x) @ U(x)^T + diag_term * I, where U(x) is rectangular (input_dim, embedding_dim) if extra_dims > 0.

- Varies smoothly with x
- Guaranteed positive-definite
- Returns the stimulus covariance directly (input_dim, input_dim)

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Model parameters; WPPM: {"W": (degree+1, ..., input_dim, embedding_dim)}. | required |
| x | ndarray, shape (input_dim,) | Stimulus location. | required |

Returns:

| Type | Description |
|---|---|
| Σ : jnp.ndarray, shape (input_dim, input_dim) | Covariance matrix in stimulus space. |

Source code in src/psyphy/model/wppm.py
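The documented construction can be sketched in numpy. The einsum matches the one documented under sample_params; the outer-product Chebyshev basis phi(x) is an assumption consistent with the shapes given there, not a confirmed detail of the implementation:

```python
import numpy as np

def local_covariance(W, x, diag_term=1e-6):
    """Sketch: U(x) = einsum("ijde,ij->de", W, phi(x)),
    Σ(x) = U(x) @ U(x)^T + diag_term * I."""
    degree = W.shape[0] - 1
    # Chebyshev polynomials T_0..T_degree at a scalar t in [-1, 1],
    # via T_k(t) = cos(k * arccos(t)).
    cheb = lambda t: np.array([np.cos(k * np.arccos(np.clip(t, -1.0, 1.0)))
                               for k in range(degree + 1)])
    phi = np.outer(cheb(x[0]), cheb(x[1]))     # (degree+1, degree+1), assumed 2D tensor basis
    U = np.einsum("ijde,ij->de", W, phi)       # (input_dim, embedding_dim)
    return U @ U.T + diag_term * np.eye(U.shape[0])

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 5, 2, 3)) * 0.1    # degree 4, input_dim 2, embedding_dim 3
Sigma = local_covariance(W, np.array([0.3, -0.2]))
```

Because Σ(x) is U U^T plus a positive diagonal, it is symmetric positive-definite by construction, and it varies smoothly with x through the polynomial basis.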
log_likelihood_from_data

Compute log-likelihood directly from a batched data object.

Why delegate to the likelihood?

- The likelihood knows the decision rule (oddity, 2AFC, ...).
- The likelihood can use the model (this WPPM) to fetch discriminabilities.
- The likelihood can use the noise model if it needs MC simulation.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Model parameters. | required |
| data | TrialData (or any object with refs/comparisons/responses arrays) | Collected trial data. | required |
| key | Array \| None | JAX random key for MC likelihood evaluation. When provided, a fresh noise realization is drawn on every call, which is required for correct stochastic gradient estimates during optimization. When None, the task falls back to its deterministic default seed. | None |

Returns:

| Name | Type | Description |
|---|---|---|
| loglik | ndarray | Scalar log-likelihood (task-only; add the prior outside if needed). |

Source code in src/psyphy/model/wppm.py
log_posterior_from_data

Compute log posterior from data.

This simply adds the prior log-probability to the task log-likelihood. Inference engines (e.g., the MAP optimizer) typically optimize this quantity.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Model parameters. | required |
| data | TrialData | Collected trial data. | required |
| key | Array \| None | JAX random key for the MC likelihood. Must be provided during optimization so each gradient step uses a fresh noise realization. When None, falls back to the task's deterministic default seed. | None |

Returns:

| Type | Description |
|---|---|
| ndarray | Scalar log posterior = loglik(params \| data) + log_prior(params). |

Source code in src/psyphy/model/wppm.py
predict_prob

Predict probability of a correct response for a single stimulus.

Design choice: WPPM computes discriminability and covariance; the likelihood defines how that translates to performance. We therefore delegate to likelihood.predict(params, stimulus, model=self, noise=self.noise).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | | required |
| stimulus | tuple[ndarray, ndarray] | (reference, comparison) pair in model space. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| p_correct | ndarray | |