Model¶
psyphy.model¶
Model-layer API: everything model-related in one place.
Includes
- WPPM (core model)
- Priors (Prior)
- Tasks (TaskLikelihood base, OddityTask)
- Noise models (GaussianNoise, StudentTNoise)
All functions/classes use JAX arrays (jax.numpy as jnp) for autodiff and optimization with Optax.
Classes:

| Name | Description |
|---|---|
| GaussianNoise | Gaussian noise model. |
| Model | Abstract base class for psychophysical models. |
| OddityTask | Three-alternative forced-choice oddity task (MC-based only). |
| OnlineConfig | Configuration for online learning and memory management. |
| Prior | Prior distribution over WPPM parameters. |
| StudentTNoise | Student-t noise model. |
| TaskLikelihood | Abstract base class for task likelihoods. |
| WPPM | Wishart Process Psychophysical Model (WPPM). |
GaussianNoise
¶
GaussianNoise(sigma: float = 1.0)
Model
¶
Model(*, online_config: OnlineConfig | None = None)
Bases: ABC
Abstract base class for psychophysical models.
Provides an API that mimics the BoTorch style:

- fit(X, y) --> train the model
- posterior(X) --> get predictions
- condition_on_observations(X, y) --> online updates

Subclasses must implement:

- init_params(key) --> sample initial parameters
- log_likelihood_from_data(params, data) --> compute the likelihood
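The fit / posterior / condition_on_observations contract can be illustrated with a minimal, hypothetical skeleton. This is plain Python, not the actual psyphy implementation (which adds inference engines, posterior caching, and online buffering on top of this shape):

```python
# Hypothetical sketch of the BoTorch-style Model interface described above.
from abc import ABC, abstractmethod


class SketchModel(ABC):
    """Minimal fit / posterior / condition_on_observations skeleton."""

    def __init__(self):
        self._X, self._y = None, None

    @abstractmethod
    def init_params(self, key):
        """Sample initial parameters from the prior."""

    @abstractmethod
    def log_likelihood_from_data(self, params, data):
        """Compute log p(data | params)."""

    def fit(self, X, y):
        # Store data; the real model would also run an inference engine here.
        self._X, self._y = list(X), list(y)
        return self  # returns self for method chaining

    def condition_on_observations(self, X, y):
        # Immutable update: return a NEW instance with the appended data.
        new = type(self)()
        new._X = (self._X or []) + list(X)
        new._y = (self._y or []) + list(y)
        return new
```

The immutable-update pattern (returning a new instance rather than mutating) is what allows any subset of fitted models to be kept around safely during an adaptive experiment.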
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| online_config | OnlineConfig \| None | Configuration for online learning. If None, uses the default (unbounded memory). | None |
Attributes:

| Name | Type | Description |
|---|---|---|
| _posterior | ParameterPosterior \| None | Cached parameter posterior from the last fit |
| _inference_engine | InferenceEngine \| None | Cached inference engine for warm-start refitting |
| _data_buffer | ResponseData \| None | Data buffer managed according to online_config |
| _n_updates | int | Number of condition_on_observations calls |
| online_config | OnlineConfig | Online learning configuration |
Initialize the model.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| online_config | OnlineConfig \| None | Online learning configuration. If None, uses default settings. | None |
Methods:

| Name | Description |
|---|---|
| condition_on_observations | Update model with new observations (online learning). |
| fit | Fit model to data. |
| init_params | Sample initial parameters from prior. |
| log_likelihood_from_data | Compute log p(data \| params). |
| posterior | Return posterior distribution. |
| predict_with_params | Evaluate model at specific parameter values (no marginalization). |
Source code in src/psyphy/model/base.py
condition_on_observations
¶
condition_on_observations(X: ndarray, y: ndarray) -> Model
Update model with new observations (online learning).
Behavior depends on self.online_config.strategy:

- "full": accumulate all data, refit periodically
- "sliding_window": keep only the most recent window_size trials
- "reservoir": random sampling of window_size trials
- "none": refit from scratch (no caching)
Returns a NEW model instance (immutable update).
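The sliding-window and reservoir retention strategies can be sketched in plain Python. This is a minimal sketch over a flat list of trials; in psyphy the buffer is a ResponseData object driven by OnlineConfig:

```python
# Illustrative sketches of the "sliding_window" and "reservoir" strategies.
import random


def sliding_window_update(buffer, new_trials, window_size):
    """Keep only the most recent window_size trials (FIFO)."""
    return (buffer + new_trials)[-window_size:]


def reservoir_update(buffer, new_trials, window_size, n_seen, rng):
    """Classic reservoir sampling: the n-th trial replaces a random slot
    with probability window_size / n, giving uniform coverage of history."""
    buffer = list(buffer)
    for trial in new_trials:
        n_seen += 1
        if len(buffer) < window_size:
            buffer.append(trial)
        else:
            j = rng.randrange(n_seen)
            if j < window_size:
                buffer[j] = trial
    return buffer, n_seen
```

Sliding windows favor recency (good under drift), while reservoir sampling keeps a uniform subsample of the whole session.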
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| X | ndarray | New stimuli | required |
| y | ndarray | New responses | required |

Returns:

| Type | Description |
|---|---|
| Model | Updated model (new instance) |
Source code in src/psyphy/model/base.py
fit
¶
fit(
X: ndarray,
y: ndarray,
*,
inference: InferenceEngine | str = "laplace",
inference_config: dict | None = None,
) -> Model
Fit model to data.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| X | ndarray | Stimuli, shape (n_trials, 2, input_dim) for (ref, probe) pairs or (n_trials, input_dim) for references only | required |
| y | ndarray | Responses, shape (n_trials,) | required |
| inference | InferenceEngine \| str | Inference engine or string key ("map", "laplace", "langevin") | "laplace" |
| inference_config | dict \| None | Hyperparameters for string-based inference, e.g. {"steps": 500, "lr": 1e-3} for MAP | None |

Returns:

| Type | Description |
|---|---|
| Model | Self, for method chaining |
Source code in src/psyphy/model/base.py
init_params
¶
Sample initial parameters from prior.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| key | KeyArray | PRNG key | required |

Returns:

| Type | Description |
|---|---|
| dict | Parameter PyTree |
Source code in src/psyphy/model/base.py
log_likelihood_from_data
¶
log_likelihood_from_data(
params: dict, data: ResponseData
) -> ndarray
Compute log p(data | params).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Model parameters | required |
| data | ResponseData | Observed trials | required |

Returns:

| Type | Description |
|---|---|
| ndarray | Log-likelihood (scalar) |
Source code in src/psyphy/model/base.py
posterior
¶
posterior(
X: ndarray | None = None,
*,
probes: ndarray | None = None,
kind: str = "predictive",
) -> PredictivePosterior | ParameterPosterior
Return posterior distribution.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| X | ndarray \| None | Test stimuli (references), shape (n_test, input_dim). Required for predictive posteriors, optional for parameter posteriors. | None |
| probes | ndarray \| None | Test probes, shape (n_test, input_dim). Required for predictive posteriors. | None |
| kind | ('predictive', 'parameter') | Type of posterior to return: "predictive" = PredictivePosterior over f(X*) (for acquisitions); "parameter" = ParameterPosterior over θ (for diagnostics). | "predictive" |

Returns:

| Type | Description |
|---|---|
| PredictivePosterior \| ParameterPosterior | Posterior distribution |

Raises:

| Type | Description |
|---|---|
| RuntimeError | If the model has not been fit yet |
Source code in src/psyphy/model/base.py
predict_with_params
¶
Evaluate model at specific parameter values (no marginalization).
This is useful for:

- Threshold uncertainty estimation (evaluate at sampled parameters)
- Parameter sensitivity analysis
- Debugging and diagnostics
NOT for making predictions (use .posterior() instead, which marginalizes over parameter uncertainty).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| X | ndarray, shape (n_test, input_dim) | Test stimuli (references) | required |
| probes | ndarray, shape (n_test, input_dim) | Probe stimuli (for discrimination tasks) | required |
| params | dict[str, ndarray] | Specific parameter values to evaluate at. Keys and shapes depend on the model (e.g., WPPM has "W", "noise_scale", etc.) | required |

Returns:

| Name | Type | Description |
|---|---|---|
| predictions | ndarray, shape (n_test,) | Predicted probabilities at each test point, given these parameters |
Notes
This bypasses the posterior marginalization. For acquisition functions, always use .posterior() which properly accounts for parameter uncertainty.
Source code in src/psyphy/model/base.py
OddityTask
¶
OddityTask(config: OddityTaskConfig | None = None)
Bases: TaskLikelihood
Three-alternative forced-choice oddity task (MC-based only).
Implements the full 3-stimulus oddity task using Monte Carlo simulation:

- Samples three internal representations per trial (z0, z1, z2)
- Uses the proper oddity decision rule with three pairwise distances
- Suitable for complex covariance structures
Notes
MC simulation in loglik() (full 3-stimulus oddity):

1. Sample three internal representations: z_ref, z_refprime ~ N(ref, Σ_ref), z_comparison ~ N(comparison, Σ_comparison)
2. Compute the average covariance: Σ_avg = (2/3) Σ_ref + (1/3) Σ_comparison
3. Compute three pairwise Mahalanobis distances:
   - d^2(z_ref, z_refprime): distance between the two reference samples
   - d^2(z_ref, z_comparison): distance from ref to comparison
   - d^2(z_refprime, z_comparison): distance from reference_prime to comparison
4. Apply the oddity decision rule: delta = min(d^2(z_ref, z_comparison), d^2(z_refprime, z_comparison)) - d^2(z_ref, z_refprime)
5. Logistic smoothing: P(correct) ≈ logistic.cdf(delta / bandwidth)
6. Average over samples
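The steps above can be sketched with NumPy. For simplicity this sketch assumes isotropic Gaussian noise, so Σ_ref = Σ_comparison = σ²I and the Mahalanobis distances reduce to scaled Euclidean ones; the real OddityTask uses the model's local covariances and the Σ_avg weighting:

```python
# Illustrative Monte Carlo oddity rule with isotropic noise (assumption).
import numpy as np


def oddity_p_correct(ref, comparison, sigma, num_samples=2000, bandwidth=0.1, seed=0):
    rng = np.random.default_rng(seed)
    d = ref.shape[0]
    # Step 1: sample internal representations for ref, ref', and comparison.
    z_ref = ref + sigma * rng.standard_normal((num_samples, d))
    z_refp = ref + sigma * rng.standard_normal((num_samples, d))
    z_cmp = comparison + sigma * rng.standard_normal((num_samples, d))
    # Step 3: pairwise squared distances (Mahalanobis reduces to Euclidean
    # up to a constant factor because the covariance is isotropic here).
    d_rr = ((z_ref - z_refp) ** 2).sum(-1)
    d_rc = ((z_ref - z_cmp) ** 2).sum(-1)
    d_pc = ((z_refp - z_cmp) ** 2).sum(-1)
    # Step 4: the comparison is correctly picked as the odd one out when it
    # is farther from both reference samples than they are from each other.
    delta = np.minimum(d_rc, d_pc) - d_rr
    # Steps 5-6: logistic smoothing, averaged over MC samples.
    return float(np.mean(1.0 / (1.0 + np.exp(-delta / bandwidth))))
```

As the comparison moves away from the reference, delta grows and P(correct) approaches 1; for comparison == ref it hovers near chance.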
Methods:

| Name | Description |
|---|---|
| loglik | Compute log-likelihood of observed responses under this task. |
| predict | Predict p(correct) for a single (ref, comparison) stimulus. |

Attributes:

| Name | Type | Description |
|---|---|---|
| config | | |
Source code in src/psyphy/model/task.py
loglik
¶
Compute log-likelihood of observed responses under this task.

Notes

- Can be JIT-compiled for additional speed (future optimization)
Source code in src/psyphy/model/task.py
predict
¶
Predict p(correct) for a single (ref, comparison) stimulus.
Even though OddityTask is MC-only, we still implement predict.
Reason: large parts of the library (posterior predictive, acquisition
functions, diagnostics, etc.) need a forward model that returns
p(correct) at candidate stimuli. Historically this used an analytical
approximation, but in MC-only mode we compute it via simulation.
Notes

- This method is intentionally lightweight: it performs the same single-trial Monte Carlo simulation used by loglik.
- If you need to control MC fidelity/smoothing, set OddityTaskConfig(num_samples=..., bandwidth=...) when you construct the task.
- If you need reproducible randomness, pass key=... to loglik.
Source code in src/psyphy/model/task.py
OnlineConfig
¶
OnlineConfig(
strategy: Literal[
"full", "sliding_window", "reservoir", "none"
] = "full",
window_size: int | None = None,
refit_interval: int = 1,
warm_start: bool = True,
)
Configuration for online learning and memory management.
Attributes:

| Name | Type | Description |
|---|---|---|
| strategy | {'full', 'sliding_window', 'reservoir', 'none'} | Data retention strategy: "full" keeps all data (unbounded memory); "sliding_window" keeps only the last N trials (FIFO); "reservoir" uses reservoir sampling for uniform coverage; "none" disables caching and refits from scratch each time. |
| window_size | int \| None | Maximum number of trials to retain. Required for the sliding_window and reservoir strategies. |
| refit_interval | int | Refit the model every N updates (1 = always, 10 = batch every 10 trials). Trades off accuracy vs. computational cost. |
| warm_start | bool | If True, initialize refitting from cached parameters. Speeds up convergence for small updates. |
Prior
¶
Prior(
input_dim: int,
basis_degree: int | None = None,
variance_scale: float = 1.0,
decay_rate: float = 0.5,
extra_embedding_dims: int = 0,
)
Prior distribution over WPPM parameters
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input_dim | int | Dimensionality of the model space (same as WPPM.input_dim) | required |
| basis_degree | int \| None | Degree of the Chebyshev basis for the Wishart process. If set, uses Wishart mode with W coefficients. | None |
| variance_scale | float | Prior variance for the degree-0 (constant) coefficient in Wishart mode. Controls the overall scale of covariances. | 1.0 |
| decay_rate | float | Geometric decay rate for the prior variance over higher-degree coefficients: the degree-d coefficient has prior variance variance_scale * (decay_rate^d). Smaller decay_rate -> stronger smoothness prior. | 0.5 |
| extra_embedding_dims | int | Additional latent dimensions in the U matrices beyond the input dimensions. Allows richer ellipsoid shapes in Wishart mode. | 0 |
Methods:

| Name | Description |
|---|---|
| log_prob | Compute log prior density (up to a constant). |
| sample_params | Sample initial parameters from the prior. |

Attributes:

| Name | Type | Description |
|---|---|---|
| basis_degree | int \| None | |
| decay_rate | float | |
| extra_embedding_dims | int | |
| input_dim | int | |
| variance_scale | float | |
log_prob
¶
log_prob(params: Params) -> ndarray
Compute log prior density (up to a constant)
Gaussian prior on W with smoothness via decay_rate:

    log p(W) = Σ_ij log N(W_ij | 0, σ_ij²)

where σ_ij² is the prior variance of coefficient W_ij.
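A minimal NumPy sketch of this decayed Gaussian log-prior. The per-coefficient degree is taken here as i + j (the sum of the two Chebyshev degree indices); that assignment, like the function itself, is an illustrative assumption rather than psyphy's exact rule:

```python
# Illustrative decayed Gaussian log-prior over Wishart coefficients W.
import numpy as np


def log_prior(W, variance_scale=1.0, decay_rate=0.5):
    """Sum of independent Gaussian log-densities with geometrically
    decaying variances over the basis degrees (illustrative degree rule)."""
    degree_i, degree_j = np.indices(W.shape[:2])
    var = variance_scale * decay_rate ** (degree_i + degree_j)
    # Broadcast the per-degree variances over trailing (input_dim, embedding_dim) axes.
    var = var.reshape(var.shape + (1,) * (W.ndim - 2))
    return float(np.sum(-0.5 * W**2 / var - 0.5 * np.log(2 * np.pi * var)))
```

Because higher-degree coefficients get smaller prior variance, large high-frequency terms are penalized more, which is exactly the smoothness prior described above.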
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Parameter dictionary | required |

Returns:

| Name | Type | Description |
|---|---|---|
| log_prob | float | Log prior probability (up to a normalizing constant) |
Source code in src/psyphy/model/prior.py
sample_params
¶
Sample initial parameters from the prior.
Returns {"W": shape (degree+1, degree+1, input_dim, embedding_dim)} for 2D, where embedding_dim = input_dim + extra_embedding_dims
Note: The 3rd dimension is input_dim (output space dimension). This matches the einsum in _compute_sqrt: U = einsum("ijde,ij->de", W, phi) where d indexes input_dim.
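The documented contraction can be reproduced in NumPy. The tensor-product Chebyshev basis phi[i, j] = T_i(x1) * T_j(x2) used here is an illustrative assumption about how phi is built; the einsum itself matches the docstring:

```python
# Sketch of U(x) = einsum("ijde,ij->de", W, phi) for a 2D stimulus space.
import numpy as np


def cheb_values(x, degree):
    """Chebyshev polynomials T_0..T_degree at scalar x, via the recurrence
    T_n(x) = 2x T_{n-1}(x) - T_{n-2}(x)."""
    t = [1.0, float(x)]
    for _ in range(2, degree + 1):
        t.append(2 * x * t[-1] - t[-2])
    return np.array(t[: degree + 1])


def U_at(W, x):
    """Contract coefficients W (degree+1, degree+1, input_dim, embedding_dim)
    against the 2D basis phi to get U(x) of shape (input_dim, embedding_dim)."""
    degree = W.shape[0] - 1
    phi = np.outer(cheb_values(x[0], degree), cheb_values(x[1], degree))
    return np.einsum("ijde,ij->de", W, phi)
```

With only the degree-0 coefficient nonzero, U(x) is constant in x, recovering a single fixed covariance field.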
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| key | JAX random key | | required |

Returns:

| Name | Type | Description |
|---|---|---|
| params | dict | Parameter dictionary |
Source code in src/psyphy/model/prior.py
StudentTNoise
¶
TaskLikelihood
¶
Bases: ABC
Abstract base class for task likelihoods
Methods:

| Name | Description |
|---|---|
| loglik | Compute log-likelihood of observed responses under this task. |
| predict | Predict probability of correct response for a stimulus. |
loglik
¶
Compute log-likelihood of observed responses under this task.
Why **kwargs?

- Different tasks may need different optional runtime controls.
- MC-based tasks may need parameters such as a PRNG key.
- In particular, OddityTask takes Monte Carlo controls (num_samples and bandwidth) exclusively from OddityTaskConfig to avoid silent mismatch bugs.
Notes
- Task implementations should document which kwargs they accept.
- Callers should not assume arbitrary kwargs are supported.
Source code in src/psyphy/model/task.py
predict
¶
WPPM
¶
WPPM(
input_dim: int,
prior: Prior,
task: TaskLikelihood,
noise: Any | None = None,
*,
online_config: OnlineConfig | None = None,
extra_dims: int = 0,
variance_scale: float = 1.0,
decay_rate: float = 1.0,
diag_term: float = 1e-06,
**model_kwargs: Any,
)
Bases: Model
Wishart Process Psychophysical Model (WPPM).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input_dim | int | Dimensionality of the input stimulus space (e.g., 2 for an isoluminant plane, 3 for RGB). Both reference and probe live in R^{input_dim}. | required |
| prior | Prior | Prior distribution over model parameters. Controls basis_degree in WPPM (basis expansion). The WPPM delegates to prior.basis_degree to ensure consistency between parameter sampling and basis evaluation. | required |
| task | TaskLikelihood | Psychophysical task mapping that defines how discriminability translates to p(correct) and how the log-likelihood of responses is computed (e.g., OddityTask). | required |
| noise | Any | Noise model describing internal representation noise (e.g., GaussianNoise). | None |
Forward-compatible hyperparameters
extra_dims : int, default=0
Additional embedding dimensions for basis expansions (beyond input_dim).
embedding_dim = input_dim + extra_dims.
variance_scale : float, default=1.0
Global scaling factor for covariance magnitude
decay_rate : float, default=1.0
Smoothness/length-scale for spatial covariance variation
diag_term : float, default=1e-6
Small positive value added to the covariance diagonal for numerical stability.
online_config : OnlineConfig | None, optional (keyword-only)
Base-model lifecycle / online-learning policy. This is the supported way
to configure buffering and refit scheduling via Model.condition_on_observations.
model_kwargs : Any
Reserved for future keyword arguments accepted by the base Model.__init__.
Do not pass WPPM math knobs or task/likelihood knobs here.
Methods:

| Name | Description |
|---|---|
| condition_on_observations | Update model with new observations (online learning). |
| discriminability | Compute scalar discriminability d >= 0 for a (reference, probe) pair. |
| fit | Fit model to data. |
| init_params | Sample initial parameters from the prior. |
| local_covariance | Return local covariance Σ(x) at stimulus location x. |
| log_likelihood | Compute the log-likelihood for arrays of trials. |
| log_likelihood_from_data | Compute log-likelihood directly from a ResponseData object. |
| log_posterior_from_data | Compute log posterior from data. |
| posterior | Return posterior distribution. |
| predict_prob | Predict probability of a correct response for a single stimulus. |
| predict_with_params | Evaluate model at specific parameter values (no marginalization). |

Attributes:

| Name | Type | Description |
|---|---|---|
| basis_degree | int \| None | Chebyshev polynomial degree for Wishart process basis expansion. |
| decay_rate | | |
| diag_term | | |
| embedding_dim | int | Dimension of the embedding space (perceptual space). |
| extra_dims | | |
| input_dim | | |
| noise | | |
| online_config | | |
| prior | | |
| task | | |
| variance_scale | | |
Source code in src/psyphy/model/wppm.py
basis_degree
¶
basis_degree: int | None
Chebyshev polynomial degree for Wishart process basis expansion.
This property delegates to self.prior.basis_degree to ensure consistency between parameter sampling and basis evaluation.
Returns:

| Type | Description |
|---|---|
| int \| None | Degree of the Chebyshev polynomial basis (0 = constant, 1 = linear, etc.) |
Notes
WPPM gets its basis_degree parameter from Prior.basis_degree.
embedding_dim
¶
embedding_dim: int
Dimension of the embedding space (perceptual space).
embedding_dim = input_dim + extra_dims. This represents the full perceptual space, where:

- the first input_dim dimensions correspond to observable stimulus features
- the remaining extra_dims are latent dimensions
Returns:

| Type | Description |
|---|---|
| int | input_dim + extra_dims |
Notes
This is a computed property, not a constructor parameter.
condition_on_observations
¶
condition_on_observations(X: ndarray, y: ndarray) -> Model
Update model with new observations (online learning).
Behavior depends on self.online_config.strategy:

- "full": accumulate all data, refit periodically
- "sliding_window": keep only the most recent window_size trials
- "reservoir": random sampling of window_size trials
- "none": refit from scratch (no caching)
Returns a NEW model instance (immutable update).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| X | ndarray | New stimuli | required |
| y | ndarray | New responses | required |

Returns:

| Type | Description |
|---|---|
| Model | Updated model (new instance) |
Source code in src/psyphy/model/base.py
discriminability
¶
Compute scalar discriminability d >= 0 for a (reference, probe) pair
WPPM (rectangular U design, if extra_dims > 0):

    d = sqrt( (probe - ref)^T Σ(ref)^{-1} (probe - ref) )

where Σ(ref) is computed directly in stimulus space (input_dim, input_dim) via U(x) @ U(x)^T with U rectangular.
The discrimination task only depends on observable stimulus dimensions. The rectangular U design means local_covariance() already returns the stimulus covariance - no block extraction needed.
MC mode: d is implicit via Monte Carlo simulation of internal noisy responses under the task's decision rule (no closed form). In that case, tasks directly implement predict/loglik with MC, and this method may be used only for diagnostics.
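The closed-form branch above is a Mahalanobis distance. A self-contained NumPy sketch with an explicit Σ(ref) (the real WPPM builds Σ(ref) from the Wishart process):

```python
# Mahalanobis discriminability d = sqrt((probe - ref)^T Σ(ref)^{-1} (probe - ref)).
import numpy as np


def discriminability(ref, probe, cov_ref):
    diff = probe - ref
    # Solve Σ(ref) v = diff instead of forming the explicit inverse.
    return float(np.sqrt(diff @ np.linalg.solve(cov_ref, diff)))
```

A fixed step size yields a larger d along directions where Σ(ref) is tight, which is why the local covariance field directly shapes discrimination-threshold ellipses.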
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Model parameters. | required |
| stimulus | tuple | (reference, probe) arrays of shape (input_dim,). | required |

Returns:

| Name | Type | Description |
|---|---|---|
| d | ndarray | Nonnegative scalar discriminability. |
Source code in src/psyphy/model/wppm.py
fit
¶
fit(
X: ndarray,
y: ndarray,
*,
inference: InferenceEngine | str = "laplace",
inference_config: dict | None = None,
) -> Model
Fit model to data.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| X | ndarray | Stimuli, shape (n_trials, 2, input_dim) for (ref, probe) pairs or (n_trials, input_dim) for references only | required |
| y | ndarray | Responses, shape (n_trials,) | required |
| inference | InferenceEngine \| str | Inference engine or string key ("map", "laplace", "langevin") | "laplace" |
| inference_config | dict \| None | Hyperparameters for string-based inference, e.g. {"steps": 500, "lr": 1e-3} for MAP | None |

Returns:

| Type | Description |
|---|---|
| Model | Self, for method chaining |
Source code in src/psyphy/model/base.py
init_params
¶
init_params(key: Array) -> Params
local_covariance
¶
local_covariance(params: Params, x: ndarray) -> ndarray
Return local covariance Σ(x) at stimulus location x.
Wishart mode (basis_degree set):

    Σ(x) = U(x) @ U(x)^T + diag_term * I

where U(x) is rectangular (input_dim, embedding_dim) if extra_dims > 0.

- Varies smoothly with x
- Guaranteed positive-definite
- Returns the stimulus covariance directly (input_dim, input_dim)
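A minimal NumPy sketch of the Σ(x) construction, with the spatial dependence of U on x elided (a fixed rectangular U stands in for U(x)):

```python
# Σ = U U^T + diag_term * I is symmetric positive-definite by construction:
# U U^T is positive semi-definite, and the jitter term makes it strictly PD.
import numpy as np


def local_covariance(U, diag_term=1e-6):
    return U @ U.T + diag_term * np.eye(U.shape[0])
```

Note that with a rectangular U of shape (input_dim, embedding_dim), the result already lives in stimulus space, so no block extraction is needed, matching the parameter description below.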
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Model parameters; WPPM: {"W": (degree+1, ..., input_dim, embedding_dim)} | required |
| x | ndarray, shape (input_dim,) | Stimulus location | required |

Returns:

| Type | Description |
|---|---|
| Σ : ndarray, shape (input_dim, input_dim) | Covariance matrix in stimulus space. |
Source code in src/psyphy/model/wppm.py
log_likelihood
¶
log_likelihood(
params: Params,
refs: ndarray,
probes: ndarray,
responses: ndarray,
) -> ndarray
Compute the log-likelihood for arrays of trials.
IMPORTANT: We delegate to the TaskLikelihood to avoid duplicating Bernoulli (MPV) or MC likelihood logic in multiple places.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Model parameters. | required |
| refs | ndarray, shape (N, input_dim) | | required |
| probes | ndarray, shape (N, input_dim) | | required |
| responses | ndarray, shape (N,) | Typically 0/1; the task may support richer encodings. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| loglik | ndarray | Scalar log-likelihood (task-only; add the prior outside if needed) |
Source code in src/psyphy/model/wppm.py
log_likelihood_from_data
¶
Compute log-likelihood directly from a ResponseData object.
Why delegate to the task?

- The task knows the decision rule (oddity, 2AFC, ...).
- The task can use the model (this WPPM) to fetch discriminabilities.
- The task can use the noise model if it needs MC simulation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Model parameters. | required |
| data | ResponseData | Collected trial data. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| loglik | ndarray | Scalar log-likelihood (task-only; add the prior outside if needed). |
Source code in src/psyphy/model/wppm.py
log_posterior_from_data
¶
Compute log posterior from data.
This simply adds the prior log-probability to the task log-likelihood. Inference engines (e.g., MAP optimizer) typically optimize this quantity.
Returns:

| Type | Description |
|---|---|
| ndarray | Scalar log posterior = loglik(params \| data) + log_prior(params). |
Source code in src/psyphy/model/wppm.py
posterior
¶
posterior(
X: ndarray | None = None,
*,
probes: ndarray | None = None,
kind: str = "predictive",
) -> PredictivePosterior | ParameterPosterior
Return posterior distribution.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| X | ndarray \| None | Test stimuli (references), shape (n_test, input_dim). Required for predictive posteriors, optional for parameter posteriors. | None |
| probes | ndarray \| None | Test probes, shape (n_test, input_dim). Required for predictive posteriors. | None |
| kind | ('predictive', 'parameter') | Type of posterior to return: "predictive" = PredictivePosterior over f(X*) (for acquisitions); "parameter" = ParameterPosterior over θ (for diagnostics). | "predictive" |

Returns:

| Type | Description |
|---|---|
| PredictivePosterior \| ParameterPosterior | Posterior distribution |

Raises:

| Type | Description |
|---|---|
| RuntimeError | If the model has not been fit yet |
Source code in src/psyphy/model/base.py
predict_prob
¶
Predict probability of a correct response for a single stimulus.
Design choice: WPPM computes discriminability & covariance; the TASK defines how that translates to performance. We therefore delegate to: task.predict(params, stimulus, model=self, noise=self.noise)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | | required |
| stimulus | (reference, probe) | | required |

Returns:

| Name | Type | Description |
|---|---|---|
| p_correct | ndarray | |
Source code in src/psyphy/model/wppm.py
predict_with_params
¶
Evaluate model at specific parameter values (no marginalization).
This is useful for:

- Threshold uncertainty estimation (evaluate at sampled parameters)
- Parameter sensitivity analysis
- Debugging and diagnostics
NOT for making predictions (use .posterior() instead, which marginalizes over parameter uncertainty).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| X | ndarray, shape (n_test, input_dim) | Test stimuli (references) | required |
| probes | ndarray, shape (n_test, input_dim) | Probe stimuli (for discrimination tasks) | required |
| params | dict[str, ndarray] | Specific parameter values to evaluate at. Keys and shapes depend on the model (e.g., WPPM has "W", "noise_scale", etc.) | required |

Returns:

| Name | Type | Description |
|---|---|---|
| predictions | ndarray, shape (n_test,) | Predicted probabilities at each test point, given these parameters |
Notes
This bypasses the posterior marginalization. For acquisition functions, always use .posterior() which properly accounts for parameter uncertainty.
Source code in src/psyphy/model/base.py
Wishart Process Psychophysical Model (WPPM)¶
wppm
¶
wppm.py
Wishart Process Psychophysical Model (WPPM)
Goals
Wishart Process Psychophysical Model (WPPM):
- Expose the hyperparameters needed to reproduce, for example, the model configuration used in Hong et al.:
* extra_dims: embedding size for basis expansions
* variance_scale: global covariance scale
* decay_rate: smoothness/length-scale for covariance field
* diag_term: numerical stabilizer added to covariance diagonals
- Later, replace local_covariance with a basis-expansion Wishart process
and swap discriminability/likelihood with MC observer simulation.
All numerics use JAX (jax.numpy as jnp) to support autodiff and optax optimizers
Classes:

| Name | Description |
|---|---|
| WPPM | Wishart Process Psychophysical Model (WPPM). |

Attributes:

| Name | Type | Description |
|---|---|---|
| Params | | |
| Stimulus | | |
WPPM
¶
WPPM(
input_dim: int,
prior: Prior,
task: TaskLikelihood,
noise: Any | None = None,
*,
online_config: OnlineConfig | None = None,
extra_dims: int = 0,
variance_scale: float = 1.0,
decay_rate: float = 1.0,
diag_term: float = 1e-06,
**model_kwargs: Any,
)
Bases: Model
Wishart Process Psychophysical Model (WPPM).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
input_dim
|
int
|
Dimensionality of the input stimulus space (e.g., 2 for isoluminant plane, 3 for RGB). Both reference and probe live in R^{input_dim}. |
required |
prior
|
Prior
|
Prior distribution over model parameters. Controls basis_degree in WPPM (basis expansion). The WPPM delegates to prior.basis_degree to ensure consistency between parameter sampling and basis evaluation. |
required |
task
|
TaskLikelihood
|
Psychophysical task mapping that defines how discriminability translates to p(correct) and how log-likelihood of responses is computed. (e.g., OddityTask) |
required |
noise
|
Any
|
Noise model describing internal representation noise (e.g., GaussianNoise). |
None
|
Forward-compatible hyperparameters
extra_dims : int, default=0
Additional embedding dimensions for basis expansions (beyond input_dim).
embedding_dim = input_dim + extra_dims.
variance_scale : float, default=1.0
Global scaling factor for covariance magnitude
decay_rate : float, default=1.0
Smoothness/length-scale for spatial covariance variation
diag_term : float, default=1e-6
Small positive value added to the covariance diagonal for numerical stability.
online_config : OnlineConfig | None, optional (keyword-only)
Base-model lifecycle / online-learning policy. This is the supported way
to configure buffering and refit scheduling via Model.condition_on_observations.
model_kwargs : Any
Reserved for future keyword arguments accepted by the base Model.__init__.
Do not pass WPPM math knobs or task/likelihood knobs here.
Methods:
| Name | Description |
|---|---|
condition_on_observations |
Update model with new observations (online learning). |
discriminability |
Compute scalar discriminability d >= 0 for a (reference, probe) pair |
fit |
Fit model to data. |
init_params |
Sample initial parameters from the prior. |
local_covariance |
Return local covariance Σ(x) at stimulus location x. |
log_likelihood |
Compute the log-likelihood for arrays of trials. |
log_likelihood_from_data |
Compute log-likelihood directly from a ResponseData object. |
log_posterior_from_data |
Compute log posterior from data. |
posterior |
Return posterior distribution. |
predict_prob |
Predict probability of a correct response for a single stimulus. |
predict_with_params |
Evaluate model at specific parameter values (no marginalization). |
Attributes:
| Name | Type | Description |
|---|---|---|
basis_degree |
int | None
|
Chebyshev polynomial degree for Wishart process basis expansion. |
decay_rate |
|
|
diag_term |
|
|
embedding_dim |
int
|
Dimension of the embedding space (perceptual space). |
extra_dims |
|
|
input_dim |
|
|
noise |
|
|
online_config |
|
|
prior |
|
|
task |
|
|
variance_scale |
|
Source code in src/psyphy/model/wppm.py
basis_degree
¶
basis_degree: int | None
Chebyshev polynomial degree for Wishart process basis expansion.
This property delegates to self.prior.basis_degree to ensure consistency between parameter sampling and basis evaluation.
Returns:
| Type | Description |
|---|---|
int | None
|
Degree of Chebyshev polynomial basis (0 = constant, 1 = linear, etc.) |
Notes
WPPM gets its basis_degree parameter from Prior.basis_degree.
embedding_dim
¶
embedding_dim: int
Dimension of the embedding space (perceptual space).
embedding_dim = input_dim + extra_dims. This is the full perceptual space, where:

- the first input_dim dimensions correspond to observable stimulus features;
- the remaining extra_dims are latent dimensions.
Returns:

| Type | Description |
|---|---|
| int | input_dim + extra_dims |
Notes
This is a computed property, not a constructor parameter.
condition_on_observations
¶
condition_on_observations(X: ndarray, y: ndarray) -> Model
Update model with new observations (online learning).
Behavior depends on `self.online_config.strategy`:

- "full": accumulate all data, refit periodically
- "sliding_window": keep only the most recent window_size trials
- "reservoir": keep a uniform random sample of window_size trials
- "none": refit from scratch (no caching)
Returns a NEW model instance (immutable update).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `X` | ndarray | New stimuli | required |
| `y` | ndarray | New responses | required |
Returns:

| Type | Description |
|---|---|
| Model | Updated model (new instance) |
Examples:
Source code in src/psyphy/model/base.py
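The buffering strategies listed above can be sketched in plain Python. This is an illustrative policy function, not psyphy's implementation; the function name and signature are hypothetical:

```python
import random

def update_buffer(buffer, new_trials, strategy, window_size=None, rng=None):
    """Illustrative buffer update for the online-learning strategies above.

    Not psyphy's implementation -- a sketch of the described policies.
    """
    if strategy == "none":
        return list(new_trials)            # refit from scratch: keep only new data
    combined = list(buffer) + list(new_trials)
    if strategy == "full":
        return combined                    # unbounded accumulation
    if strategy == "sliding_window":
        return combined[-window_size:]     # keep the most recent window_size trials
    if strategy == "reservoir":
        rng = rng or random.Random(0)
        if len(combined) <= window_size:
            return combined
        return rng.sample(combined, window_size)  # uniform random subsample
    raise ValueError(f"unknown strategy: {strategy}")
```

The immutable-update contract in the real method means each call would pair such a buffer update with constructing a new model instance.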
discriminability
¶
Compute scalar discriminability d >= 0 for a (reference, probe) pair
Closed form (rectangular U design, extra_dims > 0):

    d = sqrt( (probe - ref)^T Σ(ref)^{-1} (probe - ref) )

where Σ(ref) is computed directly in stimulus space, shape (input_dim, input_dim), via U(x) @ U(x)^T with U rectangular.

The discrimination task depends only on the observable stimulus dimensions. Because U is rectangular, local_covariance() already returns the stimulus covariance, so no block extraction is needed.

For MC-based tasks, d is implicit in the Monte Carlo simulation of internal noisy responses under the task's decision rule (no closed form). In that case the task implements predict/loglik with MC directly, and this method may be used only for diagnostics.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `params` | dict | Model parameters. | required |
| `stimulus` | tuple | (reference, probe) arrays of shape (input_dim,). | required |
Returns:

| Name | Type | Description |
|---|---|---|
| d | ndarray | Nonnegative scalar discriminability. |
Source code in src/psyphy/model/wppm.py
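The closed-form branch above is a Mahalanobis distance. A minimal NumPy sketch (psyphy operates on jnp arrays and model parameters; this standalone function is illustrative):

```python
import numpy as np

def discriminability(ref, probe, sigma_ref):
    """d = sqrt((probe - ref)^T Sigma(ref)^{-1} (probe - ref)).

    Sketch of the closed-form discriminability described above.
    """
    diff = np.asarray(probe) - np.asarray(ref)
    # Solve a linear system instead of forming the explicit inverse
    return float(np.sqrt(diff @ np.linalg.solve(sigma_ref, diff)))
```

With Σ(ref) = I this reduces to Euclidean distance; a larger covariance along the probe direction shrinks d, capturing "harder to discriminate where perceptual noise is large."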
fit
¶
fit(
X: ndarray,
y: ndarray,
*,
inference: InferenceEngine | str = "laplace",
inference_config: dict | None = None,
) -> Model
Fit model to data.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `X` | ndarray | Stimuli, shape (n_trials, 2, input_dim) for (ref, probe) pairs or (n_trials, input_dim) for references only | required |
| `y` | ndarray | Responses, shape (n_trials,) | required |
| `inference` | InferenceEngine \| str | Inference engine or string key ("map", "laplace", "langevin") | "laplace" |
| `inference_config` | dict \| None | Hyperparameters for string-based inference, e.g. {"steps": 500, "lr": 1e-3} for MAP | None |
Returns:

| Type | Description |
|---|---|
| Model | Self, for method chaining |
Examples:
Source code in src/psyphy/model/base.py
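The optimize-the-log-posterior loop that a string key like "map" with `{"steps": 500, "lr": 1e-3}` configures can be sketched with a toy one-parameter model. This is illustrative only, not psyphy's inference engine:

```python
import numpy as np

def map_fit(X, y, steps=500, lr=0.1, prior_var=10.0):
    """Toy MAP fit: logistic link p = sigmoid(w * x), Gaussian prior on w.

    Gradient ascent on log posterior = Bernoulli loglik + Gaussian log prior.
    Hypothetical model and names; psyphy optimizes WPPM parameters with Optax.
    """
    w = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-w * X))
        # d/dw [loglik + log prior]: score term minus prior shrinkage
        grad = np.sum((y - p) * X) - w / prior_var
        w += lr * grad / len(X)
    return w
```

The `steps` and `lr` knobs play the same role as the `inference_config` entries shown in the table above.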
init_params
¶
init_params(key: Array) -> Params
local_covariance
¶
local_covariance(params: Params, x: ndarray) -> ndarray
Return local covariance Σ(x) at stimulus location x.
Wishart mode (basis_degree set):

    Σ(x) = U(x) @ U(x)^T + diag_term * I

where U(x) is rectangular, shape (input_dim, embedding_dim), if extra_dims > 0.

- Varies smoothly with x
- Guaranteed positive-definite
- Returns the stimulus covariance directly, shape (input_dim, input_dim)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `params` | dict | Model parameters; for WPPM: {"W": (degree+1, ..., input_dim, embedding_dim)} | required |
| `x` | ndarray, shape (input_dim,) | Stimulus location | required |
Returns:

| Type | Description |
|---|---|
| Σ : jnp.ndarray, shape (input_dim, input_dim) | Covariance matrix in stimulus space. |
Source code in src/psyphy/model/wppm.py
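The Σ(x) = U(x) U(x)ᵀ + diag_term · I construction can be sketched in NumPy for a 1-D stimulus. The W shape (degree+1, input_dim, embedding_dim) and the basis evaluation here are illustrative assumptions, not psyphy's exact parameterization:

```python
import numpy as np
from numpy.polynomial import chebyshev

def local_covariance(W, x, diag_term=1e-4):
    """Sketch of Sigma(x) = U(x) U(x)^T + diag_term * I for a scalar stimulus x.

    W: assumed shape (degree+1, input_dim, embedding_dim), the Chebyshev
    coefficients of the rectangular factor U(x). Illustrative only.
    """
    degree_plus_1 = W.shape[0]
    # Chebyshev basis values T_0(x), ..., T_d(x) at location x
    phi = np.array([chebyshev.Chebyshev.basis(d)(x) for d in range(degree_plus_1)])
    U = np.tensordot(phi, W, axes=1)   # (input_dim, embedding_dim)
    # Gram matrix of U is PSD; diag_term makes it strictly positive-definite
    return U @ U.T + diag_term * np.eye(U.shape[0])
```

Because the result is a Gram matrix plus a positive diagonal, it is symmetric positive-definite for any W, which is the point of the square-root parameterization.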
log_likelihood
¶
log_likelihood(
params: Params,
refs: ndarray,
probes: ndarray,
responses: ndarray,
) -> ndarray
Compute the log-likelihood for arrays of trials.
IMPORTANT: We delegate to the TaskLikelihood to avoid duplicating Bernoulli (MPV) or MC likelihood logic in multiple places.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `params` | dict | Model parameters. | required |
| `refs` | ndarray, shape (N, input_dim) | Reference stimuli. | required |
| `probes` | ndarray, shape (N, input_dim) | Probe stimuli. | required |
| `responses` | ndarray, shape (N,) | Typically 0/1; task may support richer encodings. | required |
Returns:

| Name | Type | Description |
|---|---|---|
| loglik | ndarray | Scalar log-likelihood (task-only; add the prior outside if needed). |
Source code in src/psyphy/model/wppm.py
log_likelihood_from_data
¶
Compute log-likelihood directly from a ResponseData object.
Why delegate to the task?

- The task knows the decision rule (oddity, 2AFC, ...).
- The task can use the model (this WPPM) to fetch discriminabilities.
- The task can use the noise model if it needs MC simulation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `params` | dict | Model parameters. | required |
| `data` | ResponseData | Collected trial data. | required |
Returns:

| Name | Type | Description |
|---|---|---|
| loglik | ndarray | Scalar log-likelihood (task-only; add the prior outside if needed). |
Source code in src/psyphy/model/wppm.py
log_posterior_from_data
¶
Compute log posterior from data.
This simply adds the prior log-probability to the task log-likelihood. Inference engines (e.g., MAP optimizer) typically optimize this quantity.
Returns:

| Type | Description |
|---|---|
| ndarray | Scalar log posterior = loglik(params \| data) + log_prior(params). |
Source code in src/psyphy/model/wppm.py
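The composition "task log-likelihood plus prior log-probability" can be illustrated with a toy one-parameter model (a hypothetical stand-in, not psyphy's WPPM):

```python
import numpy as np

def log_posterior(w, X, y, prior_var=10.0):
    """log p(w | data) up to a constant: Bernoulli loglik + Gaussian log prior.

    Mirrors the composition described above on a toy logistic model.
    """
    p = 1.0 / (1.0 + np.exp(-w * X))
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log1p(-p))
    log_prior = -0.5 * w**2 / prior_var   # N(0, prior_var), dropping the constant
    return loglik + log_prior
```

An inference engine such as a MAP optimizer maximizes exactly this kind of quantity over the parameters.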
posterior
¶
posterior(
X: ndarray | None = None,
*,
probes: ndarray | None = None,
kind: str = "predictive",
) -> PredictivePosterior | ParameterPosterior
Return posterior distribution.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `X` | ndarray \| None | Test stimuli (references), shape (n_test, input_dim). Required for predictive posteriors, optional for parameter posteriors. | None |
| `probes` | ndarray \| None | Test probes, shape (n_test, input_dim). Required for predictive posteriors. | None |
| `kind` | str | "predictive" returns a PredictivePosterior over f(X*) (for acquisitions); "parameter" returns a ParameterPosterior over θ (for diagnostics). | "predictive" |
Returns:

| Type | Description |
|---|---|
| PredictivePosterior \| ParameterPosterior | Posterior distribution |
Raises:

| Type | Description |
|---|---|
| RuntimeError | If the model has not been fit yet |
Examples:
Source code in src/psyphy/model/base.py
predict_prob
¶
Predict probability of a correct response for a single stimulus.
Design choice: WPPM computes discriminability & covariance; the TASK defines how that translates to performance. We therefore delegate to: task.predict(params, stimulus, model=self, noise=self.noise)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `params` | dict | Model parameters. | required |
| `stimulus` | tuple | (reference, probe) pair. | required |
Returns:

| Name | Type | Description |
|---|---|---|
| p_correct | ndarray | Probability of a correct response. |
Source code in src/psyphy/model/wppm.py
predict_with_params
¶
Evaluate model at specific parameter values (no marginalization).
This is useful for:

- threshold uncertainty estimation (evaluate at sampled parameters)
- parameter sensitivity analysis
- debugging and diagnostics
NOT for making predictions (use .posterior() instead, which marginalizes over parameter uncertainty).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `X` | ndarray, shape (n_test, input_dim) | Test stimuli (references) | required |
| `probes` | ndarray, shape (n_test, input_dim) | Probe stimuli (for discrimination tasks) | required |
| `params` | dict[str, ndarray] | Specific parameter values to evaluate at. Keys and shapes depend on the model (e.g., WPPM has "W", "noise_scale", etc.) | required |
Returns:

| Name | Type | Description |
|---|---|---|
| predictions | ndarray, shape (n_test,) | Predicted probabilities at each test point, given these parameters |
Examples:
Notes
This bypasses the posterior marginalization. For acquisition functions, always use .posterior() which properly accounts for parameter uncertainty.
Source code in src/psyphy/model/base.py
Priors¶
prior
¶
prior.py
Prior distributions for WPPM parameters
Hyperparameters:

- variance_scale : global scaling factor for covariance magnitude
- decay_rate : smoothness parameter controlling spatial variation
- extra_embedding_dims : additional latent embedding dimensions for basis expansions
Connections
- WPPM calls Prior.sample_params() to initialize model parameters
- WPPM adds Prior.log_prob(params) to task log-likelihoods to form the log posterior
- Prior will generate structured parameters for basis expansions and decay_rate-controlled smooth covariance fields
Classes:
| Name | Description |
|---|---|
Prior |
Prior distribution over WPPM parameters |
Attributes:

| Name | Type | Description |
|---|---|---|
| `Params` | | |
Prior
¶
Prior(
input_dim: int,
basis_degree: int | None = None,
variance_scale: float = 1.0,
decay_rate: float = 0.5,
extra_embedding_dims: int = 0,
)
Prior distribution over WPPM parameters
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_dim` | int | Dimensionality of the model space (same as WPPM.input_dim) | required |
| `basis_degree` | int \| None | Degree of Chebyshev basis for Wishart process. If set, uses Wishart mode with W coefficients. | None |
| `variance_scale` | float | Prior variance for the degree-0 (constant) coefficient in Wishart mode. Controls the overall scale of covariances. | 1.0 |
| `decay_rate` | float | Geometric decay rate for prior variance over higher-degree coefficients: the degree-d coefficient has prior variance variance_scale * decay_rate^d. Smaller decay_rate -> stronger smoothness prior. | 0.5 |
| `extra_embedding_dims` | int | Additional latent dimensions in U matrices beyond the input dimensions. Allows richer ellipsoid shapes in Wishart mode. | 0 |
Methods:

| Name | Description |
|---|---|
| `log_prob` | Compute log prior density (up to a constant). |
| `sample_params` | Sample initial parameters from the prior. |
Attributes:

| Name | Type | Description |
|---|---|---|
| `basis_degree` | int \| None | |
| `decay_rate` | float | |
| `extra_embedding_dims` | int | |
| `input_dim` | int | |
| `variance_scale` | float | |
log_prob
¶
log_prob(params: Params) -> ndarray
Compute log prior density (up to a constant)
Gaussian prior on W with smoothness controlled by decay_rate:

    log p(W) = Σ_ij log N(W_ij | 0, σ_ij^2)

where σ_ij^2 is the prior variance of coefficient W_ij.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `params` | dict | Parameter dictionary | required |
Returns:

| Name | Type | Description |
|---|---|---|
| log_prob | float | Log prior probability (up to a normalizing constant) |
Source code in src/psyphy/model/prior.py
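With the geometric decay described in the Prior constructor (variance_scale * decay_rate^d for degree d), the log density can be sketched as below. The W layout (first axis = degree) is an illustrative simplification of the full coefficient tensor:

```python
import numpy as np

def log_prob_W(W, variance_scale=1.0, decay_rate=0.5):
    """Sum of log N(W_d | 0, sigma_d^2 I) with sigma_d^2 = variance_scale * decay_rate**d.

    Sketch only: W's first axis is assumed to index the Chebyshev degree d.
    """
    total = 0.0
    for d in range(W.shape[0]):
        var = variance_scale * decay_rate**d
        coeffs = W[d]
        # Independent Gaussian log density per coefficient, including the constant
        total += np.sum(-0.5 * coeffs**2 / var - 0.5 * np.log(2 * np.pi * var))
    return total
```

Smaller decay_rate shrinks the variance of high-degree coefficients faster, which penalizes rapid spatial variation of the covariance field.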
sample_params
¶
Sample initial parameters from the prior.
Returns {"W": shape (degree+1, degree+1, input_dim, embedding_dim)} for 2D, where embedding_dim = input_dim + extra_embedding_dims
Note: The 3rd dimension is input_dim (output space dimension). This matches the einsum in _compute_sqrt: U = einsum("ijde,ij->de", W, phi) where d indexes input_dim.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `key` | JAX random key | PRNG key for sampling | required |
Returns:

| Name | Type | Description |
|---|---|---|
| params | dict | Parameter dictionary |
Source code in src/psyphy/model/prior.py
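A hedged NumPy sketch of drawing W with the (degree+1, degree+1, input_dim, embedding_dim) shape documented above. The choice to apply the geometric decay to the summed degree i + j of the two Chebyshev indices is an assumption made for illustration; the authoritative indexing lives in src/psyphy/model/prior.py, which also uses a JAX PRNG key rather than NumPy:

```python
import numpy as np

def sample_W(rng, degree, input_dim, extra_embedding_dims=0,
             variance_scale=1.0, decay_rate=0.5):
    """Draw W ~ N(0, sigma_ij^2), shape (degree+1, degree+1, input_dim, embedding_dim).

    Assumed: sigma_ij^2 = variance_scale * decay_rate**(i + j), so
    higher-degree coefficients shrink toward zero (smoothness prior).
    """
    emb = input_dim + extra_embedding_dims
    W = rng.standard_normal((degree + 1, degree + 1, input_dim, emb))
    i = np.arange(degree + 1)
    scale = np.sqrt(variance_scale * decay_rate ** (i[:, None] + i[None, :]))
    return W * scale[:, :, None, None]
```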
Noise¶
noise
¶
Classes:

| Name | Description |
|---|---|
| `GaussianNoise` | |
| `StudentTNoise` | |
GaussianNoise
¶
GaussianNoise(sigma: float = 1.0)
Tasks¶
task
¶
psyphy.model.task
Task likelihoods for psychophysical experiments.
This module defines task-specific mappings from a model (e.g., WPPM) and stimuli to response likelihoods.
Current direction
OddityTask: the log-likelihood is computed via Monte Carlo observer
simulation of the full 3-stimulus oddity decision rule (two identical references,
one comparison).
The public API is:
- `TaskLikelihood.predict(params, stimuli, model, noise)`: optional fast predictor for p(correct). For MC-only tasks this may be unimplemented.
- `TaskLikelihood.loglik(params, data, model, noise, **kwargs)`: compute the log-likelihood of observed responses under this task.
Connections
- WPPM delegates to the task to compute likelihood.
- Noise models are passed through so tasks can simulate observer responses.
Classes:

| Name | Description |
|---|---|
| `OddityTask` | Three-alternative forced-choice oddity task (MC-based only). |
| `OddityTaskConfig` | Configuration for `OddityTask`. |
| `TaskLikelihood` | Abstract base class for task likelihoods |
Attributes:

| Name | Type | Description |
|---|---|---|
| `Stimulus` | | |
OddityTask
¶
OddityTask(config: OddityTaskConfig | None = None)
Bases: TaskLikelihood
Three-alternative forced-choice oddity task (MC-based only).
Implements the full 3-stimulus oddity task using Monte Carlo simulation:

- samples three internal representations per trial (z0, z1, z2)
- uses the proper oddity decision rule with three pairwise distances
- suitable for complex covariance structures
Notes
MC simulation in loglik() (full 3-stimulus oddity):

1. Sample three internal representations: z_ref, z_refprime ~ N(ref, Σ_ref) and z_comparison ~ N(comparison, Σ_comparison)
2. Compute the average covariance: Σ_avg = (2/3) Σ_ref + (1/3) Σ_comparison
3. Compute three pairwise Mahalanobis distances:
   - d^2(z_ref, z_refprime): distance between the two reference samples
   - d^2(z_ref, z_comparison): distance from ref to comparison
   - d^2(z_refprime, z_comparison): distance from reference_prime to comparison
4. Apply the oddity decision rule: delta = min(d^2(z_ref, z_comparison), d^2(z_refprime, z_comparison)) - d^2(z_ref, z_refprime)
5. Logistic smoothing: P(correct) ≈ logistic.cdf(delta / bandwidth)
6. Average over samples
Examples:
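The six steps above can be sketched in plain NumPy. psyphy's implementation uses jax.numpy and a PRNG key; the function name, signature, and defaults here are illustrative:

```python
import numpy as np

def oddity_p_correct(ref, comparison, sigma_ref, sigma_cmp,
                     num_samples=2000, bandwidth=0.1, seed=0):
    """Monte Carlo estimate of p(correct) following the oddity rule above."""
    rng = np.random.default_rng(seed)
    # Step 1: three internal representations per MC sample
    z_ref = rng.multivariate_normal(ref, sigma_ref, num_samples)
    z_refp = rng.multivariate_normal(ref, sigma_ref, num_samples)
    z_cmp = rng.multivariate_normal(comparison, sigma_cmp, num_samples)
    # Step 2: average covariance used for the Mahalanobis metric
    sigma_avg = (2.0 / 3.0) * sigma_ref + (1.0 / 3.0) * sigma_cmp
    inv = np.linalg.inv(sigma_avg)

    def d2(a, b):
        diff = a - b
        return np.einsum("ni,ij,nj->n", diff, inv, diff)   # squared Mahalanobis

    # Steps 3-4: decision variable; positive when the comparison is the odd one
    delta = np.minimum(d2(z_ref, z_cmp), d2(z_refp, z_cmp)) - d2(z_ref, z_refp)
    # Steps 5-6: logistic smoothing, then average over samples
    return float(np.mean(1.0 / (1.0 + np.exp(-delta / bandwidth))))
```

For identical stimuli the estimate sits near the 1/3 chance level of a three-alternative task, and it approaches 1 as the comparison moves far from the reference relative to the noise.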
Methods:

| Name | Description |
|---|---|
| `loglik` | Compute log-likelihood of observed responses under this task. |
| `predict` | Predict p(correct) for a single (ref, comparison) stimulus. |
Attributes:

| Name | Type | Description |
|---|---|---|
| `config` | | |
Source code in src/psyphy/model/task.py
loglik
¶
Compute log-likelihood of observed responses under this task.

Notes

- Can be JIT-compiled for additional speed (future optimization)
Source code in src/psyphy/model/task.py
predict
¶
Predict p(correct) for a single (ref, comparison) stimulus.
Even though OddityTask is MC-only, we still implement predict.
Reason: large parts of the library (posterior predictive, acquisition
functions, diagnostics, etc.) need a forward model that returns
p(correct) at candidate stimuli. Historically this used an analytical
approximation, but in MC-only mode we compute it via simulation.
Notes
- This method is intentionally lightweight: it performs the same single-trial Monte Carlo simulation used by `loglik`.
- If you need to control MC fidelity/smoothing, set `OddityTaskConfig(num_samples=..., bandwidth=...)` when you construct the task.
- If you need reproducible randomness, pass `key=...` to `loglik`.
Source code in src/psyphy/model/task.py
OddityTaskConfig
¶
Configuration for `OddityTask`.
This is the single source of truth for MC likelihood controls.
Attributes:

| Name | Type | Description |
|---|---|---|
| `num_samples` | int | Number of Monte Carlo samples per trial. |
| `bandwidth` | float | Logistic CDF smoothing bandwidth. |
| `default_key_seed` | int | Seed used when no key is provided (keeps behavior deterministic by default while allowing reproducibility control upstream). |
TaskLikelihood
¶
Bases: ABC
Abstract base class for task likelihoods
Methods:

| Name | Description |
|---|---|
| `loglik` | Compute log-likelihood of observed responses under this task. |
| `predict` | Predict probability of correct response for a stimulus. |
loglik
¶
Compute log-likelihood of observed responses under this task.
Why **kwargs?

- Different tasks may need different optional runtime controls.
- MC-based tasks may need parameters such as a PRNG key.

In particular, `OddityTask` takes its Monte Carlo controls (num_samples and bandwidth) exclusively from `OddityTaskConfig`, to avoid silent mismatch bugs.
Notes
- Task implementations should document which kwargs they accept.
- Callers should not assume arbitrary kwargs are supported.