Fit injected current
The data for this notebook are from a patch-clamp experiment on a mouse V1 neuron, from the Allen Brain Atlas.
Learning objectives
- Learn how to explore spiking data and do basic analyses using pynapple
- Learn how to structure data for NeMoS
- Learn how to fit a basic Generalized Linear Model using NeMoS
- Learn how to retrieve the parameters and predictions from a fitted GLM for interpretation.
# Import everything
import jax
import os
import matplotlib.pyplot as plt
import nemos as nmo
import nemos.glm
import numpy as np
import pynapple as nap
import workshop_utils
# configure the plot style
plt.style.use(workshop_utils.STYLE_FILE)
Data Streaming
- Stream the data. The file follows the Neurodata Without Borders (NWB) standard.
path = workshop_utils.fetch_data("allen_478498617.nwb")
Pynapple
Data structures and preparation
- Open the NWB file with pynapple
data = nap.load_file(path)
print(data)
stimulus
: Tsd containing the injected current, in amperes, sampled at 20 kHz.
response
: Tsd containing the neuron's intracellular voltage, sampled at 20 kHz.
units
: TsGroup, a dictionary of neurons, holding each neuron's spike timestamps.
epochs
: IntervalSet, a dictionary with the start and end times of different intervals, defining the experimental structure and specifying when each stimulation protocol began and ended.
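Each of these can be pulled out like a dictionary entry. For example (a quick peek; we will work with epochs, stimulus, and units below):
# look at two of the fields described above
print(data["stimulus"])
print(data["units"])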
epochs = data["epochs"]
epochs.keys()
Noise 1
: epochs of random noise
noise_interval = epochs["Noise 1"]
noise_interval
- Let's focus on the first epoch.
noise_interval = noise_interval[0]
noise_interval
current
: Tsd (TimeSeriesData) : time index + data
# convert the current from amperes to picoamperes, to match the Allen Institute figures
# and to move the values into a more reasonable range
current = data["stimulus"] * 1e12
current
restrict
: restricts a time series object to a set of time intervals delimited by an IntervalSet object
current = current.restrict(noise_interval)
current
TsGroup
: a custom dictionary holding multiple Ts (timestamps) objects with potentially different time indices.
spikes = data["units"]
spikes
We can index into the TsGroup to see the timestamps for this neuron's spikes:
spikes[0]
Let's restrict to the same epoch, noise_interval:
spikes = spikes.restrict(noise_interval)
print(spikes)
spikes[0]
Let's visualize the data from this trial:
fig, ax = plt.subplots(1, 1, figsize=(8, 2))
ax.plot(current, "grey")
ax.plot(spikes.to_tsd([-5]), "|", color="k", ms = 10)
ax.set_ylabel("Current (pA)")
ax.set_xlabel("Time (s)")
Basic analyses
The Generalized Linear Model gives a predicted firing rate. First, we can use pynapple to compute and visualize the observed firing rate for this single trial.
count
: count the number of events within bin_size
# bin size in seconds
bin_size = 0.001
# Get spikes for neuron 0
count = spikes[0].count(bin_size)
count
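Under the hood, count is simply binning the spike times on a regular grid. A rough numpy equivalent (a sketch; pynapple additionally tracks the time support and epoch boundaries for us):
# bin the spike times of neuron 0 by hand and compare the totals
start, end = noise_interval.start[0], noise_interval.end[0]
edges = np.arange(start, end + bin_size, bin_size)
manual_count, _ = np.histogram(spikes[0].index, bins=edges)
# totals should agree, give or take the last partial bin
print(manual_count.sum(), np.sum(count))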
Let's convert the spike counts to a firing rate:
smooth
: convolve with a Gaussian kernel
# the inputs to this function are the standard deviation of the gaussian in seconds and
# the full width of the window, in standard deviations. So std=.05 and size_factor=20
# gives a total filter size of 0.05 sec * 20 = 1 sec.
firing_rate = count.smooth(std=.05, size_factor=20)
# convert from spikes per bin to spikes per second (Hz)
firing_rate = firing_rate / bin_size
# we're hiding the details of the plotting function for the purposes of this
# tutorial, but you can find it in the associated github repo if you're
# interested:
# https://github.com/flatironinstitute/ccn-workshop-fens-2024/blob/main/src/workshop_utils/plotting.py
workshop_utils.plotting.current_injection_plot(current, spikes, firing_rate)
What is the relationship between the current and the spiking activity?
compute_1d_tuning_curves
: compute the firing rate as a function of a 1-dimensional feature.
tuning_curve = nap.compute_1d_tuning_curves(spikes, current, nb_bins=15)
tuning_curve
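Roughly, the tuning curve is the spike count in each current bin divided by the time spent in that bin. A numpy sketch of the computation (pynapple additionally handles epochs and the time support):
# 1. bin the current values
feature_bins = np.linspace(np.min(current), np.max(current), 15 + 1)
# 2. occupancy: time spent in each current bin, in seconds
dt = 1 / current.rate
occupancy, _ = np.histogram(current.values, bins=feature_bins)
occupancy = occupancy * dt
# 3. spikes per bin: look up the current value at each spike time
current_at_spikes = spikes[0].value_from(current)
spike_count, _ = np.histogram(current_at_spikes.values, bins=feature_bins)
# 4. firing rate per current bin (bins the current never visits give 0/0 here;
#    pynapple handles those cases for us)
manual_tuning = spike_count / occupancy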
Let's plot the tuning curve of the neuron.
workshop_utils.plotting.tuning_curve_plot(tuning_curve)
NeMoS
Preparing data
Get data from pynapple to NeMoS-ready format:
- predictors and spike counts must have the same number of time points
binned_current = current.bin_average(bin_size)
print(f"current shape: {binned_current.shape}")
# rate is in Hz, convert to kHz
print(f"current sampling rate: {binned_current.rate/1000.:.02f} kHz")
print(f"\ncount shape: {count.shape}")
print(f"count sampling rate: {count.rate/1000:.02f} kHz")
- predictors must be 2D, spike counts 1D
# make sure predictor is 2d
predictor = np.expand_dims(binned_current, 1)
# check that the dimensionality matches NeMoS expectation
print(f"predictor shape: {predictor.shape}")
print(f"count shape: {count.shape}")
Fitting the model
- GLM objects need regularizers and observation models
model = nmo.glm.GLM(regularizer=nmo.regularizer.UnRegularized(solver_name="LBFGS"))
- call fit and retrieve parameters
model.fit(predictor, count)
print(f"firing_rate(t) = exp({model.coef_} * current(t) + {model.intercept_})")
print(f"coef_ shape: {model.coef_.shape}")
print(f"intercept_ shape: {model.intercept_.shape}")
- generate and examine model predictions.
predicted_fr = model.predict(predictor)
# convert units from spikes/bin to spikes/sec
predicted_fr = predicted_fr / bin_size
# and let's smooth the firing rate the same way that we smoothed the
# spike train
smooth_predicted_fr = predicted_fr.smooth(.05, size_factor=20)
# and plot!
workshop_utils.plotting.current_injection_plot(current, spikes, firing_rate,
smooth_predicted_fr)
- what do we see?
# compare observed mean firing rate with the model predicted one
print(f"Observed mean firing rate: {np.mean(count) / bin_size} Hz")
print(f"Predicted mean firing rate: {np.mean(predicted_fr)} Hz")
- examine the tuning curve
- what do we see?
tuning_curve_model = nap.compute_1d_tuning_curves_continuous(predicted_fr, current, 15)
fig = workshop_utils.plotting.tuning_curve_plot(tuning_curve)
fig.axes[0].plot(tuning_curve_model, color="tomato", label="glm")
fig.axes[0].legend()
Extending the model
- choose a length of time over which the neuron integrates the input current
current_history_duration_sec = .2
# convert this from sec to bins
current_history_duration = int(current_history_duration_sec / bin_size)
binned_current[1:]
binned_current[2:]
# etc
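The lines above hint at the naive approach: treat the current shifted by 1 bin, 2 bins, and so on as separate predictors. A rough sketch of that design matrix (our construction, for illustration only; the basis introduced below compresses these 200 lagged copies into a few smooth features):
# one column per lag, 0 .. 199 bins into the past
# (np.roll wraps around at the start, so in practice the first
#  current_history_duration bins would be dropped before fitting)
lags = np.arange(current_history_duration)
shifted_copies = [np.roll(np.asarray(binned_current), lag) for lag in lags]
naive_design = np.stack(shifted_copies, axis=1)
print(naive_design.shape)  # (n_bins, 200)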
workshop_utils.plotting.plot_basis()
- define a basis object
basis = nmo.basis.RaisedCosineBasisLog(
n_basis_funcs=10, mode="conv", window_size=current_history_duration,
)
- create the design matrix
- examine the features it contains
# under the hood, this convolves the input with the filter bank visualized above
current_history = basis.compute_features(binned_current)
print(current_history)
# in this plot, we're normalizing the amplitudes to make the comparison easier --
# the amplitude of these features will be fit by the model, so their unscaled
# amplitudes are not informative
workshop_utils.plotting.plot_current_history_features(binned_current, current_history, basis,
current_history_duration_sec)
- create and fit the GLM
- examine the parameters
history_model = nmo.glm.GLM(regularizer=nmo.regularizer.UnRegularized(solver_name="LBFGS"))
history_model.fit(current_history, count)
print(f"firing_rate(t) = exp({history_model.coef_} * current(t) + {history_model.intercept_})")
print(history_model.coef_.shape)
print(history_model.intercept_.shape)
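One way to interpret these 10 coefficients is to project them back through the basis to recover the effective temporal filter over the 200 ms history window (a sketch; evaluate_on_grid is the NeMoS basis method that returns the basis functions sampled on a grid):
# weight each basis function by its fitted coefficient to get the filter
grid, basis_funcs = basis.evaluate_on_grid(current_history_duration)
temporal_filter = basis_funcs @ history_model.coef_
fig, ax = plt.subplots(1, 1, figsize=(4, 2))
ax.plot(grid * current_history_duration_sec, temporal_filter)
ax.set_xlabel("Lag (s)")
ax.set_ylabel("Filter weight")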
- compare the predicted firing rate to the data and the old model
- what do we see?
# all this code is the same as above
history_pred_fr = history_model.predict(current_history)
history_pred_fr = history_pred_fr / bin_size
smooth_history_pred_fr = history_pred_fr.dropna().smooth(.05, size_factor=20)
workshop_utils.plotting.current_injection_plot(current, spikes, firing_rate,
# compare against the old firing rate
smooth_history_pred_fr, smooth_predicted_fr)
- examine the predicted average firing rate and tuning curve
- what do we see?
# compare observed mean firing rate with the history_model predicted one
print(f"Observed mean firing rate: {np.mean(count) / bin_size} Hz")
print(f"Predicted mean firing rate (instantaneous current): {np.nanmean(predicted_fr)} Hz")
print(f"Predicted mean firing rate (current history): {np.nanmean(smooth_history_pred_fr)} Hz")
tuning_curve_history_model = nap.compute_1d_tuning_curves_continuous(smooth_history_pred_fr, current, 15)
fig = workshop_utils.plotting.tuning_curve_plot(tuning_curve)
fig.axes[0].plot(tuning_curve_history_model, color="tomato", label="glm (current history)")
fig.axes[0].plot(tuning_curve_model, color="tomato", linestyle='--', label="glm (instantaneous current)")
fig.axes[0].legend()
- use log-likelihood to compare models
log_likelihood = model.score(predictor, count, score_type="log-likelihood")
print(f"log-likelihood (instantaneous current): {log_likelihood}")
log_likelihood = history_model.score(current_history, count, score_type="log-likelihood")
print(f"log-likelihood (current history): {log_likelihood}")
Finishing up
- what if you want to compare models across datasets?
r2 = model.score(predictor, count, score_type='pseudo-r2-Cohen')
print(f"pseudo-r2 (instantaneous current): {r2}")
r2 = history_model.score(current_history, count, score_type='pseudo-r2-Cohen')
print(f"pseudo-r2 (current history): {r2}")
- what about spiking?
spikes = jax.random.poisson(jax.random.PRNGKey(123), predicted_fr.values)
Further Exercises
- what else can we do?
- Other stimulation protocols
- Train and test sets
- Model extensions
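As one concrete starting point for the model-extension exercise, here is a sketch of a ridge-regularized version of the current-history model (the regularizer strength is an arbitrary placeholder, not a recommended value):
# same design matrix as before, but with an L2 penalty on the 10 coefficients
ridge_model = nmo.glm.GLM(
    regularizer=nmo.regularizer.Ridge(regularizer_strength=0.1, solver_name="LBFGS")
)
ridge_model.fit(current_history, count)
print(ridge_model.score(current_history, count, score_type="pseudo-r2-Cohen"))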
Data citation
The data used in this tutorial is from the Allen Brain Map, with the following citation:
Contributors: Agata Budzillo, Bosiljka Tasic, Brian R. Lee, Fahimeh Baftizadeh, Gabe Murphy, Hongkui Zeng, Jim Berg, Nathan Gouwens, Rachel Dalley, Staci A. Sorensen, Tim Jarsky, Uygar Sümbül, Zizhen Yao
Dataset: Allen Institute for Brain Science (2020). Allen Cell Types Database -- Mouse Patch-seq [dataset]. Available from brain-map.org/explore/classes/multimodal-characterization.
Primary publication: Gouwens, N.W., Sorensen, S.A., et al. (2020). Integrated morphoelectric and transcriptomic classification of cortical GABAergic cells. Cell, 183(4), 935-953.e19. https://doi.org/10.1016/j.cell.2020.09.057
Patch-seq protocol: Lee, B. R., Budzillo, A., et al. (2021). Scaled, high fidelity electrophysiological, morphological, and transcriptomic cell characterization. eLife, 2021;10:e65482. https://doi.org/10.7554/eLife.65482
Mouse VISp L2/3 glutamatergic neurons: Berg, J., Sorensen, S. A., Miller, J., Ting, J., et al. (2021) Human neocortical expansion involves glutamatergic neuron diversification. Nature, 598(7879):151-158. doi: 10.1038/s41586-021-03813-8