ACT-R
The ACT-R model
We use one of the ACT-R models described in [1]. The model reads
\[\begin{split}m & = \log \left( \sum_{i=1}^n \Delta t_i^{-d} \right) \\
p & = \left( 1 + \exp\left(\frac{\tau -m }{s}\right) \right)^{-1}\end{split}\]
where the activation \(m\) is a power function of the times \(\Delta t_i\) elapsed since each of the \(n\) past presentations of the item, the exponent \(d\) stands for decay, and \(s\) and \(\tau\) are parameters that add extra flexibility to the model. In comparison with the EF model, where recall is a function only of when the item was last presented, the ACT-R model accounts for all past item presentations via \(m\).
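To make the formulas concrete, here is a minimal numerical sketch (plain numpy, independent of pyrbit; the parameter values and presentation times are arbitrary illustrations) that evaluates the activation \(m\) and the recall probability \(p\) for a toy presentation history:

import numpy

d, s, tau = 0.6, 0.25, -0.7  # illustrative values for decay, noise scale, and threshold
delta_t = numpy.array([10.0, 60.0, 300.0])  # times since the item's three past presentations
m = numpy.log(numpy.sum(delta_t ** (-d)))  # activation: higher for recent, frequent presentations
p = 1 / (1 + numpy.exp((tau - m) / s))  # probability of recalling the item
print(m, p)  # approximately -1.0 and 0.23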
The ACTR class
See the API Reference
Worked-out Example with the ACT-R model
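The script below simulates recall data from an ACT-R model with known parameters, inspects it with the logistic-regression diagnostic, recovers the parameters by maximum likelihood (here wrapped in basin hopping), and finally computes asymptotic confidence intervals and confidence ellipses.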
from pyrbit.mle_utils import CI_asymptotical, confidence_ellipse
from pyrbit.actr import (
    ACTR,
    diagnostics,
    actr_observed_information_matrix,
    identify_actr_from_recall_sequence,
    gen_data,
)
import numpy
import matplotlib.pyplot as plt
SEED = None
N = 10000
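# ground-truth parameter values used to simulate the data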
d = 0.6
tau = -0.7
s = 0.25
rng = numpy.random.default_rng(seed=SEED)
# ==================== Simulate some data
actr = ACTR(1, d, s, tau, buffer_size=16, seed=SEED)
recalls, deltatis = gen_data(actr, N)
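# recalls: binary recall outcomes; deltatis: corresponding elapsed times since past presentations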
# ================= Run Diagnostics (logistic regression)
ax = diagnostics(
    d,
    deltatis,
    recalls,
    line_kws={"color": "green"},
    recall_event_kwargs={"scatter_kws": {"marker": "*", "s": 5}},
)
ax.legend()
plt.tight_layout()
plt.show()
# ==================== Perform ML Estimation
optim_kwargs = {"method": "L-BFGS-B", "bounds": [(0, 1), (-5, 5), (-5, 5)]}
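# the bounds apply to (d, s, tau), matching guess and the labels used below; d stays in (0, 1)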
verbose = False
guess = (0.2, 0.5, -1)
# An illustration of identification with basin_hopping. This slows down inference, since the optimization is repeated 1 + niter times
def _callable(x, f, accept):
    print(x, f, accept)
inference_results = identify_actr_from_recall_sequence(
    recalls,
    deltatis,
    optim_kwargs=optim_kwargs,
    verbose=verbose,
    guess=guess,
    basin_hopping=True,
    basin_hopping_kwargs={"niter": 3, "callback": _callable},
)
# see the scipy.optimize.basinhopping documentation for the structure of the returned object
x = inference_results.lowest_optimization_result.x
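# best estimates, in the order (d, s, tau)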
# ================= computing Confidence Intervals and Ellipses
J = actr_observed_information_matrix(recalls, deltatis, *x)
# covariance matrix
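# (the inverse of the observed information approximates the asymptotic covariance of the MLE)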
covar = numpy.linalg.inv(J)
# Confidence intervals
cis = CI_asymptotical(covar, x)
# Confidence ellipses
# draw the three pairwise ellipses, since the full 3-D confidence ellipsoid would be hard to interpret
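# the loop pairs up (s, tau), (d, tau), and (d, s) with the matching 2x2 blocks of covar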
fig, axs = plt.subplots(nrows=1, ncols=3)
labels = [r"$d$", r"$s$", r"$\tau$"]
for n in range(3):
    i = (n + 1) % 3
    j = (n + 2) % 3
    if i > j:
        i, j = j, i
    _cov = numpy.array([[covar[i, i], covar[i, j]], [covar[i, j], covar[j, j]]])
    ax = confidence_ellipse((x[i], x[j]), _cov, ax=axs[n])
    ax.set_xlabel(f"{labels[i]}")
    ax.set_ylabel(f"{labels[j]}")
plt.tight_layout()
plt.show()