The Inference Engines

Agents should be able to learn (infer parameters from observed data) and to adapt (change policy parameters based on observed data). As with observation engines, there may be a cost associated with making inferences:

  • Making an inference can be costly in terms of time.

  • Inferring may be rewarding, for example because it is enjoyable.

CoopIHC provides a generic object called an inference engine to update internal states from observations. Despite the name, these engines may use any mechanism that updates the internal state, not just statistical inference. To create a new inference engine, you can base it on an existing engine or subclass the BaseInferenceEngine.

Subclassing BaseInferenceEngine

Essentially, the BaseInferenceEngine provides a simple first-in-first-out (FIFO) buffer that stores observations. When subclassing BaseInferenceEngine, you simply have to redefine the infer method (by default, no inference is produced). An example is provided below, where the engine stores the last 5 observations.

from coopihc import BaseInferenceEngine


class ExampleInferenceEngine(BaseInferenceEngine):
    """ExampleInferenceEngine

    Example class

    """

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def infer(self, agent_observation=None):
        """infer

        Do nothing. Same behavior as parent ``BaseInferenceEngine``

        :return: (new internal state, reward)
        :rtype: tuple(:py:class:`State<coopihc.base.State.State>`, float)
        """
        if agent_observation is None:
            # Fall back to the latest buffered observation
            agent_observation = self.observation

        agent_state = self.state
        reward = 0
        # Do something, e.g. update agent_state from agent_observation
        # agent_state = ...
        # reward = ...

        return agent_state, reward


ExampleInferenceEngine(buffer_depth=5)
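
To go one step further, the hypothetical sketch below (not part of the library) redefines infer to actually look at the buffered data. It assumes the FIFO buffer described above is exposed as self.buffer, with the most recent observation last, and charges a small illustrative time cost per buffered observation.

class CountingInferenceEngine(BaseInferenceEngine):
    """Hypothetical engine that counts the buffered observations."""

    def infer(self, agent_observation=None):
        # Assumption: the FIFO buffer is exposed as ``self.buffer``
        # (most recent observation last); it is None before any
        # observation has been added.
        n_buffered = len(self.buffer) if self.buffer is not None else 0

        agent_state = self.state
        # A real engine would update agent_state here, e.g. by averaging
        # a noisy substate over the buffered observations.

        # Illustrative negative reward: a time cost that grows with the
        # amount of buffered data that was processed.
        reward = -0.1 * n_buffered

        return agent_state, reward


CountingInferenceEngine(buffer_depth=5)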

Combining Engines – CascadedInferenceEngine

It is sometimes useful to use several inference engines in a row, for example because you want to use two engines that each target a different substate.

For this case, you can use the CascadedInferenceEngine:

# `perceptualnoise` and the two component engines are assumed to be
# defined and imported elsewhere.
first_inference_engine = ProvideLikelihoodInferenceEngine(perceptualnoise)
second_inference_engine = LinearGaussianContinuous()
inference_engine = CascadedInferenceEngine(
    [first_inference_engine, second_inference_engine]
)

Available Inference Engines

GoalInferenceWithUserPolicyGiven

Warning

The example below is outdated.

An inference engine used by an assistant to infer the 'goal' of a user. The inference is based on a model of the user's policy, which has to be provided to this engine.

Bayesian updating in the discrete case: for each candidate target \(\theta\), the engine computes the associated posterior probability, given an observation \(x\) and the last user action \(y\):

\[P(\Theta = \theta | X=x, Y=y) = \frac{p(Y = y | \Theta = \theta, X=x) \, P(\Theta = \theta)}{\sum_{\theta'} p(Y=y|\Theta = \theta', X=x) \, P(\Theta = \theta')}.\]
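
For intuition, this update multiplies the current prior over targets by the likelihood of the observed user action and renormalizes. The following minimal numpy sketch (standalone, with made-up numbers; it does not use the CoopIHC engine) spells out the computation:

import numpy

# Hypothetical likelihood p(Y = y | Theta = theta, X = x) of the observed
# user action, one entry per candidate target theta.
likelihood = numpy.array([0.7, 0.2, 0.1])

# Current prior P(Theta = theta) over the candidate targets.
prior = numpy.array([1 / 3, 1 / 3, 1 / 3])

# Bayesian update: multiply by the likelihood and renormalize.
posterior = likelihood * prior
posterior = posterior / posterior.sum()

print(posterior)  # [0.7 0.2 0.1] for this uniform prior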

This inference engine expects the likelihood model \(p(Y = y | \Theta = \theta, X=x)\) to be supplied:

# Define the likelihood model for the user policy
# user_policy_model = XXX

inference_engine = GoalInferenceWithUserPolicyGiven()
# Attach it to the engine
inference_engine._attach_policy(user_policy_model)

It also expects that the set of \(\theta\)’s is supplied:

# Build the set of candidate goals (one theta per target). This snippet is
# meant to run inside the assistant, hence the references to self.bundle.
set_theta = [
    {
        ("user_state", "goal"): StateElement(
            t,
            discrete_space(numpy.array(list(range(self.bundle.task.gridsize)))),
        )
    }
    for t in self.bundle.task.state["targets"]
]

inference_engine.attach_set_theta(set_theta)

You can find a full worked-out example in CoopIHC-Zoo’s pointing module.

LinearGaussianContinuous

An inference engine that maintains a continuous Gaussian belief. It assumes a Gaussian prior and a Gaussian likelihood.

  • Expectations of the engine

    This inference engine expects the agent to have in its internal state:

    • The mean matrix of the belief, stored as ‘belief-mu’

    • The covariance matrix of the belief, stored as ‘belief-sigma’

    • The new observation, stored as ‘y’

    • The covariance matrix associated with the observation, stored as ‘Sigma_0’

  • Inference

    This engine uses the latest observation to update the belief (which has been computed from the previous observations).

    To do so, a noisy Gaussian observation model is assumed, where \(x\) is the latent quantity over which the belief is held:

    \[\begin{align} p(y|x) \sim \mathcal{N}(x, \Sigma_0) \end{align}\]

    If the initial prior (belief) is Gaussian as well, then the posterior remains Gaussian, because only linear operations are applied and Gaussianity is preserved under linear operations. So the posterior after t-1 observations has the following form, where \((\mu(t-1), \Sigma(t-1))\) are respectively the mean and covariance matrices of the belief.

    \[\begin{align} p(x(t-1)) \sim \mathcal{N}(\mu(t-1), \Sigma(t-1)) \end{align}\]

    On each new observation, the mean and covariance matrices are updated like so (a standalone numerical sketch of this update is given after this list):

    \[\begin{split}\begin{align} p(x(t) | y, x(t-1)) \sim \mathcal{N}\left(\Sigma(t) \left[ \Sigma_0^{-1}y + \Sigma(t-1)^{-1} \mu(t-1) \right], \Sigma(t)\right) \\ \Sigma(t) = (\Sigma_0^{-1} + \Sigma(t-1)^{-1})^{-1} \end{align}\end{split}\]
  • Render

    In 'plot' mode, this engine plots the belief means on the task axis and the belief covariances on the agent axis, drawn as confidence intervals (bars in 1D, ellipses in 2D).

  • Example files

    coopihczoo.eye.users
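
The numpy sketch below (standalone, with made-up matrices; it does not use the CoopIHC engine) spells out the belief update from the Inference item above: Sigma_0 is the observation noise covariance, (mu, Sigma) the current belief, and y the new observation.

import numpy

# Made-up current belief and observation for a 2D latent quantity.
mu = numpy.array([0.0, 0.0])      # belief mean, mu(t-1)
Sigma = numpy.eye(2)              # belief covariance, Sigma(t-1)
Sigma_0 = 0.5 * numpy.eye(2)      # observation noise covariance
y = numpy.array([1.0, -0.5])      # new observation

# Posterior covariance: Sigma(t) = (Sigma_0^{-1} + Sigma(t-1)^{-1})^{-1}
Sigma_new = numpy.linalg.inv(numpy.linalg.inv(Sigma_0) + numpy.linalg.inv(Sigma))

# Posterior mean: Sigma(t) (Sigma_0^{-1} y + Sigma(t-1)^{-1} mu(t-1))
mu_new = Sigma_new @ (numpy.linalg.inv(Sigma_0) @ y + numpy.linalg.inv(Sigma) @ mu)

print(mu_new, Sigma_new)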

ContinuousKalmanUpdate

Implements the LQG update. Not documented yet; see the API Reference.