I am writing a simple example of a GP with an intrinsic coregionalization kernel for two tasks. I understand that the model computes the posterior mean and variance for each task separately, according to how the tasks are indexed in the variable X when the model is fitted. For example, my tasks are indexed 0 and 1, so calling m.predict(np.array([[3, 0]])) should return the posterior mean and variance of task 0 at input 3. However, as you can see in the following code, it returns a posterior mean and variance even when I pass an index that is not associated with any task (i.e., -1 and -2). Where do those posterior means and variances come from?
import GPy
import numpy as np
import matplotlib.pyplot as plt
X = np.array([[ 1, 0], [ 2, 0], [ 3, 0], [ 4, 0], [ 1, 1], [ 2, 1], [ 3, 1], [ 4, 1]])
Y = np.array([[-0.23632664], [0.67736248], [-1.71536046], [2.01622779],
[3.51775273], [-2.78539508], [2.19422611], [1.13307012]])
kernel = GPy.kern.RBF(input_dim=1, variance=1, lengthscale=1)
kernel_IC = kernel**GPy.kern.Coregionalize(input_dim=1, output_dim=2, rank=1)
m = GPy.models.GPRegression(X, Y, kernel_IC, normalizer=True)
m.Gaussian_noise.constrain_fixed(1)
# These are the posterior predictions for each task according to the provided input
mu_task_1, var_task_1 = m.predict(np.array([[ i, 0] for i in np.linspace(1, 4, 20) ]) )
mu_task_2, var_task_2 = m.predict(np.array([[ i, 1] for i in np.linspace(1, 4, 20) ]) )
# What is happening here, to which task do the posteriors mu and var belong?
mu_task_unknown_1, var_task_unknown_1 = m.predict(np.array([[ i, -1] for i in np.linspace(1, 4, 20) ]) )
mu_task_unknown_2, var_task_unknown_2 = m.predict(np.array([[ i, -2] for i in np.linspace(1, 4, 20) ]) )
plt.plot(np.linspace(1, 4, 20), mu_task_1, label='mu task 1', c='r')
plt.plot(np.linspace(1, 4, 20), mu_task_2, label='mu task 2', c='b')
plt.plot(np.linspace(1, 4, 20), mu_task_unknown_1, label='mu task ?1', c='y')
plt.plot(np.linspace(1, 4, 20), mu_task_unknown_2, label='mu task ?2', c='orange')
plt.legend()
plt.show()
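A plausible explanation, assuming GPy's Coregionalize kernel looks up its coregionalization matrix B = W Wᵀ + diag(κ) with plain NumPy integer indexing: negative indices are valid in NumPy and wrap around, so a "task" index of -1 would silently select the last task (task 1) and -2 would select task 0. The sketch below illustrates that wrapping on a toy B matrix with made-up values for W and kappa; the actual GPy indexing behavior is an assumption here, not confirmed from the source.

```python
import numpy as np

# Coregionalization matrix B = W W^T + diag(kappa) for two tasks
W = np.array([[0.5], [1.0]])   # rank-1 mixing weights (hypothetical values)
kappa = np.array([0.1, 0.2])   # per-task variances (hypothetical values)
B = W @ W.T + np.diag(kappa)

# NumPy integer indexing wraps negative indices: row/column -1 is the last
# task (task 1) and -2 is task 0, so no error is raised for "unknown" tasks.
assert B[-1, -1] == B[1, 1]
assert B[-2, -2] == B[0, 0]
```

If this assumption holds, the curves for task "-1" and "-2" in the plot above should coincide with those for tasks 1 and 0, respectively.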
Sorry, can't dive in that deep at the moment, but combining a vanilla GPy.models.GPRegression with a Coregionalize kernel seems suspicious to me, when there exists a GPy.models.GPCoregionalizedRegression version, as shown in GPy/examples/coregionalization_toy.py.