Hello, I am testing KISS-GP on input dimensions ranging from 2 to 4. You mention that the method performs well with 100,000 data points, but in practice I run into out-of-memory errors once the number of data points exceeds 20,000. My test machine has 190 GB of RAM, and I would like to know whether this behavior is expected. Below is an example with dimension d = 3 and n = 50,000.
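For scale: KISS-GP places inducing points on a grid with m points per dimension, so the grid has m^d points in total, and anything that materializes a dense matrix over that grid scales as (m^d)^2. A rough back-of-envelope, assuming the grid size is chosen so the grid has about n points in total (which is, as far as I can tell, what `gpytorch.utils.grid.choose_grid_size` does by default):

```python
# Back-of-envelope for the KISS-GP grid at d = 3, n = 50,000.
# Assumption: grid_size ~ n**(1/d), i.e. roughly n grid points in total.
n, d = 50_000, 3
grid_size = round(n ** (1.0 / d))      # ~37 points per dimension
total_grid_points = grid_size ** d     # 37**3 = 50,653 grid points

# A dense covariance over the grid (which SKI is supposed to avoid via
# Kronecker/Toeplitz structure) would already need:
dense_bytes = total_grid_points ** 2 * 4   # float32
print(f"{dense_bytes / 1e9:.1f} GB")       # ~10.3 GB
```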
```python
# required imports
import math
import torch
import gpytorch
import numpy as np
import pandas as pd

n = 50000  # number of points
d = 3      # dimension of each point
train_x = np.random.rand(n, d)
train_x = torch.from_numpy(train_x).float()

# True function is sin(2*pi*(x0 + x1 + ... + x_{d-1}))
train_y = torch.sin(train_x.sum(axis=1) * (2 * math.pi)) + torch.randn_like(train_x[:, 0]).mul(0.01)
```
```python
class GPRegressionModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super(GPRegressionModel, self).__init__(train_x, train_y, likelihood)
        # NOTE: the mean/covariance modules and forward() are missing from
        # the snippet as pasted; a complete KISS-GP version is sketched below.

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = GPRegressionModel(train_x, train_y, likelihood)
```
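Since the class body above is incomplete, here is a sketch of what a complete model looks like, following the pattern of the GPyTorch KISS-GP tutorial. The RBF base kernel, the `choose_grid_size` heuristic, and the class name `KISSGPRegressionModel` are assumptions, since the original kernel setup is not shown:

```python
# Sketch of a complete KISS-GP model, following the GPyTorch tutorial pattern.
# The RBF base kernel and choose_grid_size heuristic are assumptions.
class KISSGPRegressionModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        # Grid with roughly n points in total; note the grid has
        # grid_size ** d points, so memory grows quickly with d.
        grid_size = gpytorch.utils.grid.choose_grid_size(train_x)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.GridInterpolationKernel(
                gpytorch.kernels.RBFKernel(),
                grid_size=grid_size,
                num_dims=train_x.size(-1),
            )
        )

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
```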
```python
# Find optimal model hyperparameters
model.train()
likelihood.train()

# Use the Adam optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)  # Includes GaussianLikelihood parameters

# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)

training_iterations = 50

def train():
    for i in range(training_iterations):
        optimizer.zero_grad()
        print(i)
        output = model(train_x)
        loss = -mll(output, train_y)
        loss.backward()
        optimizer.step()

train()
```
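For completeness, prediction with the trained model would follow the standard GPyTorch evaluation pattern (the test points below are made up):

```python
# Standard GPyTorch evaluation pattern; test_x is hypothetical test data.
model.eval()
likelihood.eval()
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    test_x = torch.rand(1000, d)
    observed_pred = likelihood(model(test_x))
    mean = observed_pred.mean
    lower, upper = observed_pred.confidence_region()
```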