About regression loss for steering prediction #4

Open · godspeed1989 opened this issue Nov 11, 2020 · 2 comments

godspeed1989 commented Nov 11, 2020

Thanks for your great work.
In your paper:
[screenshot of the loss equation for steering prediction from the paper]
To train DroNet for steering prediction, did you just use MSE for supervision?

Could you paste the part of the code that computes the training loss and the evaluation metrics?
The released code only contains SoftmaxHeteroscedasticLoss for classification on CIFAR.

mattiasegu commented Nov 21, 2020

Hi @godspeed1989

I am glad to know that you appreciate our work!

The loss used to train our network is literally torch.nn.functional.mse_loss(outputs, targets), plus L2 regularization on the model weights.
To evaluate the network, we use RMSE, EVA (explained variance), and NLL (negative log-likelihood).
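Concretely, a minimal sketch of one training step (the model and tensors below are placeholders, not code from this repo; the optimizer's weight_decay supplies the L2 term):

```python
import torch
import torch.nn.functional as F

# Placeholder model and data: any regressor that outputs one
# steering value per sample would do.
model = torch.nn.Linear(64, 1)
inputs = torch.randn(8, 64)
targets = torch.randn(8, 1)

# weight_decay adds the L2 penalty on the model weights.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)

optimizer.zero_grad()
outputs = model(inputs)
loss = F.mse_loss(outputs, targets)  # plain MSE supervision
loss.backward()
optimizer.step()
```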

EVA:

```python
import numpy as np

def explained_variance_1d(ypred, y):
    """
    Explained variance: 1 - Var[y - ypred] / Var[y].
    https://www.quora.com/What-is-the-meaning-proportion-of-variance-explained-in-linear-regression
    """
    assert y.ndim == 1 and ypred.ndim == 1
    vary = np.var(y)
    # Undefined when the targets have zero variance.
    return np.nan if vary == 0 else 1 - np.var(y - ypred) / vary

def compute_explained_variance(predictions, real_values):
    """
    Computes the explained variance of the predictions for each
    steering angle and averages them.
    """
    assert np.all(predictions.shape == real_values.shape)
    ex_variance = explained_variance_1d(predictions, real_values)
    print("EVA = {}".format(ex_variance))
    return ex_variance
```

RMSE:

```python
def compute_rmse(predictions, real_values):
    assert np.all(predictions.shape == real_values.shape)
    # Root of the mean squared error between predictions and targets.
    mse = np.mean(np.square(predictions - real_values))
    rmse = np.sqrt(mse)
    print("RMSE = {}".format(rmse))
    return rmse
```

Log-Likelihood:

```python
import torch

def log_likelihood(y_pred, y_true, sigma):
    y_true = torch.Tensor(y_true)
    y_pred = torch.Tensor(y_pred)
    sigma = torch.Tensor(sigma)

    # Mean Gaussian log-density of the targets under the predicted
    # mean and standard deviation.
    dist = torch.distributions.normal.Normal(loc=y_pred, scale=sigma)
    ll = torch.mean(dist.log_prob(y_true))
    # np.asscalar is deprecated; .item() extracts the Python float.
    return ll.item()
```
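For reference, a quick usage sketch on made-up arrays (in practice the sigmas come from the ADF variance output):

```python
import numpy as np

# Made-up predictions, targets, and predicted standard deviations.
preds = np.array([0.10, -0.20, 0.05])
targets = np.array([0.12, -0.25, 0.00])
sigmas = np.array([0.05, 0.08, 0.04])

compute_explained_variance(preds, targets)     # prints EVA
compute_rmse(preds, targets)                   # prints RMSE
print(log_likelihood(preds, targets, sigmas))  # mean Gaussian log-likelihood
```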

I hope these functions can be helpful!
Cheers


godspeed1989 commented Nov 23, 2020

Hi @mattiasegu
Thanks for your reply.
One more question ;)
I am still confused about how we can estimate the aleatoric uncertainty (i.e., the output variance) without specific supervision.
The output variance is used in SoftmaxHeteroscedasticLoss.

In my mind, steering prediction is a regression problem.
ADF's original paper, Lightweight Probabilistic Deep Networks, gives a probabilistic analog for regression that minimizes the negative log-likelihood:
[screenshot of the Gaussian negative log-likelihood objective from the paper]
So why can't we add this as part of the learning target?
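Something like this minimal sketch is what I have in mind, assuming the network outputs a mean and a log-variance per sample (the names and shapes here are mine, just for illustration):

```python
import torch

def heteroscedastic_nll(mean, log_var, target):
    # Gaussian negative log-likelihood, up to an additive constant:
    # 0.5 * ((target - mean)^2 / var + log(var)).
    # Predicting the log-variance keeps the variance positive and
    # the optimization numerically stable.
    var = torch.exp(log_var)
    return torch.mean(0.5 * ((target - mean) ** 2 / var + log_var))

# Two-headed output: predicted mean and log-variance.
mean = torch.randn(8, 1, requires_grad=True)
log_var = torch.zeros(8, 1, requires_grad=True)
target = torch.randn(8, 1)

loss = heteroscedastic_nll(mean, log_var, target)
loss.backward()
```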
