forked from open-mmlab/mmdetection3d
Add Autoware compatibility #1
Merged
8 commits:
ff616ff  feat: use project-style (kaancolak)
840ae1d  feat: rm create gt dataset (kaancolak)
37fc84d  Update projects/AutowareCenterPoint/datasets/tier4_dataset.py (kaancolak)
0238da8  feat: change Tier4 dataset name (kaancolak)
934d97f  fix: refactor repo (kaancolak)
90d15e9  fix: refactor files (kaancolak)
d167d59  feat: add test, docker file and readme (kaancolak)
efa2c8a  fix: fix flake (kaancolak)
@@ -0,0 +1,3 @@
from .pillar_encoder_autoware import PillarFeatureNetAutoware

__all__ = ['PillarFeatureNetAutoware']
projects/AutowareCenterPoint/centerpoint/pillar_encoder_autoware.py (165 additions, 0 deletions)
@@ -0,0 +1,165 @@
from typing import Optional, Tuple

import torch
from mmcv.cnn import build_norm_layer
from mmcv.ops import DynamicScatter
from torch import Tensor, nn

from mmdet3d.registry import MODELS
from mmdet3d.models.voxel_encoders.utils import PFNLayer, get_paddings_indicator


@MODELS.register_module()
class PillarFeatureNetAutoware(nn.Module):
    """Pillar Feature Net.

    The network prepares the pillar features and performs forward pass
    through PFNLayers.

    Args:
        in_channels (int, optional): Number of input features,
            either x, y, z or x, y, z, r. Defaults to 4.
        feat_channels (tuple, optional): Number of features in each of the
            N PFNLayers. Defaults to (64, ).
        with_distance (bool, optional): Whether to include Euclidean distance
            to points. Defaults to False.
        with_cluster_center (bool, optional): Whether to append the offset of
            each point to its cluster (mean) center. Defaults to True.
        with_voxel_center (bool, optional): Whether to append the offset of
            each point to its voxel (pillar) center. Defaults to True.
        voxel_size (tuple[float], optional): Size of voxels, only utilize x
            and y size. Defaults to (0.2, 0.2, 4).
        point_cloud_range (tuple[float], optional): Point cloud range, only
            utilizes x and y min. Defaults to (0, -40, -3, 70.4, 40, 1).
        norm_cfg (dict, optional): Config of the normalization layer.
            Defaults to dict(type='BN1d', eps=1e-3, momentum=0.01).
        mode (str, optional): The mode to gather point features. Options are
            'max' or 'avg'. Defaults to 'max'.
        legacy (bool, optional): Whether to use the new behavior or
            the original behavior. Defaults to True.
        use_voxel_center_z (bool, optional): Whether to append the z offset
            to the voxel center. Autoware's CenterPoint TensorRT encoder
            does not use it. Defaults to True.
    """

    def __init__(self,
                 in_channels: Optional[int] = 4,
                 feat_channels: Optional[tuple] = (64, ),
                 with_distance: Optional[bool] = False,
                 with_cluster_center: Optional[bool] = True,
                 with_voxel_center: Optional[bool] = True,
                 voxel_size: Optional[Tuple[float]] = (0.2, 0.2, 4),
                 point_cloud_range: Optional[Tuple[float]] = (0, -40, -3,
                                                              70.4, 40, 1),
                 norm_cfg: Optional[dict] = dict(
                     type='BN1d', eps=1e-3, momentum=0.01),
                 mode: Optional[str] = 'max',
                 legacy: Optional[bool] = True,
                 use_voxel_center_z: Optional[bool] = True):
        super(PillarFeatureNetAutoware, self).__init__()
        assert len(feat_channels) > 0
        self.legacy = legacy
        self.use_voxel_center_z = use_voxel_center_z
        if with_cluster_center:
            in_channels += 3
        if with_voxel_center:
            in_channels += 2
            if self.use_voxel_center_z:
                in_channels += 1
        if with_distance:
            in_channels += 1
        self._with_distance = with_distance
        self._with_cluster_center = with_cluster_center
        self._with_voxel_center = with_voxel_center
        # Create PillarFeatureNet layers
        self.in_channels = in_channels
        feat_channels = [in_channels] + list(feat_channels)
        pfn_layers = []
        for i in range(len(feat_channels) - 1):
            in_filters = feat_channels[i]
            out_filters = feat_channels[i + 1]
            if i < len(feat_channels) - 2:
                last_layer = False
            else:
                last_layer = True
            pfn_layers.append(
                PFNLayer(
                    in_filters,
                    out_filters,
                    norm_cfg=norm_cfg,
                    last_layer=last_layer,
                    mode=mode))
        self.pfn_layers = nn.ModuleList(pfn_layers)

        # Need pillar (voxel) size and x/y offset in order to calculate offset
        self.vx = voxel_size[0]
        self.vy = voxel_size[1]
        self.vz = voxel_size[2]
        self.x_offset = self.vx / 2 + point_cloud_range[0]
        self.y_offset = self.vy / 2 + point_cloud_range[1]
        self.z_offset = self.vz / 2 + point_cloud_range[2]
        self.point_cloud_range = point_cloud_range

    def forward(self, features: Tensor, num_points: Tensor, coors: Tensor,
                *args, **kwargs) -> Tensor:
        """Forward function.

        Args:
            features (torch.Tensor): Point features or raw points in shape
                (N, M, C).
            num_points (torch.Tensor): Number of points in each pillar.
            coors (torch.Tensor): Coordinates of each voxel.

        Returns:
            torch.Tensor: Features of pillars.
        """
        features_ls = [features]
        # Find distance of x, y, and z from cluster center
        if self._with_cluster_center:
            points_mean = features[:, :, :3].sum(
                dim=1, keepdim=True) / num_points.type_as(features).view(
                    -1, 1, 1)
            f_cluster = features[:, :, :3] - points_mean
            features_ls.append(f_cluster)

        # Find distance of x, y, and z from pillar center
        dtype = features.dtype
        if self._with_voxel_center:
            center_feature_size = 3 if self.use_voxel_center_z else 2
            if not self.legacy:
                f_center = torch.zeros_like(
                    features[:, :, :center_feature_size])
                f_center[:, :, 0] = features[:, :, 0] - (
                    coors[:, 3].to(dtype).unsqueeze(1) * self.vx +
                    self.x_offset)
                f_center[:, :, 1] = features[:, :, 1] - (
                    coors[:, 2].to(dtype).unsqueeze(1) * self.vy +
                    self.y_offset)
                if self.use_voxel_center_z:
                    f_center[:, :, 2] = features[:, :, 2] - (
                        coors[:, 1].to(dtype).unsqueeze(1) * self.vz +
                        self.z_offset)
            else:
                f_center = features[:, :, :center_feature_size]
                f_center[:, :, 0] = f_center[:, :, 0] - (
                    coors[:, 3].type_as(features).unsqueeze(1) * self.vx +
                    self.x_offset)
                f_center[:, :, 1] = f_center[:, :, 1] - (
                    coors[:, 2].type_as(features).unsqueeze(1) * self.vy +
                    self.y_offset)
                if self.use_voxel_center_z:
                    f_center[:, :, 2] = f_center[:, :, 2] - (
                        coors[:, 1].type_as(features).unsqueeze(1) * self.vz +
                        self.z_offset)
            features_ls.append(f_center)

        if self._with_distance:
            points_dist = torch.norm(features[:, :, :3], 2, 2, keepdim=True)
            features_ls.append(points_dist)

        # Combine together feature decorations
        features = torch.cat(features_ls, dim=-1)
        # The feature decorations were calculated without regard to whether
        # pillar was empty. Need to ensure that
        # empty pillars remain set to zeros.
        voxel_count = features.shape[1]
        mask = get_paddings_indicator(num_points, voxel_count, axis=0)
        mask = torch.unsqueeze(mask, -1).type_as(features)
        features *= mask

        for pfn in self.pfn_layers:
            features = pfn(features, num_points)

        return features.squeeze(1)
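Because the class is registered through MODELS.register_module(), it can be selected by name from a model config. Below is a minimal sketch assuming standard mmdetection3d CenterPoint config keys; the key names and values are illustrative assumptions, not taken from this PR.

```python
# Hypothetical excerpt of a CenterPoint model config: the registered
# PillarFeatureNetAutoware replaces the stock pillar encoder, with the
# voxel-center z feature disabled to match Autoware's TensorRT encoder.
model = dict(
    pts_voxel_encoder=dict(
        type='PillarFeatureNetAutoware',
        in_channels=4,
        feat_channels=(32, 32),
        with_distance=False,
        use_voxel_center_z=False,  # Autoware's encoder omits the z offset
        voxel_size=(0.32, 0.32, 8.0),
        point_cloud_range=(-76.8, -76.8, -3.0, 76.8, 76.8, 5.0)))
```

Setting use_voxel_center_z=False keeps the decorated feature layout aligned with Autoware's TensorRT pillar encoder, which is the point of this customized class.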
Shin-kyoto:
@kaancolak
This function is very long, and it is really difficult for a reviewer to check. So could you cover this function with a test?
kaancolak:
Actually, this function is hard copied from the original repository; it just contains a small adjustment to make it compatible with Autoware's CenterPoint TensorRT implementation. Autoware's implementation doesn't take the voxel center z as an encoder input, but the original mmdetection3d implementation does.
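To make the effect of that adjustment concrete, here is a small illustrative calculation of the per-point feature width fed to the first PFNLayer, derived from the __init__ logic above with the default decorations (the numbers are not quoted from the PR):

```python
# Decorated per-point feature width for the default settings:
raw = 4                     # x, y, z, reflectance
cluster_offset = 3          # offset to the cluster (mean) center
voxel_offset_autoware = 2   # x/y offset to the pillar center (use_voxel_center_z=False)
voxel_offset_mmdet3d = 3    # x/y/z offset, as in the original mmdetection3d encoder

print(raw + cluster_offset + voxel_offset_autoware)  # 9 channels, Autoware-compatible
print(raw + cluster_offset + voxel_offset_mmdet3d)   # 10 channels, upstream default
```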
Shin-kyoto:
@kaancolak
Thank you for the quick response! This repository is open-source and will receive PRs from various contributors. To ease the burden on reviewers, adding tests seems to be the simplest solution for the future. What do you think? It's also fine to include tests that are already present in the original code.
kaancolak:
@Shin-kyoto -san, thank you.
IMO, we can add tests for customized operations like "autoware_voxel_encoder", the "T4Dataset" operations, etc.; the original parts of the implementation have already been checked by the tests in the mmdet main repository.
Shin-kyoto:
@kaancolak
I also think that we can add tests for customized operations. PillarFeatureNetAutoware is also a customized class, and a test will be really helpful when we review the PR.
kaancolak:
Hi @Shin-kyoto -san, I added tests for the T4 dataset and for PillarFeatureNetAutoware, the base class of autoware_voxel_encoder.
You can run them with:
cd mmdetection3d/ && pytest -s projects/AutowareCenterPoint/tests/
Relevant files:
https://github.com/autowarefoundation/mmdetection3d/pull/1/files#diff-0b5c1ac645064857b8ac247c5d193291b74c6130bfdfa4c22aed90d2bd0df58a
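As a rough idea of what such a test can look like, here is a minimal pytest-style sketch for the customized encoder; it is written for illustration and is not one of the files added in this PR:

```python
import torch

from projects.AutowareCenterPoint.centerpoint.pillar_encoder_autoware import \
    PillarFeatureNetAutoware


def test_pillar_feature_net_autoware_shape():
    # Smoke test: 8 pillars, up to 16 points each, 4 raw point features.
    encoder = PillarFeatureNetAutoware(
        in_channels=4, feat_channels=(64, ), use_voxel_center_z=False)
    features = torch.rand(8, 16, 4)             # (num_pillars, max_points, channels)
    num_points = torch.randint(1, 16, (8, ))    # valid points per pillar
    coors = torch.randint(0, 50, (8, 4)).int()  # (batch_idx, z, y, x) pillar coords
    out = encoder(features, num_points, coors)
    assert out.shape == (8, 64)
```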