
When to release the code? #1

Open
sorrowyn opened this issue Jul 21, 2023 · 18 comments

Comments

@sorrowyn

sorrowyn commented Jul 21, 2023

Congratulations on having this work accepted to ICCV 2023.
When will the code be released?

@attn4det
Contributor

Thanks for your interest in our Group DETR. We plan to release the code and models before October.

Since Group DETR is simple, there are already some implementations of it, for example in the detrex codebase and the ConditionalDETR codebase.

Feel free to let us know if you have further questions.

@sorrowyn
Author

I think the method you propose is simple and effective.
Thank you for getting back to me.

@rayleizhu

rayleizhu commented Jul 27, 2023

I tried to reproduce the Group DETR results with detrex, and below is what I got, which is significantly lower than what the paper reports (Table 1). The experiment log can be found here.

[screenshot: reproduced Group DETR results in detrex, noticeably lower than Table 1 of the paper]

What's the problem here? How can I reproduce the results reported in the paper?

Related issue: IDEA-Research/detrex#287 (comment)

@attn4det
Contributor

attn4det commented Aug 1, 2023

Hi @rayleizhu ,

Thanks for your interest in Group DETR.

For now, you can try the implementation in the ConditionalDETR repo. It should give results similar to those reported in the paper.

@HITerStudy

@attn4det How do you combine group_detr with DINO, and how do you handle the dn and mixed-query-selection parts? Could you share some details? Thanks!

@attn4det
Contributor

@attn4det How do you combine group_detr with DINO, and how do you handle the dn and mixed-query-selection parts? Could you share some details? Thanks!

Sorry for the late reply. To make the object queries in the multiple groups similar to each other, we construct multiple pairs of classification and regression prediction heads in the first stage; each pair provides the initialization for the object queries of the corresponding group. We also add the dn part to each group, as done in DINO.
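For concreteness, a minimal sketch of what such a per-group first stage could look like, assuming a DINO-style two-stage pipeline; all names here (GroupedTwoStageInit, enc_memory, enc_anchors, etc.) are illustrative and not from the official code:

import torch
import torch.nn as nn

class GroupedTwoStageInit(nn.Module):
    """One (classification, regression) head pair per group; each pair picks the
    top-k encoder proposals that initialize the object queries of its own group."""
    def __init__(self, embed_dim, num_classes, num_groups, num_queries):
        super().__init__()
        self.num_queries = num_queries
        self.class_heads = nn.ModuleList(nn.Linear(embed_dim, num_classes) for _ in range(num_groups))
        self.bbox_heads = nn.ModuleList(nn.Linear(embed_dim, 4) for _ in range(num_groups))

    def forward(self, enc_memory, enc_anchors):
        # enc_memory: (B, N, C) encoder tokens, enc_anchors: (B, N, 4) proposal boxes
        refs, contents = [], []
        for cls_head, box_head in zip(self.class_heads, self.bbox_heads):
            scores = cls_head(enc_memory).max(-1).values          # (B, N)
            boxes = enc_anchors + box_head(enc_memory)            # (B, N, 4)
            topk = scores.topk(self.num_queries, dim=1).indices   # (B, num_queries)
            refs.append(torch.gather(boxes, 1, topk.unsqueeze(-1).expand(-1, -1, 4)))
            contents.append(torch.gather(enc_memory, 1, topk.unsqueeze(-1).expand(-1, -1, enc_memory.size(-1))))
        # concatenate the groups along the query dimension: (B, num_groups * num_queries, ...)
        return torch.cat(refs, dim=1), torch.cat(contents, dim=1)

Each group would then also get its own DINO-style denoising queries, and a self-attention mask keeps the groups (and the dn part inside each group) from attending to one another during training.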

@JunrQ

JunrQ commented Oct 12, 2023

It's already October; are you still planning to release the code?

@Mollylulu

Thanks for your work. When can we expect the code release? You mentioned it would be released before last month, but it is still pending. Thanks!

@flyingbird9

When will the code of the DINO-based implementation be released?

@flyingbird9

How do you combine group_detr with DINO, and how do you handle the denoising part and the query part of the loss function? Could you share some details? Thanks!

@rayleizhu

rayleizhu commented Dec 9, 2023 via email

@dt-3t

dt-3t commented Mar 5, 2024

@attn4det How do you combine group_detr with DINO, and how do you handle the dn and mixed-query-selection parts? Could you share some details? Thanks!

Sorry for the late reply. To make the object queries in the multiple groups similar to each other, we construct multiple pairs of classification and regression prediction heads in the first stage; each pair provides the initialization for the object queries of the corresponding group. We also add the dn part to each group, as done in DINO.

If only one pair of classification and regression heads is used, without multiple groups (as in the original DINO), what would happen?

@Rbrq03

Rbrq03 commented Sep 19, 2024

Thinking about it more, the detrex implementation at the time over-complicated Group DETR. It is actually quite simple to implement: before adding noise to the ground truth, expand/repeat the feature map and the ground truth together and reshape them into the batch dimension; it only takes a few lines of code. The core flow is roughly:

feat_map = encoder(input_images)
feat_map = feat_map.unsqueeze(1).expand(-1, num_groups, -1, -1, -1).flatten(0, 1)
target = target.unsqueeze(1).expand(-1, num_groups, -1).flatten(0, 1)
dndetr_steps()  # add noise, forward, matching, etc.

Great idea! But don't we also need some handling of the attention mask?
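If the groups are folded into the batch dimension as in the quoted snippet, no extra mask should be needed, since the groups never share a self-attention call. If instead the groups are concatenated along the query dimension, a block-diagonal self-attention mask is one common way to keep them independent. A rough sketch, assuming a boolean mask where True means "blocked"; the function name and sizes are only illustrative:

import torch

def group_block_mask(num_groups: int, queries_per_group: int) -> torch.Tensor:
    # True entries are blocked; queries may only attend within their own group.
    # The usual DN-DETR/DINO denoising mask is still applied inside each group on top of this.
    total = num_groups * queries_per_group
    mask = torch.ones(total, total, dtype=torch.bool)
    for g in range(num_groups):
        s = g * queries_per_group
        mask[s:s + queries_per_group, s:s + queries_per_group] = False
    return mask

# e.g. 4 groups, each with 300 matching + 200 denoising queries
attn_mask = group_block_mask(num_groups=4, queries_per_group=500)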

@KevenGe

KevenGe commented Dec 18, 2024

When will the Group DETR code be released?

@sorrowyn
Author

When will the Group DETR code be released?

My guess is that it will not be released. For now, you can take a look at this project:
https://github.com/IDEA-Research/detrex/blob/main/projects/group_detr/modeling/group_detr_transformer.py

@KevenGe

KevenGe commented Dec 19, 2024

When will the Group DETR code be released?

My guess is that it will not be released. For now, you can take a look at this project: https://github.com/IDEA-Research/detrex/blob/main/projects/group_detr/modeling/group_detr_transformer.py

Thank you very much for your help!

@Anchor1566

Thinking about it more, the detrex implementation at the time over-complicated Group DETR. It is actually quite simple to implement: before adding noise to the ground truth, expand/repeat the feature map and the ground truth together and reshape them into the batch dimension; it only takes a few lines of code. The core flow is roughly:

feat_map = encoder(input_images)
feat_map = feat_map.unsqueeze(1).expand(-1, num_groups, -1, -1, -1).flatten(0, 1)
target = target.unsqueeze(1).expand(-1, num_groups, -1).flatten(0, 1)
dndetr_steps()  # add noise, forward, matching, etc.

Hi, regarding the Group + DINO implementation: I have not been able to get a good implementation; it drops about 4 AP. The model part is simple and there are reference examples, but I have not yet seen a DINO-group project, so I suspect the problem is in the criterion. Suppose num_groups is 4, so the total number of queries is (300 + 200 dn) x 4. I have tried two naive ideas:
1. Apply one-to-many (O2M) matching to the 200 dn x 4 queries as well: expand cdn_matched_indices / dn_positive_idx[i] / gt_idx by num_groups, so that indices contains num_groups sets while the targets stay unchanged, then compute the loss with O2M. This causes a significant drop of more than 20 points.
2. Compute the loss per group, which causes a drop of about 4 AP.
Looking forward to your reply, and I wish you all the best in your research and life.

# denoising losses, computed per group (idea 2 above)
indices = self.get_cdn_matched_indices(outputs['dn_meta'], targets)
dn_num_boxes = (num_boxes // self.num_groups) * outputs['dn_meta']['dn_num_group']
for layer_idx, aux_outputs in enumerate(outputs['dn_aux_outputs']):
    # split the dn outputs of this decoder layer into num_groups groups
    aux_outputs_list = self._split_outputs(aux_outputs, outputs['dn_meta']['dn_num_split'][0], self.num_groups)
    for j in range(self.num_groups):
        for loss in self.losses:
            meta = self.get_loss_meta_info(loss, aux_outputs_list[j], targets, indices)
            l_dict = self.get_loss(loss, aux_outputs_list[j], targets, indices, dn_num_boxes, **meta)
            l_dict = {k: l_dict[k] * self.weight_dict[k] for k in l_dict if k in self.weight_dict}
            # key on both the decoder layer and the group so entries are not overwritten
            l_dict = {k + f'_dn_{layer_idx}_g{j}': v for k, v in l_dict.items()}
            losses.update(l_dict)
