<!DOCTYPE html>
<html lang="en">
<head>
<title>VITA</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<link href="https://fonts.googleapis.com/css?family=B612+Mono|Cabin:400,700&display=swap" rel="stylesheet">
<link rel="stylesheet" href="fonts/icomoon/style.css">
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css"
integrity="sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh" crossorigin="anonymous">
<link rel="stylesheet" href="css/jquery-ui.css">
<link rel="stylesheet" href="css/owl.carousel.min.css">
<link rel="stylesheet" href="css/owl.theme.default.min.css">
<link rel="stylesheet" href="css/owl.theme.default.min.css">
<link rel="stylesheet" href="css/jquery.fancybox.min.css">
<link rel="stylesheet" href="fonts/flaticon/font/flaticon.css">
<link rel="stylesheet" href="css/aos.css">
<link href="css/jquery.mb.YTPlayer.min.css" media="all" rel="stylesheet" type="text/css">
<link rel="stylesheet" href="css/style.css">
<!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media queries -->
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/html5shiv/3.7.2/html5shiv.min.js"></script>
<script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
<![endif]-->
</head>
<body data-spy="scroll" data-target=".site-navbar-target" data-offset="300">
<div class="site-wrap">
<div class="site-mobile-menu site-navbar-target">
<div class="site-mobile-menu-header">
<div class="site-mobile-menu-close mt-3">
<span class="icon-close2 js-menu-toggle"></span>
</div>
</div>
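<!-- Left empty in the markup: the desktop menu below (.js-clone-nav) is presumably cloned into this container by the theme's JavaScript on small screens. -->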
<div class="site-mobile-menu-body"></div>
</div>
<div class="header-top">
<div class="container" style="padding:20px">
<div class="row align-items-center">
<!-- <div class="col-12 col-lg-6 d-flex"> -->
<img src="./logo.png" width="15%">
<a class="ml-auto site-logo">
  <b style="color: rgb(71, 71, 71)">V</b>isual <b style="color: rgb(71, 71, 71)">I</b>nformatics Group @ University of <b style="color: rgb(71, 71, 71)">T</b>exas at <b style="color: rgb(71, 71, 71)">A</b>ustin
</a>
<a href="#"
class="ml-auto d-inline-block d-lg-none site-menu-toggle js-menu-toggle text-black"><span
class="icon-menu h3"></span></a>
<!-- </div> -->
<!-- <div class="col-12 col-lg-6 ml-auto d-flex">
<div class="ml-md-auto top-social d-none d-lg-inline-block">
<a href="#" class="d-inline-block p-3"> </a>
<a href="#" class="d-inline-block p-3"> </a>
<a href="#" class="d-inline-block p-3"> </a>
</div>
</div> -->
<!-- <div class="col-6 d-block d-lg-none text-right">-->
</div>
</div>
</div>
<div class="site-navbar py-2 js-sticky-header site-navbar-target d-none pl-0 d-lg-block" role="banner">
<div class="container" style="padding-right=10%">
<div class="d-flex align-items-right">
<!-- <div class="mr-auto">
<a href="index.html">
<img src="./logo.png" width="10%"/>
  <b>V</b>isual <b>I</b>nformatics Group @ University of <b>T</b>exas at <b>A</b>ustin
</a>
</div> -->
<div class="ml-auto">
<nav class="site-navigation position-relative text-right" role="navigation">
<ul class="site-menu main-menu js-clone-nav mr-auto d-none pl-0 d-lg-block">
<li class="active">
<a href="index.html" class="nav-link text-right">Home</a>
</li>
<li>
<a href="research.html" class="nav-link text-left">PI & Research</a>
</li>
<li>
<a href="publication.html" class="nav-link text-left">Publication</a>
</li>
<li>
<a href="group.html" class="nav-link text-left">Group</a>
</li>
<li>
<a href="resource.html" class="nav-link text-left">Resource</a>
</li>
<li>
<a href="prospective_students.html" class="nav-link text-left">Opening</a>
</li>
<!-- <li class="nav-item dropdown">
<a class="nav-link dropdown-toggle" href="challenge.html" id="navbarDropdown" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false">
Challenge
</a>
<div class="dropdown-menu" aria-labelledby="navbarDropdown">
<a class="dropdown-item" href="challenge1.html">Tiny Object Detection Challenge</a>
<a class="dropdown-item" href="challenge2.html">Image Restoration for UDC Challenge</a>
</div>
</li>
<li>
<a href="callforpapers.html" class="nav-link text-left">Call for Papers</a>
</li>
<li>
<a href="speakers.html" class="nav-link text-left">Invited Speakers</a>
</li>
<li class="nav-item dropdown">
<a class="nav-link dropdown-toggle" href="challenge.html" id="navbarDropdown"
role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false">
Previous
</a>
<div class="dropdown-menu" aria-labelledby="navbarDropdown">
<a class="dropdown-item" href="https://yuqian2.wixsite.com/forlq">RLQ'19</a>
</div>
</li> -->
</ul>
</nav>
</div>
</div>
</div>
</div>
</div>
<div class="site-section">
<div class="container">
<div class="row">
<div class="col-lg-12">
<p>Our group actively publishes in the fields of machine learning, computer vision, and interdisciplinary data science. Below is a list of recent and selected papers. An asterisk (*) marks an author who is a VITA student or Dr. Wang's mentee. An up-to-date full paper list can be found <a href="https://scholar.google.com/citations?user=pxFyKAIAAAAJ&hl=en">here</a>.</p>
<div class="section-title" style="margin-bottom: 30px">
<h2>Journal Paper</h2>
</div>
<div class="trend-entry d-flex">
<div class="trend-contents">
<ul>
<li>E. Oikonomou, A. Vaid, G. Holste*, A. Coppi, R. McNamara, C. Baloescu, H. Krumholz, Z. Wang, D. Apakama, G. Nadkarni, R. Khera<br> <b style="color:rgb(71, 71, 71)">“Artificial intelligence-guided detection of under-recognized cardiomyopathies on point-of-care cardiac ultrasound: a multi-center study”</b><br>Lancet Digital Health, 2024. <a href="https://www.medrxiv.org/content/10.1101/2024.03.10.24304044v2">[Paper]</a> <a href="">[Code]</a></li>
<li>W. Zheng*, S. Sharan*, Z. Fan*, K. Wang*, Y. Xi*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Symbolic Visual Reinforcement Learning: A Scalable Framework with Object-Level Abstraction and Differentiable Expression Search”</b><br>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024. <a href="https://ieeexplore.ieee.org/abstract/document/10694733">[Paper]</a> <a href="https://github.com/VITA-Group/DiffSES">[Code]</a></li>
<li> H. Yang*, Y. Liang, X. Guo, L. Wu, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Pruning Before Training May Improve Generalization, Provably”</b><br> Journal of Machine Learning Research (JMLR), 2024. <a href="https://arxiv.org/abs/2301.00335">[Paper]</a> <a href="">[Code]</a></li>
<li> H. Yang*, Z. Jiang*, R. Zhang, Y. Liang, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Neural Networks with Sparse Activation Induced by Large Bias: Tighter Analysis with Bias-Generalized NTK”</b><br> Journal of Machine Learning Research (JMLR), 2024. <a href="https://www.jmlr.org/papers/v25/23-0831.html">[Paper]</a> <a href="">[Code]</a></li>
<li> D. Xu*, Y. Yuan, M. Mardani, S. Liu, J. Song, Z. Wang, and A. Vahdat<br> <b style="color:rgb(71, 71, 71)">“AGG: Amortized Generative 3D Gaussians for Single Image to 3D”</b><br>Transactions on Machine Learning Research (TMLR), 2024. <a href="https://arxiv.org/abs/2401.04099">[Paper]</a> <a href="https://ir1d.github.io/AGG/">[Code]</a></li>
<li> G. Holste*, M. Lin, R. Zhou, F. Wang, L. Liu, Q. Yan, S. Tassel, K. Kovacs, E. Chew, Z. Lu, Z. Wang, and Y. Peng<br> <b style="color:rgb(71, 71, 71)">“Harnessing the power of longitudinal medical imaging for eye disease prognosis using Transformer-based sequence modeling”</b><br>npj Digital Medicine, 2024. <a href="https://www.nature.com/articles/s41746-024-01207-4">[Paper]</a> <a href="">[Code]</a></li>
<li> G. Holste*, Y. Zhou, S. Wang, A. Jaiswal, M. Lin, S. Zhuge, Y. Yang, D. Kim, T. Nguyen-Mau, M. Tran, J. Jeong, W. Park, J. Ryu, F. Hong, A. Verma, Y. Yamagishi, C. Kim, H. Seo, M. Kang, L. Celi, Z. Lu, R. Summers, G. Shih, Z. Wang, and Y. Peng<br> <b style="color:rgb(71, 71, 71)">“Towards Long-tailed, Multi-label Disease Classification from Chest X-ray”</b><br>Medical Image Analysis, 2024. <a href="https://www.sciencedirect.com/science/article/abs/pii/S136184152400149X?CMX_ID=&SIS_ID=&dgcid=STMJ_219742_AUTH_SERV_PA&utm_acid=216299604&utm_campaign=STMJ_219742_AUTH_SERV_PA&utm_in=DM481041&utm_medium=email&utm_source=AC_">[Paper]</a> <a href="https://bionlplab.github.io/2024_MICCAI_CXRLT/">[Code]</a></li>
<li> G. Li, D. Hoang*, K. Bhardwaj, M. Lin, Z. Wang, and R. Marculescu<br> <b style="color:rgb(71, 71, 71)">“Zero-Shot Neural Architecture Search: Challenges, Solutions, and Opportunities”</b><br> IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024. <a href="https://arxiv.org/abs/2307.01998">[Paper]</a> <a href="https://github.com/SLDGroup/survey-zero-shot-nas">[Code]</a></li>
<li> E. Oikonomou, G. Holste*, N. Yuan, A. Coppi, R. McNamara, N. Haynes, A. Vora, E. Velazquez, F. Li, V. Menon, S. Kapadia, T. Gill, G. Nadkarni, H. Krumholz, Z. Wang, D. Ouyang, and R. Khera<br> <b style="color:rgb(71, 71, 71)">“A Multimodality Video-Based AI Biomarker for Aortic Stenosis Development and Progression”</b><br> JAMA Cardiology, 2024. <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10557799/">[Paper]</a> [Code]</li>
<li> W. Chen*, X. Gong*, J. Wu*, Y. Wei, H. Shi, Z. Yan, Y. Yang, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Understanding and Accelerating Neural Architecture Search with Training-Free and Theory-Grounded Metrics”</b><br> IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023. <a href="https://arxiv.org/abs/2108.11939">[Paper]</a> <a href="https://github.com/VITA-Group/TEGNAS">[Code]</a></li>
<li> Z. Jiang*, G. Zheng, Y. Cheng, A. Awadallah, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“CR-MoE: Consistent Routed Mixture-of-Experts for Scaling Contrastive Learning”</b><br>Transactions on Machine Learning Research (TMLR), 2023. <a href="https://openreview.net/forum?id=qKIvn9xL1R">[Paper]</a> <a href="https://github.com/VITA-Group/CRMoE">[Code]</a></li>
<li> M. Lin, T. Li, Y. Yang, G. Holste*, Y. Ding, S. Tassel, K. Kovacs, G. Shih, Z. Wang, Z. Lu, F. Wang, and Y. Peng<br> <b style="color:rgb(71, 71, 71)">“Improving Model Fairness in Image-based Computer-Aided Diagnosis”</b><br> Nature Communications, 2023. <a href="https://www.nature.com/articles/s41467-023-41974-4">[Paper]</a> <a href="https://zenodo.org/record/8226443">[Code]</a></li>
<li> Q. Wu*, X. Chen*, Y. Jiang*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Chasing Better Deep Image Priors between Over- and Under-Parameterization”</b><br>Transactions on Machine Learning Research (TMLR), 2023. <a href="https://openreview.net/forum?id=EwJJks2cSa">[Paper]</a> <a href="https://github.com/VITA-Group/Chasing-Better-DIPs">[Code]</a></li>
<li> G. Holste*, E. Oikonomou, B. Mortazavi, A. Coppi, K. Faridi, E. Miller, J. Forrest, R. McNamara, L. Ohno-Machado, N. Yuan, A. Gupta, D. Ouyang, H. Krumholz, Z. Wang, and R. Khera<br> <b style="color:rgb(71, 71, 71)">“Severe Aortic Stenosis Detection by Deep Learning Applied to Echocardiography”</b><br>European Heart Journal (EHJ), 2023. <a href="https://academic.oup.com/eurheartj/advance-article/doi/10.1093/eurheartj/ehad456/7248551">[Paper]</a> <a href="https://github.com/CarDS-Yale/echo-severe-AS">[Code]</a></li>
<li> W. Zheng*, H. Yang, J. Cai, P. Wang*, X. Jiang, S. Du, Y. Wang, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Integrating the Traffic Science with Representation Learning for City-Wide Network Congestion Prediction”</b><br>Elsevier Information Fusion, 2023. <a href="https://www.sciencedirect.com/science/article/abs/pii/S1566253523001537">[Paper]</a> <a href="https://github.com/VITA-Group/TinT">[Code]</a></li>
<li> W. Zheng*, E. Huang, N. Rao, S. Katariya, Z. Wang, and K. Subbian<br> <b style="color:rgb(71, 71, 71)">“You Only Transfer What You Share: Intersection-Induced Graph Transfer Learning for Link Prediction”</b><br>Transactions on Machine Learning Research (TMLR), 2023. <a href="https://arxiv.org/abs/2302.14189">[Paper]</a> <a href="https://github.com/amazon-science/gnn-tail-generalization">[Code]</a></li>
<li> X. Yang, Z. Wang, S. Hu, C. Kim, S. Yu, M. Pajic, R. Manohar, Y. Chen, and H. Li<br> <b style="color:rgb(71, 71, 71)">“Neuro-Symbolic Computing: Advancements and Challenges in Hardware-Software Co-Design”</b><br>IEEE Transactions on Circuits and Systems II (TCAS-II), 2023. <a href="https://ieeexplore.ieee.org/document/10327770">[Paper]</a> <a href="">[Code]</a></li>
<li> Z. Li*, T. Chen*, L. Li, B. Li, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Can Pruning Improve Certified Robustness of Neural Networks?”</b><br>Transactions on Machine Learning Research (TMLR), 2023. <a href="https://openreview.net/forum?id=6IFi2soduD">[Paper]</a> <a href="https://github.com/VITA-Group/CertifiedPruning">[Code]</a></li>
<li>H. Wang*, J. Hong, J. Zhou, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts”</b><br>Transactions on Machine Learning Research (TMLR), 2023. <a href="https://openreview.net/forum?id=11pGlecTz2">[Paper]</a> <a href="">[Code]</a></li>
<li>P. Narayanan, X. Hu, Z. Wu*, M. Thielke, J. Rogers, A. Harrison, J. D’Agostino, J. Brown, L. Quang, J. Uplinger, H. Kwon, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“A Multi-Purpose Real Haze Benchmark with Quantifiable Haze Levels and Ground Truth”</b><br>IEEE Transactions on Image Processing (TIP), 2023. <a href="https://arxiv.org/abs/2206.06427">[Paper]</a> <a href="https://a2i2-archangel.vision/">[Code]</a></li>
<li>T. Chen*, Z. Zhang*, J. Wu, R. Huang, S. Liu, S. Chang, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Can You Win Everything with A Lottery Ticket?”</b><br> Transactions on Machine Learning Research (TMLR), 2022. <a href="https://openreview.net/forum?id=JL6MU9XFzW">[Paper]</a> <a href="https://github.com/VITA-Group/LTH-Pass">[Code]</a></li>
<li>Y. Han*, G. Holste*, Y. Ding, A. Tewfik, Y. Peng, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Radiomics-Guided Global-Local Transformer for Weakly Supervised Pathology Localization in Chest X-Rays”</b><br> IEEE Transactions on Medical Imaging (TMI), 2022. <a href="https://ieeexplore.ieee.org/document/9930800">[Paper]</a> <a href="https://github.com/VITA-Group/CheXT">[Code]</a></li>
<li>T. Chen*, Y. Cheng, Z. Gan, J. Wang, L. Wang, J. Liu, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Adversarial Feature Augmentation and Normalization for Visual Recognition”</b><br> Transactions on Machine Learning Research (TMLR), 2022. <a href="https://openreview.net/forum?id=2VEUIq9Yff">[Paper]</a> <a href="https://github.com/VITA-Group/CV_A-FAN">[Code]</a></li>
<li>S. Mohseni, H. Wang*, Z. Yu, C. Xiao, Z. Wang, J. Yadawa<br> <b style="color:rgb(71, 71, 71)">“Taxonomy of Machine Learning Safety: A Survey and Primer”</b><br> ACM Computing Surveys (CSUR), 2022. <a href="https://dl.acm.org/doi/10.1145/3551385">[Paper]</a> </li>
<li>T. Chen*, S. Liu, S. Chang, L. Amini, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Queried Unlabeled Data Improves and Robustifies Class-Incremental Learning”</b><br> Transactions on Machine Learning Research (TMLR), 2022. (Featured Certification) <a href="https://openreview.net/forum?id=oLvlPJheCD">[Paper]</a> <a href="https://github.com/VITA-Group/CIL-QUD">[Code]</a></li>
<li> (α-β) T. Chen*, X. Chen*, W. Chen*, H. Heaton, J. Liu, Z. Wang, and W. Yin<br> <b style="color:rgb(71, 71, 71)">“Learning to Optimize: A Primer and A Benchmark”</b><br> Journal of Machine Learning Research (JMLR), 2022. <a href="https://jmlr.org/papers/v23/21-0308.html">[Paper]</a> <a href="https://github.com/VITA-Group/Open-L2O">[Code]</a></li>
<li>T. Chen*, K. Zhou, K. Duan, W. Zheng*, P. Wang*, X. Hu, and Z. Wang<br> <b style="color:rgb(71, 71, 71)"> “Bag of Tricks for Training Deeper Graph Neural Networks: A Comprehensive Benchmark Study”</b><br> IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022. <a href="https://arxiv.org/abs/2108.10521">[Paper]</a> <a href="https://github.com/VITA-Group/Deep_GCN_Benchmarking">[Code]</a></li>
<li>X. Chen*, Y. Zhao, Y. Wang, P. Xu, H. You, C. Li, Y. Fu, Y. Lin, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“SmartDeal: Re-Modeling Deep Network Weights for Efficient Inference and Training”</b><br>IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2021. <a href="https://arxiv.org/pdf/2101.01163.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/SmartDeal">[Code]</a></li>
<li>T. Hu*, F. Gama, T. Chen*, W. Zheng*, Z. Wang, A. Ribeiro, and B. Sadler<br> <b style="color:rgb(71, 71, 71)">“Scalable Perception-Action-Communication Loops with Convolutional and Graph Neural Networks”</b><br>IEEE Transactions on Signal and Information Processing over Networks (TSIPN), 2021. <a href="https://arxiv.org/abs/2106.13358">[Paper]</a> <a href="https://github.com/VITA-Group/VGAI">[Code]</a></li>
<li>S. Yang*, Z. Wang, J. Liu, and Z. Guo<br> <b style="color:rgb(71, 71, 71)">“Controllable Sketch-to-Image Translation for Robust Face Synthesis”</b><br> IEEE Transactions on Image Processing (TIP), 2021. <a href="https://ieeexplore.ieee.org/document/9583954">[Paper]</a> <a href="https://github.com/VITA-Group/DeepPS">[Code]</a></li>
<li>J. Yan, Y. Zhong, Y. Fang, Z. Wang, and K. Ma<br> <b style="color:rgb(71, 71, 71)">“Exposing Semantic Segmentation Failures via Maximum Discrepancy Competition”</b><br> International Journal of Computer Vision (IJCV), 2021. <a href="https://arxiv.org/abs/2103.00259">[Paper]</a> <a href="https://github.com/QTJiebin/MAD_Segmentation">[Code]</a></li>
<li>S. Yang*, Z. Wang, and J. Liu<br> <b style="color:rgb(71, 71, 71)">“Shape-Matching GAN++: Scale Controllable Dynamic Artistic Text Style Transfer”</b><br> IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021. <a href="https://ieeexplore.ieee.org/document/9339900">[Paper]</a></li>
<li>Y. Jiang*, X. Gong*, D. Liu, Y. Cheng, C. Fang, X. Shen, J. Yang, P. Zhou, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“EnlightenGAN: Deep Light Enhancement without Paired Supervision”</b><br> IEEE Transactions on Image Processing (TIP), 2021. (IEEE SPS Young Author Best Paper Award, 2024) <a href="https://arxiv.org/abs/1906.06972">[Paper]</a> <a href="https://github.com/VITA-Group/EnlightenGAN">[Code]</a></li>
<li>Z. Wu*, H. Wang*, Z. Wang, H. Jin, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Privacy-Preserving Deep Action Recognition: An Adversarial Learning Framework and A New Dataset”</b><br> IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020. <a href="https://ieeexplore.ieee.org/abstract/document/9207852">[Paper]</a> <a href="https://github.com/VITA-Group/PA-HMDB51">[Code]</a></li>
<li> M. Karimi, D. Wu, Z. Wang, and Y. Shen<br> <b style="color:rgb(71, 71, 71)">“Explainable Deep Relational Networks for Predicting Compound-Protein Affinities and Contacts”</b><br> Journal of Chemical Information and Modeling (JCIM), 2020. <a href="https://pubs.acs.org/doi/10.1021/acs.jcim.0c00866">[Paper]</a> [Code] </li>
<li>S. Li, W. Ren, F. Wang, I. Araujo*, E. K. Tokuda*, R. Hirata, R. Cesar, Z. Wang, and X. Cao<br> <b style="color:rgb(71, 71, 71)">“A Comprehensive Benchmark Analysis of Single Image Deraining: Current Challenges and Future Perspectives”</b><br> International Journal of Computer Vision (IJCV), 2020. <a href="https://link.springer.com/article/10.1007/s11263-020-01416-w">[Paper]</a> </li>
<li>Y. Yuan*, W. Yang, W. Ren, J. Liu, W. J. Scheirer, Z. Wang, et al.<br> <b style="color:rgb(71, 71, 71)">“Advancing Image Understanding in Poor Visibility Environments: A Collective Benchmark Study”</b><br> IEEE Transactions on Image Processing (TIP), 2020. <a href="https://ieeexplore.ieee.org/abstract/document/9049390">[Paper]</a></li>
<li>M. Karimi, D. Wu, Z. Wang and Y. Shen<br> <b style="color:rgb(71, 71, 71)">“DeepAffinity: Interpretable Deep Learning of Compound-Protein Affinity through Unified Recurrent and Convolutional Neural Networks”</b><br> Oxford Bioinformatics, 2019. <a href="https://academic.oup.com/bioinformatics/article/35/18/3329/5320555">[Paper]</a> <a href="https://github.com/Shen-Lab/DeepAffinity">[Code]</a></li>
<li>R. G. VidalMata, ... Y. Yuan*, J. Wu*, Z. Wang, ... et al.<br> <b style="color:rgb(71, 71, 71)">“Bridging the Gap Between Computational Photography and Visual Recognition”</b><br> IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020. <a href="https://ieeexplore.ieee.org/abstract/document/9097964">[Paper]</a> <a href="https://github.com/VITA-Group/TAMU-PKU-UG2">[Code]</a></li>
<li>B. Li*, W. Ren, D. Fu, D. Tao, D. Feng, W. Zeng, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Benchmarking Single Image Dehazing and Beyond”</b><br> IEEE Transactions on Image Processing (TIP), vol. 28, no. 1, pp. 492-505, 2019. <a href="https://ieeexplore.ieee.org/abstract/document/8451944">[Paper]</a> <a href="https://sites.google.com/site/boyilics/website-builder/reside">[Project Page]</a></li>
</ul>
</div>
</div>
<div class="section-title" style="margin-bottom: 30px">
<h2>Conference Paper</h2>
</div>
<div class="trend-entry d-flex">
<div class="trend-contents">
<ul>
<li>Z. Li*, T. Chen*, L. Li, B. Li, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">"Sparse Transfer Learning Accelerates and Enhances Certified Robustness”</b><br>AAAI Conference on Artificial Intelligence (AAAI), 2025. <a href="">[Paper]</a> [Code] </li>
<li>Z. Fan*, K. Wang*, K. Wen, Z. Zhu*, D. Xu*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">"LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS"</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2024. (Spotlight) <a href="https://arxiv.org/abs/2311.17245">[Paper]</a> <a href="https://lightgaussian.github.io/">[Code]</a></li>
<li>H. Hu*, Z. Fan*, T. Wu, Y. Xi*, S. Lee*, G. Pavlakos, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">"Expressive Gaussian Human Avatars from Monocular RGB Video"</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2024. <a href="https://arxiv.org/abs/2407.03204">[Paper]</a> <a href="https://evahuman.github.io/">[Code]</a></li>
<li>R. Cai*, Y. Ro, G. Kim, P. Wang*, B. Bejnordi, A. Akella, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">"Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design"</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2024. <a href="https://utns.cs.utexas.edu/assets/papers/neurips24-readme.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/READ-ME">[Code]</a></li>
<li>Z. Zhang*, R. Chen*, S. Liu*, Z. Yao, O. Ruwase, B. Chen, X. Wu, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">"Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding"</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2024. <a href="https://arxiv.org/abs/2403.04797">[Paper]</a> <a href="https://github.com/VITA-Group/Ms-PoE">[Code]</a></li>
<li>Z. Fan*, J. Zhang, W. Cong*, P. Wang*, R. Li, K. Wen, S. Zhou, A. Kadambi, Z. Wang, D. Xu, B. Ivanovic, M. Pavone, and Y. Wang<br> <b style="color:rgb(71, 71, 71)">“Large Spatial Model: End-to-end Unposed Images to Semantic 3D”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2024. <a href="https://arxiv.org/abs/2410.18956">[Paper]</a> <a href="https://largespatialmodel.github.io/">[Code]</a></li>
<li>H. Yang*, B. Kailkhura, Z. Wang, and Y. Liang<br> <b style="color:rgb(71, 71, 71)">“Training Dynamics of Transformers to Recognize Word Co-occurrence via Gradient Flow Analysis”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2024. <a href="https://arxiv.org/abs/2410.09605">[Paper]</a> <a href="">[Code]</a></li>
<li>H. Liang, Y. Yin, D. Xu*, H. Liang*, Z. Wang, K. Plataniotis, Y. Zhao, and Y. Wei<br> <b style="color:rgb(71, 71, 71)">“Diffusion4D: Fast Spatial-temporal Consistent 4D Generation via Video Diffusion Models”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2024. <a href="https://arxiv.org/abs/2405.16645">[Paper]</a> <a href="https://github.com/VITA-Group/Diffusion4D">[Code]</a></li>
<li>H. Lu, Y. Zhou, S. Liu*, Z. Wang, M. Mahoney, and Y. Yang<br> <b style="color:rgb(71, 71, 71)">“AlphaPruning: Using Heavy-Tailed Self Regularization Theory for Improved Layer-wise Pruning of Large Language Models”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2024. <a href="https://arxiv.org/abs/2410.10912">[Paper]</a> <a href="https://github.com/haiquanlu/AlphaPruning">[Code]</a></li>
<li>X. Zhao, G. Sun, R. Cai*, Y. Zhou, P. Li, P. Wang*, B. Tan, Y. He, L. Chen, Y. Liang, B. Chen, B. Yuan, H. Wang, A. Li, Z. Wang, and T. Chen*<br> <b style="color:rgb(71, 71, 71)">“Model-GLUE: Democratized LLM Scaling for A Large Model Zoo in the Wild”</b><br>Advances in Neural Information Processing Systems, Track on Datasets and Benchmarks (NeurIPS D &amp; B), 2024. <a href="https://arxiv.org/pdf/2410.05357">[Paper]</a> <a href="https://github.com/Model-GLUE/Model-GLUE">[Code]</a> </li>
<li>Z. Zhu*, Z. Fan*, Y. Jiang*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“FSGS: Real-Time Few-shot View Synthesis using Gaussian Splatting”</b><br>European Conference on Computer Vision (ECCV), 2024. <a href="https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/05583.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/FSGS">[Code] </a> </li>
<li>S. Zhou, Z. Fan*, D. Xu*, H. Chang, P. Chari, T. Bharadwaj, S. You, Z. Wang, and A. Kadambi<br> <b style="color:rgb(71, 71, 71)">“DreamScene360: Unconstrained Text-to-3D Scene Generation with Panoramic Gaussian Splatting”</b><br>European Conference on Computer Vision (ECCV), 2024. <a href="https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/00996.pdf">[Paper]</a> <a href="https://dreamscene360.github.io/">[Code] </a> </li>
<li>R. Li, Z. Fan*, B. Wang, P. Wang*, Z. Wang, and X. Wu<br> <b style="color:rgb(71, 71, 71)">“VersatileGaussian: Real-time Neural Rendering for Versatile Tasks using Gaussian Splatting”</b><br>European Conference on Computer Vision (ECCV), 2024. <a href="https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/03032.pdf">[Paper]</a> <a href="https://versatilegaussian.github.io/">[Code] </a> </li>
<li>Q. Li, J. Hong*, C. Xie, J. Tan, R. Xin, J. Hou, X. Yin, Z. Wang, D. Hendrycks, Z. Wang, B. Li, B. He, and D. Song<br> <b style="color:rgb(71, 71, 71)">“LLM-PBE: Assessing Data Privacy in Large Language Models”</b><br>International Conference on Very Large Data Bases (VLDB), 2024. (Best Paper Finalist) <a href="https://www.vldb.org/pvldb/vol17/p3201-li.pdf">[Paper]</a> <a href="https://llm-pbe.github.io/home">[Code] </a> </li>
<li>L. Sun*, N. Bhatt*, J. Liu*, Z. Fan*, Z. Wang, T. Humphreys, and U. Topcu<br> <b style="color:rgb(71, 71, 71)">“MM3DGS SLAM: Multi-modal 3D Gaussian Splatting for SLAM Using Vision, Depth, and Inertial Measurements”</b><br>IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2024. <a href="https://arxiv.org/abs/2404.00923">[Paper]</a> <a href="https://github.com/VITA-Group/MM3DGS-SLAM">[Code] </a> </li>
<li>R. Cai*, S. Muralidharan, G. Heinrich, H. Yin, Z. Wang, J. Kautz, and P. Molchanov<br> <b style="color:rgb(71, 71, 71)">“Flextron: Many-in-One Flexible Large Language Model”</b><br>International Conference on Machine Learning (ICML), 2024. (Oral) <a href="https://openreview.net/pdf?id=9vKRhnflAs">[Paper]</a> <a href="">[Code] </a> </li>
<li>R. Cai*, Y. Tian, Z. Wang, and B. Chen<br> <b style="color:rgb(71, 71, 71)">“LoCoCo: Dropping In Convolutions for Long Context Compression”</b><br>International Conference on Machine Learning (ICML), 2024. <a href="https://arxiv.org/abs/2406.05317">[Paper]</a> <a href="https://github.com/VITA-Group/LoCoCo">[Code] </a> </li>
<li>L. Yin*, A. Jaiswal*, S. Liu*, S. Kundu, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Junk DNA Hypothesis: Pruning Small Pre-Trained Weights Irreversibly and Monotonically Impairs Difficult Downstream Tasks in LLMs”</b><br>International Conference on Machine Learning (ICML), 2024. <a href="https://arxiv.org/abs/2310.02277">[Paper]</a> <a href="https://github.com/VITA-Group/Junk_DNA_Hypothesis">[Code] </a> </li>
<li>L. Yin*, Y. Wu, Z. Zhang*, C. Hsieh, Y. Wang, Y. Jia, G. Li, A. Jaiswal*, M. Pechenizkiy, Y. Liang, M. Bendersky, Z. Wang, and S. Liu*<br> <b style="color:rgb(71, 71, 71)">“Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity”</b><br>International Conference on Machine Learning (ICML), 2024. <a href="https://arxiv.org/abs/2310.05175">[Paper]</a> <a href="https://github.com/luuyin/OWL">[Code] </a> </li>
<li>R. Chen*, T. Zhao, A. Jaiswal*, N. Shah, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“LLaGA: Large Language and Graph Assistant”</b><br>International Conference on Machine Learning (ICML), 2024. <a href="https://arxiv.org/abs/2402.08170">[Paper]</a> <a href="https://github.com/VITA-Group/LLaGA">[Code] </a> </li>
<li>J. Hong*, J. Duan, C. Zhang, Z. Li*, C. Xie, K. Lieberman, J. Diffenderfer, B. Bartoldson, A. Jaiswal*, K. Xu, B. Kailkhura, D. Hendrycks, D. Song, Z. Wang, and B. Li<br> <b style="color:rgb(71, 71, 71)">“Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression”</b><br>International Conference on Machine Learning (ICML), 2024. <a href="https://arxiv.org/abs/2403.15447">[Paper]</a> <a href="https://decoding-comp-trust.github.io/">[Code] </a> </li>
<li>Z. Li*, S. Liu*, T. Chen*, A. Jaiswal*, Z. Zhang*, D. Wang, R. Krishnamoorthi, S. Chang, Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Sparse Cocktail: Every Sparse Pattern Every Sparse Ratio All At Once”</b><br>International Conference on Machine Learning (ICML), 2024. <a href="https://openreview.net/pdf/e34b99064ed4210ff231d4616590494ef817370b.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/SparseCocktail">[Code] </a> </li>
<li>J. Zhao, Z. Zhang*, B. Chen, Z. Wang, A. Anandkumar, and Y. Tian<br> <b style="color:rgb(71, 71, 71)">“GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection”</b><br>International Conference on Machine Learning (ICML), 2024. (Oral) <a href="https://arxiv.org/abs/2403.03507">[Paper]</a> <a href="https://github.com/jiaweizzhao/GaLore">[Code] </a> </li>
<li>H. Dong, X. Yang, Z. Zhang*, Z. Wang, Y. Chi, and B. Chen<br> <b style="color:rgb(71, 71, 71)">“Get More with LESS: Synthesizing Recurrence with KV Cache Compression for Efficient LLM Inference”</b><br>International Conference on Machine Learning (ICML), 2024. <a href="https://arxiv.org/abs/2402.09398">[Paper]</a> <a href="https://github.com/hdong920/LESS">[Code] </a> </li>
<li>Y. Zhang, P. Li, J. Hong*, J. Li, Y. Zhang, W. Zheng*, P. Chen, J. Lee, W. Yin, M. Hong, Z. Wang, S. Liu, and T. Chen*<br> <b style="color:rgb(71, 71, 71)">“Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark”</b><br>International Conference on Machine Learning (ICML), 2024. <a href="https://arxiv.org/abs/2402.11592">[Paper]</a> <a href="https://github.com/ZO-Bench/ZO-LLM">[Code] </a> </li>
<li>P. Wang*, D. Xu*, Z. Fan*, D. Wang, S. Mohan, F. Iandola, R. Ranjan, Y. Li, Q. Liu, Z. Wang, and V. Chandra<br> <b style="color:rgb(71, 71, 71)">“Taming Mode Collapse in Score Distillation for Text-to-3D Generation”</b><br>IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. <a href="https://arxiv.org/abs/2401.00909">[Paper]</a> <a href="https://vita-group.github.io/3D-Mode-Collapse/">[Code]</a> </li>
<li>M. Varma, P. Wang*, Z. Fan*, Z. Wang, H. Su, and R. Ramamoorthi<br> <b style="color:rgb(71, 71, 71)">"Lift3D: Zero-Shot Lifting of Any 2D Vision Model to 3D”</b><br>IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. <a href="https://arxiv.org/abs/2403.18922">[Paper]</a> <a href="https://mukundvarmat.github.io/Lift3D/">[Code] </a> </li>
<li>S. Zhou, H. Chang, S. Jiang, Z. Fan*, Z. Zhu*, D. Xu*, P. Chari, S. You, Z. Wang, and A. Kadambi<br> <b style="color:rgb(71, 71, 71)">"Feature 3DGS: Supercharging 3D Gaussian Splatting to Enable Distilled Feature Fields”</b><br>IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. (Highlight) <a href="https://arxiv.org/abs/2312.03203">[Paper]</a> <a href="https://feature-3dgs.github.io/">[Code] </a> </li>
<li>V. Goel, E. Peruzzo, Y. Jiang*, D. Xu*, X. Xu, N. Sebe, T. Darrell, Z. Wang, H. Shi<br> <b style="color:rgb(71, 71, 71)">"PAIR Diffusion: A Comprehensive Multimodal Object-Level Image Editor”</b><br>IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. <a href="https://arxiv.org/abs/2303.17546">[Paper]</a> <a href="https://vidit98.github.io/publication/conference-paper/pair_diff.html">[Code] </a> </li>
<li>M. Ohanyan, H. Manukyan, Z. Wang, S. Navasardyan, and H. Shi<br> <b style="color:rgb(71, 71, 71)">"Zero-Painter: Training-Free Layout Control for Text-to-Image Synthesis”</b><br>IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. <a href="https://openaccess.thecvf.com/content/CVPR2024/papers/Ohanyan_Zero-Painter_Training-Free_Layout_Control_for_Text-to-Image_Synthesis_CVPR_2024_paper.pdf">[Paper]</a> <a href="https://github.com/Picsart-AI-Research/Zero-Painter">[Code] </a> </li>
<li>X. Xu, J. Guo, Z. Wang, G. Huang, I. Essa, and H. Shi<br> <b style="color:rgb(71, 71, 71)">"Prompt-Free Diffusion: Taking 'Text' out of Text-to-Image Diffusion Models”</b><br>IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. <a href="https://arxiv.org/abs/2305.16223">[Paper]</a> <a href="https://github.com/SHI-Labs/Prompt-Free-Diffusion">[Code] </a> </li>
<li>M. D'Incà, E. Peruzzo, M. Mancini, D. Xu*, V. Goel, X. Xu, Z. Wang, H. Shi, and N. Sebe<br> <b style="color:rgb(71, 71, 71)">"OpenBias: Open-set Bias Detection in Generative Models”</b><br>IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. (Highlight) <a href="https://arxiv.org/abs/2404.07990">[Paper]</a> <a href="https://github.com/Picsart-AI-Research/OpenBias">[Code] </a> </li>
<li>Z. Zhang*, S. Liu*, R. Chen*, B. Kailkhura, B. Chen, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">"Q-Hitter: A Better Token Oracle for Efficient LLM Inference via Sparse-Quantized KV Cache”</b><br>Conference on Machine Learning and Systems (MLSys), 2024. <a href="https://proceedings.mlsys.org/paper_files/paper/2024/file/bbb7506579431a85861a05fff048d3e1-Paper-Conference.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/Q-Hitter">[Code] </a> </li>
<li>Y. Yang, N. Bhatt*, T. Ingebrand, W. Ward, S. Carr, Z. Wang, and U. Topcu<br> <b style="color:rgb(71, 71, 71)">"Fine-Tuning Language Models Using Formal Methods Feedback”</b><br>Conference on Machine Learning and Systems (MLSys), 2024. <a href="https://proceedings.mlsys.org/paper_files/paper/2024/file/b0131b6ee02a00b03fc3320176fec8f5-Paper-Conference.pdf">[Paper]</a> <a href="">[Code] </a> </li>
<li>A. Jaiswal*, Z. Gan, X. Du, B. Zhang, Z. Wang, and Y. Yang<br> <b style="color:rgb(71, 71, 71)">"Compressing LLMs: The Truth is Rarely Pure and Never Simple”</b><br>International Conference on Learning Representations (ICLR), 2024. <a href="https://openreview.net/forum?id=B9klVS7Ddk">[Paper]</a> <a href="https://github.com/VITA-Group/llm-kick">[Code] </a> </li>
<li>J. Hong*, J. Wang, C. Zhang, Z. Li*, B. Li, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“DP-OPT: Make Large Language Model Your Differentially-Private Prompt Engineer”</b><br>International Conference on Learning Representations (ICLR), 2024. (Spotlight) <a href="https://openreview.net/forum?id=Ifz3IgsEPX">[Paper]</a> <a href="https://github.com/VITA-Group/DP-OPT">[Code]</a> </li>
<li>Y. Jiang*, H. Tang, J. Chang, L. Song, Z. Wang, and L. Cao<br> <b style="color:rgb(71, 71, 71)">"Efficient-3DiM: Learning a Generalizable Single-image Novel-view Synthesizer in One Day”</b><br>International Conference on Learning Representations (ICLR), 2024. <a href="https://openreview.net/forum?id=3eFMnZ3N4J">[Paper]</a> <a href="">[Code] </a> </li>
<li>W. Chen*, J. Wu*, Z. Wang, and B. Hanin<br> <b style="color:rgb(71, 71, 71)">"Principled Architecture-aware Scaling of Hyperparameters”</b><br>International Conference on Learning Representations (ICLR), 2024. <a href="https://openreview.net/forum?id=HZndRcfyNI">[Paper]</a> <a href="https://github.com/VITA-Group/principled_scaling_lr_init">[Code] </a> </li>
<li>P. Wang*, S. Yang, S. Li, Z. Wang, and P. Li<br> <b style="color:rgb(71, 71, 71)">"Polynomial Width is Sufficient for Set Representation with High-dimensional Features”</b><br>International Conference on Learning Representations (ICLR), 2024. <a href="https://openreview.net/forum?id=34STseLBrQ">[Paper]</a> <a href="">[Code] </a> </li>
<li>X. Chen*, Y. Yang, Z. Wang, and B. Mirzasoleiman<br> <b style="color:rgb(71, 71, 71)">"Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality”</b><br>International Conference on Learning Representations (ICLR), 2024. <a href="https://openreview.net/forum?id=1NHgmKqOzZ">[Paper]</a> <a href="https://github.com/VITA-Group/ProgressiveDD">[Code] </a> </li>
<li>Y. You*, R. Zhou, J. Park, H. Xu, C. Tian, Z. Wang, and Y. Shen<br> <b style="color:rgb(71, 71, 71)">"Latent 3D Graph Diffusion”</b><br>International Conference on Learning Representations (ICLR), 2024. <a href="https://openreview.net/forum?id=cXbnGtO0NZ">[Paper]</a> <a href="https://github.com/Shen-Lab/LDM-3DG">[Code] </a> </li>
<li>A. Isajanyan, A. Shatveryan, D. Kocharian, Z. Wang, and H. Shi<br> <b style="color:rgb(71, 71, 71)">"Social Reward: Evaluating and Enhancing Generative AI through Million-User Feedback from an Online Creative Community”</b><br>International Conference on Learning Representations (ICLR), 2024. (Spotlight) <a href="https://openreview.net/forum?id=tjn2YZSHUv">[Paper]</a> <a href="https://github.com/Picsart-AI-Research/Social-Reward">[Code] </a> </li>
<li>S. Yu, J. Hong*, H. Zhang, H. Wang*, Z. Wang, and J. Zhou<br> <b style="color:rgb(71, 71, 71)">"Safe and Robust Watermark Injection with a Single OoD Image”</b><br>International Conference on Learning Representations (ICLR), 2024. <a href="https://openreview.net/forum?id=PCm1oT8pZI">[Paper]</a> <a href="https://github.com/illidanlab/Single_oodwatermark">[Code] </a> </li>
<li>D. Sow, S. Lin, Z. Wang, and Y. Liang<br> <b style="color:rgb(71, 71, 71)">"Doubly Robust Instance-Reweighted Adversarial Training”</b><br>International Conference on Learning Representations (ICLR), 2024. <a href="https://openreview.net/forum?id=OF5x1dzWSS">[Paper]</a> <a href="">[Code] </a> </li>
<li>A. Jaiswal*, S. Liu*, T. Chen*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">"The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2023. <a href="https://arxiv.org/abs/2306.03805">[Paper]</a> <a href="https://github.com/VITA-Group/essential_sparsity">[Code] </a> </li>
<li>Z. Zhang*, Y. Sheng, T. Zhou, T. Chen*, L. Zheng, R. Cai*, Z. Song, Y. Tian, C. Ré, C. Barrett, Z. Wang, and B. Chen<br> <b style="color:rgb(71, 71, 71)">"H<sub>2</sub>O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2023. <a href="https://arxiv.org/abs//2306.14048">[Paper]</a> <a href="https://github.com/FMInference/H2O">[Code] </a> </li>
<li>D. Hoang*, S. Kundu, S. Liu*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">"Don’t Just Prune by Magnitude! Your Mask Topology is A Secret Weapon”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2023. <a href="https://openreview.net/forum?id=DIBcdjWV7k">[Paper]</a> <a href="https://github.com/VITA-Group/FullSpectrum-PAI">[Code] </a> </li>
<li>H. Wang*, Z. Jiang*, Y. You*, Y. Han*, G. Liu, J. Srinivasa, R. Kompella, Z. Wang<br> <b style="color:rgb(71, 71, 71)">"Graph Mixture of Experts: Learning on Large-Scale Graphs with Explicit Diversity Modeling”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2023. <a href="https://arxiv.org/abs/2304.02806">[Paper]</a> <a href="https://github.com/VITA-Group/Graph-Mixture-of-Experts">[Code] </a> </li>
<li>Z. Wang, Y. Jiang*, Y. Lu, Y. Shen, P. He, W. Chen, Z. Wang, M. Zhou<br> <b style="color:rgb(71, 71, 71)">"In-Context Learning Unlocked for Diffusion Models”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2023. (Spotlight) <a href="https://arxiv.org/abs/2305.01115">[Paper]</a> <a href="https://github.com/Zhendong-Wang/Prompt-Diffusion">[Code] </a> </li>
<li>Z. Wang, Y. Jiang*, H. Zheng, P. Wang*, P. He, Z. Wang, W. Chen, M. Zhou<br> <b style="color:rgb(71, 71, 71)">"Patch Diffusion: Faster and More Data-Efficient Training of Diffusion Models”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2023. <a href="https://arxiv.org/abs/2304.12526">[Paper]</a> <a href="https://github.com/Zhendong-Wang/Patch-Diffusion">[Code] </a> </li>
<li>L. Yin*, G. Li, M. Fang, L. Shen, T. Huang, Z. Wang, V. Menkovski, X. Ma, M. Pechenizkiy, and S. Liu*<br> <b style="color:rgb(71, 71, 71)">"Dynamic Sparsity Is Channel-Level Sparsity Learner”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2023. <a href="https://arxiv.org/abs/2305.19454">[Paper]</a> <a href="https://github.com/luuyin/chase">[Code] </a> </li>
<li>W. Cong*, H. Liang*, P. Wang*, Z. Fan*, T. Chen*, M. Varma*, Y. Wang*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">"Enhancing NeRF akin to Enhancing LLMs: Generalizable NeRF Transformer with Mixture-of-View-Experts”</b><br> IEEE International Conference on Computer Vision (ICCV), 2023. <a href="https://openaccess.thecvf.com/content/ICCV2023/papers/Cong_Enhancing_NeRF_akin_to_Enhancing_LLMs_Generalizable_NeRF_Transformer_with_ICCV_2023_paper.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/GNT-MOVE">[Code] </a> </li>
<li>A. Jaiswal*, X. Zhang, S. Chan, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">"Physics-Driven Turbulence Image Restoration with Stochastic Refinement”</b><br> IEEE International Conference on Computer Vision (ICCV), 2023. <a href="https://openaccess.thecvf.com/content/ICCV2023/papers/Jaiswal_Physics-Driven_Turbulence_Image_Restoration_with_Stochastic_Refinement_ICCV_2023_paper.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/PiRN">[Code] </a> </li>
<li>Y. Han*, P. Wang*, S. Kundu, Y. Ding, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">"Vision HGNN: An Image is More than a Graph of Nodes”</b><br> IEEE International Conference on Computer Vision (ICCV), 2023. (Oral) <a href="https://openaccess.thecvf.com/content/ICCV2023/papers/Han_Vision_HGNN_An_Image_is_More_than_a_Graph_of_ICCV_2023_paper.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/ViHGNN">[Code] </a> </li>
<li>T. Chen*, X. Chen*, X. Du, A. Rashwan, F. Yang, H. Chen, Z. Wang, and Y. Li<br> <b style="color:rgb(71, 71, 71)">"AdaMV-MoE: Adaptive Multi-Task Vision Mixture-of-Experts”</b><br> IEEE International Conference on Computer Vision (ICCV), 2023. <a href="https://openaccess.thecvf.com/content/ICCV2023/papers/Chen_AdaMV-MoE_Adaptive_Multi-Task_Vision_Mixture-of-Experts_ICCV_2023_paper.pdf">[Paper]</a> <a href="https://github.com/google-research/google-research/tree/master/moe_mtl">[Code] </a> </li>
<li>C. Li, B. Feng, Z. Fan*, P. Pan, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">"StegaNeRF: Embedding Invisible Information within Neural Radiance Fields”</b><br> IEEE International Conference on Computer Vision (ICCV), 2023. <a href="https://openaccess.thecvf.com/content/ICCV2023/papers/Li_StegaNeRF_Embedding_Invisible_Information_within_Neural_Radiance_Fields_ICCV_2023_paper.pdf">[Paper]</a> <a href="https://github.com/XGGNet/StegaNeRF">[Code] </a> </li>
<li>X. Xu, Z. Wang, G. Zhang, K. Wang, and H. Shi<br> <b style="color:rgb(71, 71, 71)">"Versatile Diffusion: Text, Images and Variations All in One Diffusion Model”</b><br> IEEE International Conference on Computer Vision (ICCV), 2023. <a href="https://openaccess.thecvf.com/content/ICCV2023/papers/Xu_Versatile_Diffusion_Text_Images_and_Variations_All_in_One_Diffusion_ICCV_2023_paper.pdf">[Paper]</a> <a href="https://github.com/SHI-Labs/Versatile-Diffusion">[Code] </a> </li>
<li>Y. Zhang, R. Cai*, T. Chen*, G. Zhang, H. Zhang, P. Chen, S. Chang, Z. Wang, and S. Liu<br> <b style="color:rgb(71, 71, 71)">"Robust Mixture-of-Expert Training for Convolutional Neural Networks”</b><br> IEEE International Conference on Computer Vision (ICCV), 2023. (Oral) <a href="https://openaccess.thecvf.com/content/ICCV2023/html/Zhang_Robust_Mixture-of-Expert_Training_for_Convolutional_Neural_Networks_ICCV_2023_paper.html">[Paper]</a> <a href="https://github.com/OPTML-Group/Robust-MoE-CNN">[Code] </a> </li>
<li>L. Khachatryan, A. Movsisyan, V. Tadevosyan, R. Henschel, Z. Wang, S. Navasardyan, and H. Shi<br> <b style="color:rgb(71, 71, 71)">"Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators”</b><br> IEEE International Conference on Computer Vision (ICCV), 2023. (Oral) <a href="https://openaccess.thecvf.com/content/ICCV2023/papers/Khachatryan_Text2Video-Zero_Text-to-Image_Diffusion_Models_are_Zero-Shot_Video_Generators_ICCV_2023_paper.pdf">[Paper]</a> <a href="https://github.com/Picsart-AI-Research/Text2Video-Zero">[Code] </a> </li>
<li>G. Holste*, Z. Jiang*, A. Jaiswal*, M. Hanna, S. Minkowitz, A. Legasto, J. Escalon, S. Steinberger, M. Bittman, T. Shen, Y. Ding, R. Summers, G. Shih, Y. Peng, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“How Does Pruning Impact Long-Tailed Multi-Label Medical Image Classifiers?”</b><br>Medical Image Computing and Computer Assisted Interventions (MICCAI), 2023. <a href="https://arxiv.org/abs/2308.09180">[Paper]</a> <a href="https://github.com/VITA-Group/PruneCXR">[Code] </a></li>
<li>W. Chen*, W. Huang, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“No Free Lunch in Neural Architectures? A Joint Analysis of Expressivity, Convergence, and Generalization”</b><br>International Conference on Automated Machine Learning (AutoML-Conf), 2023. <a href="https://openreview.net/pdf?id=EMys3eIDJ2">[Paper]</a> <a href="https://github.com/chenwydj/no_free_lunch_architectures"> [Code]</a></li>
<li>X. Chen*, T. Chen*, W. Chen, A. Awadallah, Z. Wang, and Y. Cheng<br> <b style="color:rgb(71, 71, 71)">“DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models”</b><br> Annual Meeting of the Association for Computational Linguistics (ACL), 2023. (Long) <a href="https://arxiv.org/abs/2111.00160">[Paper]</a> <a href="https://github.com/VITA-Group/DSEE">[Code]</a> </li>
<li>A. Jaiswal*, S. Liu*, T. Chen*, Y. Ding, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Instant Soup: Cheap Pruning Ensembles in A Single Pass Can Draw Lottery Tickets from Large Models”</b><br>International Conference on Machine Learning (ICML), 2023. (Oral) <a href="https://arxiv.org/abs/2306.10460">[Paper]</a> <a href="https://github.com/VITA-Group/instant_soup">[Code] </a> </li>
<li>A. Jaiswal*, S. Liu*, T. Chen*, Y. Ding, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Graph Ladling: Shockingly Simple Parallel GNN Training without Intermediate Communication”</b><br>International Conference on Machine Learning (ICML), 2023. <a href="https://arxiv.org/abs/2306.10466">[Paper]</a> <a href="https://github.com/VITA-Group/graph_ladling">[Code] </a> </li>
<li>R. Cai*, Z. Zhang*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Robust Weight Signatures: Gaining Robustness as Easy as Patching Weights?”</b><br>International Conference on Machine Learning (ICML), 2023. <a href="https://proceedings.mlr.press/v202/cai23f.html">[Paper]</a> <a href="https://github.com/VITA-Group/Robust_Weight_Signatures">[Code] </a> </li>
<li>P. Wang*, R. Panda, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Data Efficient Neural Scaling Law via Model Reusing”</b><br>International Conference on Machine Learning (ICML), 2023. <a href="https://proceedings.mlr.press/v202/wang23aa.html">[Paper]</a> <a href="https://github.com/VITA-Group/Data-Efficient-Scaling">[Code] </a> </li>
<li>W. Zheng*, S. Sharan*, A. Jaiswal*, K. Wang*, Y. Xi*, D. Xu*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation”</b><br>International Conference on Machine Learning (ICML), 2023. <a href="https://arxiv.org/abs/2305.00909">[Paper]</a> <a href="https://github.com/VITA-Group/ChainCoder">[Code] </a> </li>
<li>X. Chen*, N. Vadori, T. Chen*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Learning to Optimize Differential Games”</b><br>International Conference on Machine Learning (ICML), 2023. <a href="https://proceedings.mlr.press/v202/chen23ab.html">[Paper]</a> <a href="https://github.com/VITA-Group/L2PG">[Code] </a> </li>
<li>T. Huang, L. Yin*, Z. Zhang*, L. Shen, M. Fang, M. Pechenizkiy, Z. Wang, and S. Liu*<br> <b style="color:rgb(71, 71, 71)">“Are Large Kernels Better Teachers than Transformers for ConvNets?”</b><br>International Conference on Machine Learning (ICML), 2023. <a href="https://arxiv.org/abs/2305.19412">[Paper]</a> <a href="https://github.com/VITA-Group/SLaK">[Code] </a> </li>
<li>Y. Ro, Z. Wang, V. Chidambaram, and A. Akella<br> <b style="color:rgb(71, 71, 71)">“Lowering the Pre-training Tax for Gradient-based Subset Training: A Lightweight Distributed Pre-Training Toolkit”</b><br>International Conference on Machine Learning (ICML), 2023. <a href="https://proceedings.mlr.press/v202/ro23a.html">[Paper]</a> <a href="https://github.com/moonbucks/LiPT">[Code] </a> </li>
<li>J. Liu, X. Chen*, Z. Wang, W. Yin, and H. Cai<br> <b style="color:rgb(71, 71, 71)">“Towards Constituting Mathematical Structures for Learning to Optimize”</b><br>International Conference on Machine Learning (ICML), 2023. <a href="https://arxiv.org/abs/2305.18577">[Paper]</a> <a href="https://github.com/xhchrn/MS4L2O">[Code] </a> </li>
<li>D. Xu*, Y. Jiang*, P. Wang*, Z. Fan*, Y. Wang*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“NeuralLift-360: Lifting An In-the-wild 2D Photo to A 3D Object with 360° Views”</b><br>IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. (Highlight) <a href="https://arxiv.org/abs/2211.16431">[Paper]</a> <a href="https://vita-group.github.io/NeuralLift-360/">[Code]</a> </li>
<li>Y. Jiang*, P. Hedman, B. Mildenhall, D. Xu*, J. Barron, Z. Wang, and T. Xue<br> <b style="color:rgb(71, 71, 71)">"AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training”</b><br>IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. <a href="https://arxiv.org/abs/2211.09682">[Paper]</a> <a href="https://yifanjiang19.github.io/alignerf">[Code] </a> </li>
<li>X. Gong*, S. Mohan, N. Dhingra, J. Bazin, Y. Li, Z. Wang, and R. Ranjan<br> <b style="color:rgb(71, 71, 71)">"MMG-Ego4D: Multimodal Generalization in Egocentric Action Recognition”</b><br>IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. <a href="https://arxiv.org/abs/2305.07214">[Paper]</a> <a href="https://github.com/facebookresearch/MMG_Ego4D">[Code] </a></li>
<li> H. Lu*, H. Tunanyan, K. Wang, S. Navasardyan, Z. Wang, and H. Shi<br> <b style="color:rgb(71, 71, 71)">“Specialist Diffusion: Plug-and-Play Sample-Efficient Fine-Tuning of Text-to-Image Diffusion Models to Learn Any Unseen Style”</b><br>IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. <a href="https://openaccess.thecvf.com/content/CVPR2023/papers/Lu_Specialist_Diffusion_Plug-and-Play_Sample-Efficient_Fine-Tuning_of_Text-to-Image_Diffusion_Models_To_CVPR_2023_paper.pdf">[Paper]</a> <a href="https://github.com/Picsart-AI-Research/Specialist-Diffusion">[Code] </a></li>
<li>D. Hoang*, S. Liu*, R. Marculescu, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Revisiting Pruning at Initialization Through the Lens of Ramanujan Graph”</b><br>International Conference on Learning Representations (ICLR), 2023. (Oral) <a href="https://openreview.net/forum?id=uVcDssQff_">[Paper]</a> <a href="https://github.com/VITA-Group/ramanujan-on-pai">[Code] </a> </li>
<li>S. Liu*, T. Chen*, Z. Zhang*, X. Chen*, T. Huang, A. Jaiswal*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!”</b><br>International Conference on Learning Representations (ICLR), 2023. (Spotlight) <a href="https://openreview.net/forum?id=J6F3lLg4Kdp">[Paper]</a> <a href="https://github.com/VITA-Group/SMC-Bench">[Code] </a> </li>
<li>T. Chen*, Z. Zhang*, A. Jaiswal*, S. Liu*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers”</b><br>International Conference on Learning Representations (ICLR), 2023. (Spotlight) <a href="https://openreview.net/forum?id=w1hwFUb_81">[Paper]</a> <a href="https://github.com/VITA-Group/Random-MoE-as-Dropout">[Code] </a> </li>
<li>P. Wang*, R. Panda, L. Hennigen, P. Greengard, L. Karlinsky, R. Feris, D. Cox, Z. Wang, and Y. Kim<br> <b style="color:rgb(71, 71, 71)">“Learning to Grow Pretrained Models for Efficient Transformer Training”</b><br>International Conference on Learning Representations (ICLR), 2023. (Spotlight) <a href="https://openreview.net/forum?id=cDYRS5iZ16f">[Paper]</a> <a href="https://github.com/VITA-Group/LiGO">[Code] </a> </li>
<li>S. Yu, J. Hong, H. Wang*, Z. Wang, and J. Zhou<br> <b style="color:rgb(71, 71, 71)">“Turning the Curse of Heterogeneity in Federated Learning into a Blessing for Out-of-Distribution Detection”</b><br>International Conference on Learning Representations (ICLR), 2023. (Spotlight) <a href="https://openreview.net/forum?id=mMNimwRb7Gr">[Paper]</a> <a href="https://github.com/illidanlab/FOSTER">[Code] </a> </li>
<li>S. Liu*, T. Chen*, X. Chen*, X. Chen*, Q. Xiao, B. Wu, T. Karkkainen, M. Pechenizkiy, D. Mocanu, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity”</b><br>International Conference on Learning Representations (ICLR), 2023. <a href="https://openreview.net/forum?id=bXNl-myZkJl">[Paper]</a> <a href="https://github.com/VITA-Group/SLaK">[Code] </a> </li>
<li>M. Varma*, P. Wang*, X. Chen*, T. Chen*, S. Venugopalan, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Is Attention All That NeRF Needs?”</b><br>International Conference on Learning Representations (ICLR), 2023. <a href="https://openreview.net/forum?id=xE-LtsE-xx">[Paper]</a> <a href="https://github.com/VITA-Group/GNT">[Code]</a> </li>
<li>Z. Fan*, P. Wang*, Y. Jiang*, X. Gong*, D. Xu*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“NeRF-SOS: Any-View Self-supervised Object Segmentation on Complex Scenes”</b><br>International Conference on Learning Representations (ICLR), 2023. <a href="https://openreview.net/forum?id=kfOtMqYJlUU">[Paper]</a> <a href="https://github.com/VITA-Group/NeRF-SOS">[Code] </a> </li>
<li>Z. Jiang*, Y. Chen, M. Liu, D. Chen, X. Dai, L. Yuan, Z. Liu, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Layer Grafted Pre-training: Bridging Contrastive Learning and Masked Image Modeling For Label-Efficient Representations”</b><br>International Conference on Learning Representations (ICLR), 2023. <a href="https://openreview.net/forum?id=jwdqNwyREyh">[Paper]</a> <a href="https://github.com/VITA-Group/layerGraftedPretraining_ICLR23">[Code]</a> </li>
<li>T. Chen*, C. Gong, D. Diaz, X. Chen*, J. Wells, Q. Liu, Z. Wang, A. Ellington, A. Dimakis, and A. Klivans<br> <b style="color:rgb(71, 71, 71)">“HotProtein: A Novel Framework for Protein Thermostability Prediction and Editing”</b><br>International Conference on Learning Representations (ICLR), 2023. <a href="https://openreview.net/forum?id=YDJRFWBMNby">[Paper]</a> <a href="https://github.com/VITA-Group/HotProtein">[Code]</a> </li>
<li>P. Wang*, S. Yang, Y. Liu, Z. Wang, and P. Li<br> <b style="color:rgb(71, 71, 71)">“Equivariant Hypergraph Diffusion Neural Operators”</b><br>International Conference on Learning Representations (ICLR), 2023. <a href="https://openreview.net/forum?id=RiTjKoscnNd">[Paper]</a> <a href="https://github.com/Graph-COM/ED-HNN">[Code]</a></li>
<li>Y. You*, T. Chen*, Z. Wang, and Y. Shen<br> <b style="color:rgb(71, 71, 71)">“Graph Domain Adaptation via Theory-Grounded Spectral Regularization”</b><br>International Conference on Learning Representations (ICLR), 2023. <a href="https://openreview.net/forum?id=OysfLgrk8mk">[Paper]</a> <a href="https://github.com/Shen-Lab/GDA-SpecReg">[Code]</a></li>
<li>J. Yang, X. Chen*, T. Chen*, Z. Wang, and Y. Liang<br> <b style="color:rgb(71, 71, 71)">“M-L2O: Towards Generalizable Learning-to-Optimize by Test-Time Fast Self-Adaptation”</b><br>International Conference on Learning Representations (ICLR), 2023. <a href="https://openreview.net/forum?id=s7oOe6cNRT8">[Paper]</a> <a href="https://github.com/VITA-Group/M-L2O">[Code]</a></li>
<li>H. Fan, Z. Wang, Y. Yang, and M. Kankanhalli<br> <b style="color:rgb(71, 71, 71)">“Continuous-Discrete Convolution for (3+1)D Geometry-Sequence Modeling in Proteins”</b><br>International Conference on Learning Representations (ICLR), 2023. <a href="https://openreview.net/forum?id=P5Z-Zl9XJ7">[Paper]</a> <a href="https://github.com/hehefan/Continuous-Discrete-Convolution">[Code] </a></li>
<li>H. Yang* and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“On the Neural Tangent Kernel Analysis of Randomly Pruned Neural Networks”</b><br>International Conference on Artificial Intelligence and Statistics (AISTATS), 2023. <a href="https://arxiv.org/abs/2203.14328">[Paper]</a> <a href="https://github.com/VITA-Group/Random-Pruning-NTK">[Code]</a> </li>
<li>J. Yang, T. Chen*, M. Zhu*, F. He, D. Tao, Y. Liang, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Learning to Generalize Provably in Learning to Optimize”</b><br>International Conference on Artificial Intelligence and Statistics (AISTATS), 2023. <a href="https://arxiv.org/abs/2302.11085">[Paper]</a> <a href="https://github.com/VITA-Group/Open-L2O/tree/main/Model_Free_L2O/L2O-Entropy">[Code] </a> </li>
<li>H. Heaton, X. Chen*, Z. Wang, and W. Yin<br> <b style="color:rgb(71, 71, 71)">“Safeguarded Learned Convex Optimization”</b><br>AAAI Conference on Artificial Intelligence (AAAI), 2023. <a href="https://arxiv.org/abs/2003.01880">[Paper]</a> [Code] </li>
<li>J. Hong, H. Wang*, Z. Wang, and J. Zhou<br> <b style="color:rgb(71, 71, 71)">“Federated Robustness Propagation: Sharing Adversarial Robustness in Heterogeneous Federated Learning”</b><br>AAAI Conference on Artificial Intelligence (AAAI), 2023. <a href="https://arxiv.org/abs/2106.10196">[Paper]</a> [Code] </li>
<li>Z. Kong, H. Ma, G. Yuan, M. Sun, Y. Xie, P. Dong, X. Meng, X. Shen, H. Tang, M. Qin, T. Chen*, X. Ma, X. Xie, Z. Wang, and Y. Wang<br> <b style="color:rgb(71, 71, 71)">“Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision Transformer Training”</b><br>AAAI Conference on Artificial Intelligence (AAAI), 2023. <a href="https://arxiv.org/abs/2211.10801">[Paper]</a> <a href="https://github.com/ZLKong/Tri-Level-ViT">[Code] </a> </li>
<li>T. Huang, T. Chen*, M. Fang, V. Menkovski, J. Zhao, L. Yin, Y. Pei, D. Mocanu, Z. Wang, M. Pechenizkiy, and S. Liu*<br> <b style="color:rgb(71, 71, 71)">“You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained Graph Tickets”</b><br>Learning on Graphs Conference (LoG), 2022. (Oral & Best Paper Award) <a href="https://openreview.net/forum?id=dF6aEW3_62O">[Paper]</a> <a href="https://github.com/TienjinHuang/UGTs-LoG">[Code] </a> </li>
<li>Y. Han*, E. Huang, W. Zheng*, N. Rao, Z. Wang, and K. Subbian<br> <b style="color:rgb(71, 71, 71)">“Search Behavior Prediction: A Hypergraph Perspective”</b><br>ACM International Conference on Web Search and Data Mining (WSDM), 2023. <a href="https://arxiv.org/abs/2211.13328">[Paper]</a> <a href="https://github.com/amazon-science/dual-channel-hypergraph-neural-network">[Code] </a> </li>
<li>W. Chen*, W. Huang, X. Gong*, B. Hanin, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2022. <a href="https://arxiv.org/abs/2205.05662">[Paper]</a> <a href="https://github.com/VITA-Group/architecture_convergence">[Code] </a> </li>
<li>D. Xu*, P. Wang*, Y. Jiang*, Z. Fan*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Signal Processing for Implicit Neural Representations”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2022. <a href="https://arxiv.org/abs/2210.08772">[Paper]</a> <a href="https://vita-group.github.io/INSP/">[Code] </a> </li>
<li>H. Liang*, Z. Fan*, R. Sarkar, Z. Jiang*, T. Chen*, K. Zou, Y. Cheng, C. Hao, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“M<sup>3</sup>ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2022. <a href="https://arxiv.org/abs/2210.14793">[Paper]</a> <a href="https://github.com/VITA-Group/M3ViT">[Code] </a> </li>
<li>Z. Jiang*, X. Chen*, X. Huang, X. Du, D. Zhou, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Back Razor: Memory-Efficient Transfer Learning by Self-Sparsified Backpropagation”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2022. <a href="https://openreview.net/forum?id=mTXQIpXPDbh">[Paper]</a> <a href="https://github.com/VITA-Group/BackRazor_Neurips22">[Code] </a> </li>
<li>R. Cai*, Z. Zhang*, T. Chen*, X. Chen*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2022. <a href="https://openreview.net/forum?id=TItRK4VP9X2">[Paper]</a> <a href="https://github.com/VITA-Group/Random-Shuffling-BackdoorDetect">[Code] </a> </li>
<li>S. Sharan*, W. Zheng*, K. Hsu, J. Xiong, A. Chen, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Symbolic Distillation for Learned TCP Congestion Control”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2022. <a href="https://arxiv.org/abs/2210.16987">[Paper]</a> <a href="https://github.com/VITA-Group/SymbolicPCC">[Code] </a> </li>
<li>A. Jaiswal*, P. Wang*, T. Chen*, J. Rousseau, Y. Ding, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Old can be Gold: Better Gradient Flow can make Vanilla-GCNs Great Again”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2022. <a href="https://arxiv.org/abs/2210.08122">[Paper]</a> <a href="https://github.com/VITA-Group/GradientGCN">[Code]</a></li>
<li>H. Wang*, J. Hong, A. Zhang, J. Zhou, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2022. <a href="https://arxiv.org/abs/2210.06428">[Paper]</a> <a href="https://github.com/VITA-Group/Trap-and-Replace-Backdoor-Defense">[Code]</a> </li>
<li>J. Wu*, Y. Liang, F. Han, H. Akbari, Z. Wang, and C. Yu<br> <b style="color:rgb(71, 71, 71)">“Scaling Multimodal Pre-Training via Cross-Modality Gradient Harmonization”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2022. <a href="https://openreview.net/forum?id=7WuCttgNQ79">[Paper]</a> [Code] </li>
<li>M. Varma*, X. Chen*, Z. Zhang*, T. Chen*, S. Venugopalan, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Sparse Winning Tickets are Data-Efficient Image Recognizers”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2022. <a href="https://openreview.net/forum?id=wfKbtSjHA6F">[Paper]</a> <a href="https://github.com/VITA-Group/DataEfficientLTH">[Code]</a> </li>
<li>T. Wei, Y. You*, T. Chen*, Y. Shen, J. He, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Augmentations in Hypergraph Contrastive Learning: Fabricated and Generative”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2022. <a href="https://arxiv.org/abs/2210.03801">[Paper]</a> <a href="https://github.com/weitianxin/HyperGCL">[Code]</a> </li>
<li>K. Duan, Z. Liu, P. Wang*, W. Zheng*, K. Zhou, T. Chen*, X. Hu, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking”</b><br>Advances in Neural Information Processing Systems, Track on Datasets and Benchmarks (NeurIPS D & B), 2022. <a href="https://openreview.net/forum?id=2QrFr_U782Z">[Paper]</a> <a href="https://github.com/VITA-Group/Large_Scale_GCN_Benchmarking">[Code] </a> </li>
<li>D. Xu*, Y. Jiang*, P. Wang*, Z. Fan*, H. Shi, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“SinNeRF: Training Neural Radiance Field on Complex Scenes from a Single Image”</b><br>European Conference on Computer Vision (ECCV), 2022. <a href="https://arxiv.org/abs/2204.00928">[Paper]</a> <a href="https://vita-group.github.io/SinNeRF/">[Code] </a> </li>
<li>Z. Fan*, Y. Jiang*, P. Wang*, X. Gong*, D. Xu*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Unified Implicit Neural Stylization”</b><br>European Conference on Computer Vision (ECCV), 2022. <a href="https://arxiv.org/abs/2204.01943">[Paper]</a> <a href="https://zhiwenfan.github.io/INS/">[Code] </a> </li>
<li>X. Chen*, T. Chen*, Y. Cheng, W. Chen, A. Awadallah, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Scalable Learning to Optimize: A Learned Optimizer Can Train Big Models”</b><br>European Conference on Computer Vision (ECCV), 2022. <a href="https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136830376.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/Scalable-L2O">[Code] </a> </li>
<li>H. Liang*, H. Fan*, Z. Fan*, Y. Wang*, T. Chen*, Y. Cheng, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Point Cloud Domain Adaptation via Masked Local 3D Structure Prediction”</b><br>European Conference on Computer Vision (ECCV), 2022. <a href="https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136630159.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/MLSP">[Code] </a> </li>
<li> Z. Jiang*, T. Chen*, X. Chen*, Y. Cheng, L. Zhou, L. Yuan, A. Awadallah, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“DnA: Improving Few-shot Transfer Learning with Low-Rank Decomposition and Alignment”</b><br>European Conference on Computer Vision (ECCV), 2022. <a href="https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136800229.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/DnA">[Code] </a> </li>
<li> Y. Jiang*, B. Wronski, B. Mildenhall, J. Barron, Z. Wang, and T. Xue<br> <b style="color:rgb(71, 71, 71)">“Fast and High Quality Image Denoising via Malleable Convolution”</b><br>European Conference on Computer Vision (ECCV), 2022. <a href="https://arxiv.org/abs/2201.00392">[Paper]</a> <a href="https://yifanjiang.net/MalleConv.html">[Code] </a> </li>
<li> W. Chen*, X. Du, F. Yang, L. Beyer, X. Zhai, T. Lin, H. Chen, J. Li, X. Song, Z. Wang, and D. Zhou<br> <b style="color:rgb(71, 71, 71)">“A Simple Single-Scale Vision Transformer for Object Detection and Instance Segmentation”</b><br>European Conference on Computer Vision (ECCV), 2022. <a href="https://arxiv.org/abs/2112.09747">[Paper]</a> [Code] </li>
<li> Z. Mao, A. Jaiswal*, Z. Wang, and S. Chan<br> <b style="color:rgb(71, 71, 71)">“Single Frame Atmospheric Turbulence Mitigation: A Benchmark Study and A New Physics-Inspired Transformer Model”</b><br>European Conference on Computer Vision (ECCV), 2022. <a href="https://arxiv.org/abs/2207.10040">[Paper]</a> <a href="https://github.com/VITA-Group/TurbNet">[Code] </a> </li>
<li>H. Wang*, A. Zhang, Y. Zhu, S. Zheng, M. Li, A. Smola, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition”</b><br>International Conference on Machine Learning (ICML), 2022. (Long Talk) <a href="https://proceedings.mlr.press/v162/wang22aq/wang22aq.pdf">[Paper]</a> <a href="https://github.com/amazon-research/long-tailed-ood-detection">[Code]</a> </li>
<li>H. Wang*, A. Zhang, S. Zheng, X. Shi, M. Li, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Removing Batch Normalization Boosts Adversarial Training”</b><br>International Conference on Machine Learning (ICML), 2022. <a href="https://proceedings.mlr.press/v162/wang22ap/wang22ap.pdf">[Paper]</a> <a href="https://github.com/amazon-research/normalizer-free-robust-training">[Code]</a> </li>
<li>P. Wang*, Z. Fan*, T. Chen*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Neural Implicit Dictionary Learning via Mixture-of-Expert Training”</b><br>International Conference on Machine Learning (ICML), 2022. <a href="https://proceedings.mlr.press/v162/wang22d/wang22d.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/Neural-Implicit-Dict">[Code]</a> </li>
<li>A. Jaiswal*, H. Ma, T. Chen*, Y. Ding, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Training Your Sparse Neural Network Better with Any Mask”</b><br>International Conference on Machine Learning (ICML), 2022. <a href="https://proceedings.mlr.press/v162/jaiswal22a/jaiswal22a.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/ToST">[Code]</a> </li>
<li>T. Chen*, H. Zhang, Z. Zhang*, S. Chang, S. Liu, P. Chen, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Linearity Grafting: How Neuron Pruning Helps Certifiable Robustness”</b><br>International Conference on Machine Learning (ICML), 2022. <a href="https://proceedings.mlr.press/v162/chen22af/chen22af.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/Linearity-Grafting">[Code]</a> </li>
<li>T. Chen*, X. Chen*, X. Ma, Y. Wang, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets”</b><br>International Conference on Machine Learning (ICML), 2022. <a href="https://proceedings.mlr.press/v162/chen22a/chen22a.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/Structure-LTH">[Code]</a></li>
<li>T. Chen*, Z. Zhang*, S. Liu, Y. Zhang, S. Chang, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Data-Efficient Double-Win Lottery Tickets from Robust Pre-training”</b><br>International Conference on Machine Learning (ICML), 2022. <a href="https://proceedings.mlr.press/v162/chen22ae/chen22ae.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/Double-Win-LTH">[Code]</a> </li>
<li>W. Redman, T. Chen*, Z. Wang, and A. Dogra<br> <b style="color:rgb(71, 71, 71)">“Universality of Winning Tickets: A Renormalization Group Perspective”</b><br>International Conference on Machine Learning (ICML), 2022. <a href="https://proceedings.mlr.press/v162/redman22a/redman22a.pdf">[Paper]</a> [Code]</li>
<li>R. Ardywibowo, Z. Huo, Z. Wang, B. Mortazavi, S. Huang, and X. Qian<br> <b style="color:rgb(71, 71, 71)">“VariGrow: Variational Architecture Growing for Task-Agnostic Continual Learning based on Bayesian Novelty”</b><br>International Conference on Machine Learning (ICML), 2022. <a href="https://proceedings.mlr.press/v162/ardywibowo22a/ardywibowo22a.pdf">[Paper]</a> [Code] </li>
<li>D. Hoang*, K. Zhou, T. Chen*, X. Hu, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“AutoCoG: A Unified Data-Model Co-Search Framework for Graph Neural Networks”</b><br>International Conference on Automated Machine Learning (AutoML-Conf), 2022. <a href="https://openreview.net/forum?id=r0zIWWar8gq">[Paper]</a> <a href="https://github.com/VITA-Group/AutoCoG"> [Code]</a></li>
<li>J. Hong, Z. Wang, and J. Zhou<br> <b style="color:rgb(71, 71, 71)">“Dynamic Privacy Budget Allocation Improves Data Efficiency of Differentially Private Gradient Descent”</b><br>ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2022. <a href="https://arxiv.org/abs/2101.07413">[Paper]</a> [Code]</li>
<li>T. Chen*, Z. Zhang*, Y. Cheng, A. Awadallah, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“The Principle of Diversity: Training Stronger Vision Transformers Calls for Reducing All Levels of Redundancy”</b><br>IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. <a href="https://openaccess.thecvf.com/content/CVPR2022/papers/Chen_The_Principle_of_Diversity_Training_Stronger_Vision_Transformers_Calls_for_CVPR_2022_paper.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/Diverse-ViT">[Code]</a></li>
<li>T. Chen*, P. Wang*, Z. Fan*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations”</b><br>IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. <a href="https://openaccess.thecvf.com/content/CVPR2022/papers/Chen_Aug-NeRF_Training_Stronger_Neural_Radiance_Fields_With_Triple-Level_Physically-Grounded_Augmentations_CVPR_2022_paper.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/Aug-NeRF">[Code]</a></li>
<li>Z. Fan*, T. Chen*, P. Wang*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“CADTransformer: Panoptic Symbol Spotting Transformer for CAD Drawings”</b><br>IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. (Oral) <a href="https://openaccess.thecvf.com/content/CVPR2022/papers/Fan_CADTransformer_Panoptic_Symbol_Spotting_Transformer_for_CAD_Drawings_CVPR_2022_paper.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/CADTransformer">[Code]</a></li>
<li>T. Chen*, Z. Zhang*, Y. Zhang, S. Chang, S. Liu, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free”</b><br>IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. <a href="https://openaccess.thecvf.com/content/CVPR2022/papers/Chen_Quarantine_Sparsity_Can_Uncover_the_Trojan_Attack_Trigger_for_Free_CVPR_2022_paper.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/Backdoor-LTH">[Code]</a></li>
<li>X. Sun, A. Hassani, Z. Wang, G. Huang, and H. Shi<br> <b style="color:rgb(71, 71, 71)">“DiSparse: Disentangled Sparsification for Multitask Model Compression”</b><br>IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. <a href="https://openaccess.thecvf.com/content/CVPR2022/papers/Sun_DiSparse_Disentangled_Sparsification_for_Multitask_Model_Compression_CVPR_2022_paper.pdf">[Paper]</a> <a href="https://github.com/SHI-Labs/DiSparse-Multitask-Model-Compression">[Code]</a></li>
<li>Z. Chen, Y. Chen, J. Liu, X. Xu, V. Goel, Z. Wang, H. Shi, and X. Wang<br> <b style="color:rgb(71, 71, 71)">“VideoINR: Learning Video Implicit Neural Representation for Continuous Space-Time Super-Resolution”</b><br>IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. <a href="https://openaccess.thecvf.com/content/CVPR2022/papers/Chen_VideoINR_Learning_Video_Implicit_Neural_Representation_for_Continuous_Space-Time_Super-Resolution_CVPR_2022_paper.pdf">[Paper]</a> <a href="https://github.com/Picsart-AI-Research/VideoINR-Continuous-Space-Time-Super-Resolution">[Code]</a></li>
<li>H. Ma, H. Zhao, Z. Lin, A. Kale, Z. Wang, T. Yu, J. Gu, S. Choudhary, and X. Xie<br> <b style="color:rgb(71, 71, 71)">“EI-CLIP: Entity-aware Interventional Contrastive Learning for E-commerce Cross-modal Retrieval”</b><br>IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. <a href="https://openaccess.thecvf.com/content/CVPR2022/papers/Ma_EI-CLIP_Entity-Aware_Interventional_Contrastive_Learning_for_E-Commerce_Cross-Modal_Retrieval_CVPR_2022_paper.pdf">[Paper]</a> [Code]</li>
<li>W. Zheng*, T. Chen*, T. Hu*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Symbolic Learning to Optimize: Towards Interpretability and Scalability”</b><br>International Conference on Learning Representations (ICLR), 2022. <a href="https://openreview.net/forum?id=ef0nInZHKIC">[Paper]</a> <a href="https://github.com/VITA-Group/Symbolic-Learning-To-Optimize">[Code]</a></li>
<li>X. Chen*, J. Zhang*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Peek-a-Boo: What (More) is Disguised in a Randomly Weighted Neural Network, and How to Find It Efficiently”</b><br>International Conference on Learning Representations (ICLR), 2022. <a href="https://openreview.net/forum?id=moHCzz6D5H3">[Paper]</a> <a href="https://github.com/VITA-Group/Peek-a-Boo">[Code]</a></li>
<li>T. Huang*, T. Chen*, S. Liu, S. Chang, L. Amini, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Optimizer Amalgamation”</b><br>International Conference on Learning Representations (ICLR), 2022. <a href="https://openreview.net/forum?id=VqzXzA9hjaX">[Paper]</a> <a href="https://github.com/VITA-Group/Optimizer_Amalgamation">[Code]</a></li>
<li>T. Chen*, Z. Zhang*, P. Wang, S. Balachandra*, H. Ma, Z. Wang, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Sparsity Winning Twice: Better Robust Generalization from More Efficient Training”</b><br>International Conference on Learning Representations (ICLR), 2022. <a href="https://openreview.net/forum?id=SYuJXrXq8tw">[Paper]</a> <a href="https://github.com/VITA-Group/Sparsity-Win-Robust-Generalization">[Code]</a></li>
<li>P. Wang*, W. Zheng*, T. Chen*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Anti-Oversmoothing in Deep Vision Transformers via the Fourier Domain Analysis: From Theory to Practice”</b><br>International Conference on Learning Representations (ICLR), 2022. <a href="https://openreview.net/forum?id=O476oWmiNNp">[Paper]</a> <a href="https://github.com/VITA-Group/ViT-Anti-Oversmoothing">[Code]</a></li>
<li>W. Chen*, W. Huang, X. Du, X. Song, Z. Wang, and D. Zhou<br> <b style="color:rgb(71, 71, 71)">“Auto-Scaling Vision Transformers without Training”</b><br>International Conference on Learning Representations (ICLR), 2022. <a href="https://openreview.net/forum?id=H94a1_Pyr-6">[Paper]</a> <a href="https://github.com/VITA-Group/AsViT">[Code]</a></li>
<li>S. Yu*, T. Chen*, J. Shen*, H. Yuan, J. Tian, S. Yang, J. Liu, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Unified Visual Transformer Compression”</b><br>International Conference on Learning Representations (ICLR), 2022. <a href="https://openreview.net/forum?id=9jsZiUgkCZP">[Paper]</a> <a href="https://github.com/VITA-Group/UVC">[Code]</a></li>
<li>M. Lu*, X. Luo*, T. Chen*, W. Chen*, D. Liu, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, And No Retraining”</b><br>International Conference on Learning Representations (ICLR), 2022. (Spotlight) <a href="https://openreview.net/forum?id=O1DEtITim__">[Paper]</a> <a href="https://github.com/VITA-Group/SFW-Once-for-All-Pruning">[Code]</a></li>
<li>W. Zheng*, E. Huang, N. Rao, S. Katariya, Z. Wang, and K. Subbian<br> <b style="color:rgb(71, 71, 71)">“Cold Brew: Distilling Graph Node Representations with Incomplete or Missing Neighborhoods”</b><br>International Conference on Learning Representations (ICLR), 2022. <a href="https://openreview.net/forum?id=1ugNpm7W6E">[Paper]</a> <a href="https://github.com/amazon-research/gnn-tail-generalization">[Code]</a></li>
<li>S. Ding, T. Chen*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Audio Lottery: Speech Recognition Made Ultra-Lightweight, Noise-Robust, and Transferable”</b><br>International Conference on Learning Representations (ICLR), 2022. <a href="https://openreview.net/forum?id=9Nk6AJkVYB">[Paper]</a> <a href="https://github.com/VITA-Group/Audio-Lottery">[Code]</a></li>
<li>J. Hong, H. Wang*, Z. Wang, and J. Zhou<br> <b style="color:rgb(71, 71, 71)">“Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization”</b><br>International Conference on Learning Representations (ICLR), 2022. <a href="https://openreview.net/forum?id=_QLmakITKg">[Paper]</a> <a href="https://github.com/illidanlab/SplitMix">[Code]</a></li>
<li>S. Liu, T. Chen*, Z. Atashgahi, X. Chen*, G. Sokar, E. Mocanu, M. Pechenizkiy, Z. Wang, and D. Mocanu<br> <b style="color:rgb(71, 71, 71)">“Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity”</b><br>International Conference on Learning Representations (ICLR), 2022. <a href="https://openreview.net/forum?id=RLtqs6pzj1-">[Paper]</a> <a href="https://github.com/VITA-Group/FreeTickets">[Code]</a></li>
<li>S. Liu, T. Chen*, X. Chen*, L. Shen, D. Mocanu, Z. Wang, and M. Pechenizkiy<br> <b style="color:rgb(71, 71, 71)">“The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training”</b><br>International Conference on Learning Representations (ICLR), 2022. <a href="https://openreview.net/forum?id=VBZJ_3tz-t">[Paper]</a> <a href="https://github.com/VITA-Group/Random_Pruning">[Code]</a></li>
<li>Y. You*, Y. Cao, T. Chen*, Z. Wang, and Y. Shen<br> <b style="color:rgb(71, 71, 71)">“Bayesian Modeling and Uncertainty Quantification for Learning to Optimize: What, Why, and How”</b><br>International Conference on Learning Representations (ICLR), 2022. <a href="https://openreview.net/forum?id=EVVadRFRgL7">[Paper]</a> <a href="https://github.com/Shen-Lab/Bayesian-L2O">[Code]</a></li>
<li> R. Ardywibowo, S. Boluki, Z. Wang, B. Mortazavi, S. Huang, and X. Qian<br> <b style="color:rgb(71, 71, 71)">“VFDS: Variational Foresight Dynamic Selection in Bayesian Neural Networks for Efficient Human Activity Recognition”</b><br>International Conference on Artificial Intelligence and Statistics (AISTATS), 2022. <a href="https://arxiv.org/abs/2204.00130">[Paper]</a> [Code]</li>
<li> S. Bibikar, H. Vikalo, Z. Wang, and X. Chen* (X. C. as corresponding author)<br> <b style="color:rgb(71, 71, 71)">“Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better”</b><br>AAAI Conference on Artificial Intelligence (AAAI), 2022. <a href="https://arxiv.org/abs/2112.09824">[Paper]</a> <a href="https://github.com/bibikar/feddst">[Code]</a></li>
<li>Y. You*, T. Chen*, Z. Wang, and Y. Shen<br> <b style="color:rgb(71, 71, 71)">“Bringing Your Own View: Graph Contrastive Learning without Prefabricated Data Augmentations”</b><br>ACM International Conference on Web Search and Data Mining (WSDM), 2022. <a href="https://arxiv.org/abs/2201.01702">[Paper]</a> <a href="https://github.com/Shen-Lab/GraphCL_Automated">[Code]</a></li>
<li>Y. Jiang*, S. Chang, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“TransGAN: Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2021. <a href="https://arxiv.org/abs/2102.07074">[Paper]</a> <a href="https://github.com/VITA-Group/TransGAN">[Code]</a></li>
<li>H. Wang*, C. Xiao, J. Kossaifi, Z. Yu, A. Anandkumar, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“AugMax: Adversarial Composition of Random Augmentations for Robust Training”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2021. <a href="https://arxiv.org/abs/2110.13771">[Paper]</a> <a href="https://github.com/VITA-Group/AugMax">[Code]</a></li>
<li>T. Chen*, Y. Cheng, Z. Gan, J. Liu, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Data-Efficient GAN Training Beyond (Just) Augmentations: A Lottery Ticket Perspective”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2021. <a href="https://arxiv.org/abs/2103.00397">[Paper]</a> <a href="https://github.com/VITA-Group/Ultra-Data-Efficient-GAN-Training">[Code]</a></li>
<li>X. Chen*, Y. Cheng, S. Wang, Z. Gan, J. Liu, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“The Elastic Lottery Ticket Hypothesis”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2021. <a href="https://arxiv.org/abs/2103.16547">[Paper]</a> <a href="https://github.com/VITA-Group/ElasticLTH">[Code]</a></li>
<li>T. Chen*, Y. Cheng, Z. Gan, L. Yuan, L. Zhang, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Chasing Sparsity in Vision Transformers: An End-to-End Exploration”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2021. <a href="https://arxiv.org/abs/2106.04533">[Paper]</a> <a href="https://github.com/VITA-Group/SViTE">[Code]</a></li>
<li>W. Zheng*, Q. Guo, H. Yang, P. Wang*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Delayed Propagation Transformer: A Universal Computation Engine towards Practical Control in Cyber-Physical Systems”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2021. <a href="https://arxiv.org/abs/2110.15926">[Paper]</a> <a href="https://github.com/VITA-Group/DePT">[Code]</a></li>
<li>X. Chen*, T. Chen*, Z. Zhang*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“You Are Caught Stealing My Winning Lottery Ticket! Making a Lottery Ticket Claim its Ownership”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2021. <a href="https://arxiv.org/abs/2111.00162">[Paper]</a> <a href="https://github.com/VITA-Group/NO-stealing-LTH">[Code]</a></li>
<li>Z. Jiang*, T. Chen*, T. Chen, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Improving Contrastive Learning on Imbalanced Seed Data via Open-World Sampling”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2021. <a href="https://arxiv.org/abs/2111.01004">[Paper]</a> <a href="https://github.com/VITA-Group/MAK">[Code]</a></li>
<li>X. Chen*, J. Liu, Z. Wang, and W. Yin<br> <b style="color:rgb(71, 71, 71)">“Hyperparameter Tuning is All You Need for LISTA”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2021. <a href="https://arxiv.org/abs/2110.15900">[Paper]</a> <a href="https://github.com/VITA-Group/HyperLISTA">[Code]</a></li>
<li>J. Wu*, X. Dai, D. Chen, Y. Chen, M. Liu, Y. Yu, Z. Wang, Z. Liu, M. Chen, and L. Yuan<br> <b style="color:rgb(71, 71, 71)">“Stronger NAS with Weaker Predictors”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2021. <a href="https://arxiv.org/abs/2102.10490">[Paper]</a> <a href="https://github.com/VITA-Group/WeakNAS">[Code]</a></li>
<li>B. Pan, R. Panda, Y. Jiang*, Z. Wang, R. Feris, and A. Oliva<br> <b style="color:rgb(71, 71, 71)">“IA-RED<sup>2</sup>: Interpretability-Aware Redundancy Reduction for Vision Transformers”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2021. <a href="https://arxiv.org/abs/2106.12620">[Paper]</a> <a href="http://people.csail.mit.edu/bpan/ia-red/">[Code]</a></li>
<li>S. Liu, T. Chen*, X. Chen*, Z. Atashgahi, L. Yin, H. Kou, L. Shen, M. Pechenizkiy, Z. Wang, and D. Mocanu<br> <b style="color:rgb(71, 71, 71)">“Sparse Training via Boosting Pruning Plasticity with Neuroregeneration”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2021. <a href="https://arxiv.org/abs/2106.10404">[Paper]</a> <a href="https://github.com/VITA-Group/GraNet">[Code]</a></li>
<li>X. Ma, G. Yuan, X. Shen, T. Chen*, X. Chen*, X. Chen*, N. Liu, M. Qin, S. Liu, Z. Wang, and Y. Wang<br> <b style="color:rgb(71, 71, 71)">“Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot?”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2021. <a href="https://arxiv.org/abs/2107.00166">[Paper]</a> <a href="https://github.com/boone891214/sanity-check-LTH">[Code]</a></li>
<li> Y. Jiang*, H. Zhang, J. Zhang, Y. Wang, Z. Lin, K. Sunkavalli, S. Chen, S. Amirghodsi, S. Kong, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“SSH: A Self-Supervised Framework for Image Harmonization”</b><br> IEEE International Conference on Computer Vision (ICCV), 2021. <a href="https://arxiv.org/pdf/2108.06805.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/SSHarmonization">[Code]</a></li>
<li> X. Gong*, H. Wang, M. Shou, M. Feiszli, Z. Wang, and Z. Yan<br> <b style="color:rgb(71, 71, 71)">“Searching for Two-Stream Models in Multivariate Space for Video Recognition”</b><br> IEEE International Conference on Computer Vision (ICCV), 2021. <a href="https://arxiv.org/abs/2108.12957">[Paper]</a> [Code]</li>
<li> Y. Guo, H. Yuan, J. Tan, Z. Wang, S. Yang, and J. Liu<br> <b style="color:rgb(71, 71, 71)">“GDP: Stabilized Neural Network Pruning via Gates with Differentiable Polarization”</b><br> IEEE International Conference on Computer Vision (ICCV), 2021. <a href="https://arxiv.org/abs/2109.02220">[Paper]</a> [Code]</li>
<li>Y. You*, T. Chen*, Y. Shen, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Graph Contrastive Learning Automated”</b><br> International Conference on Machine Learning (ICML), 2021. (Long Talk) <a href="https://arxiv.org/abs/2106.07594">[Paper]</a> <a href="https://github.com/Shen-Lab/GraphCL_Automated">[Code]</a> </li>
<li>M. Zhu*, T. Chen*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm”</b><br> International Conference on Machine Learning (ICML), 2021. (Long Talk) <a href="https://arxiv.org/abs/2106.06027">[Paper]</a> <a href="https://github.com/VITA-Group/SparseADV_Homotopy">[Code]</a> </li>
<li> T. Chen*, Y. Sui, X. Chen*, A. Zhang, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“A Unified Lottery Ticket Hypothesis for Graph Neural Networks”</b><br> International Conference on Machine Learning (ICML), 2021. <a href="https://arxiv.org/abs/2102.06790">[Paper]</a> <a href="https://github.com/VITA-Group/Unified-LTH-GNN">[Code]</a> </li>
<li> Z. Jiang*, T. Chen*, B. Mortazavi, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Self-Damaging Contrastive Learning”</b><br> International Conference on Machine Learning (ICML), 2021. <a href="https://arxiv.org/abs/2106.02990">[Paper]</a> <a href="https://github.com/VITA-Group/SDCLR">[Code]</a> </li>
<li> Z. Zhang*, X. Chen*, T. Chen*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Efficient Lottery Ticket Finding: Less Data is More”</b><br> International Conference on Machine Learning (ICML), 2021. <a href="https://arxiv.org/abs/2106.03225">[Paper]</a> <a href="https://github.com/VITA-Group/PrAC-LTH">[Code]</a> </li>
<li>X. Chen*, Y. Cheng, S. Wang, Z. Gan, Z. Wang, and J. Liu<br> <b style="color:rgb(71, 71, 71)">“EarlyBERT: Efficient BERT Training via Early-Bird Lottery Tickets”</b><br> Annual Meeting of the Association for Computational Linguistics (ACL), 2021. (Long) <a href="https://arxiv.org/abs/2101.00063">[Paper]</a> <a href="https://github.com/VITA-Group/EarlyBERT">[Code]</a> </li>
<li> J. Hong, Z. Zhu, S. Yu, Z. Wang, H. Dodge, and J. Zhou<br> <b style="color:rgb(71, 71, 71)">“Federated Adversarial Debiasing for Fair and Transferable Representations”</b><br> ACM Conference on Knowledge Discovery and Data Mining (KDD), 2021. <a href="https://dl.acm.org/doi/10.1145/3447548.3467281">[Paper]</a> <a href="https://github.com/illidanlab/FADE">[Code]</a> </li>
<li>T. Chen*, J. Frankle, S. Chang, S. Liu, Y. Zhang, M. Carbin, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models”</b><br> IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. <a href="https://arxiv.org/abs/2012.06908">[Paper]</a> <a href="https://github.com/VITA-Group/CV_LTH_Pre-training">[Code]</a></li>
<li>Z. Wang, H. Wang*, T. Chen*, Z. Wang, and K. Ma<br> <b style="color:rgb(71, 71, 71)">“Troubleshooting Blind Image Quality Models in the Wild”</b><br> IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. <a href="https://arxiv.org/abs/2105.06747">[Paper]</a> <a href="https://github.com/wangzhihua520/troubleshooting_BIQA">[Code]</a></li>
<li>P. Cao, Z. Wang, and K. Ma<br> <b style="color:rgb(71, 71, 71)">“Debiased Subjective Assessment of Real-World Image Enhancement”</b><br> IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. <a href="https://openaccess.thecvf.com/content/CVPR2021/papers/Cao_Debiased_Subjective_Assessment_of_Real-World_Image_Enhancement_CVPR_2021_paper.pdf">[Paper]</a> [Code]</li>
<li>H. Ma, T. Chen*, T. Hu*, C. You, X. Xie, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Undistillable: Making A Nasty Teacher That CANNOT Teach Students”</b><br> International Conference on Learning Representations (ICLR), 2021. (Spotlight) <a href="https://openreview.net/forum?id=0zvfm-nZqQs">[Paper]</a> <a href="https://github.com/VITA-Group/Nasty-Teacher">[Code]</a></li>
<li>T. Chen*, Z. Zhang*, S. Liu, S. Chang, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Long Live the Lottery: The Existence of Winning Tickets in Lifelong Learning”</b><br> International Conference on Learning Representations (ICLR), 2021. <a href="https://openreview.net/forum?id=LXMSvPmsm0g">[Paper]</a> <a href="https://github.com/VITA-Group/Lifelong-Learning-LTH">[Code]</a></li>
<li>T. Chen*, Z. Zhang*, S. Liu, S. Chang, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Robust Overfitting May be Mitigated by Properly Learned Smoothening”</b><br> International Conference on Learning Representations (ICLR), 2021. <a href="https://openreview.net/forum?id=qZzy5urZw9">[Paper]</a> <a href="https://github.com/VITA-Group/Alleviate-Robust-Overfitting">[Code]</a></li>
<li>W. Chen*, X. Gong*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective”</b><br> International Conference on Learning Representations (ICLR), 2021. <a href="https://openreview.net/forum?id=Cnon5ezMHtu">[Paper]</a> <a href="https://github.com/VITA-Group/TENAS">[Code]</a></li>
<li>W. Chen*, Z. Yu, S. Mello, S. Liu, J. Alvarez, Z. Wang, and A. Anandkumar<br> <b style="color:rgb(71, 71, 71)">“Contrastive Syn-to-Real Generalization”</b><br> International Conference on Learning Representations (ICLR), 2021. <a href="https://openreview.net/forum?id=F8whUO8HNbP">[Paper]</a> <a href="https://github.com/NVlabs/CSG">[Code]</a></li>
<li>T. Meng, X. Chen*, Y. Jiang*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“A Design Space Study for LISTA and Beyond”</b><br> International Conference on Learning Representations (ICLR), 2021. <a href="https://openreview.net/forum?id=GMgHyUPrXa">[Paper]</a> <a href="https://github.com/google-research/google-research/tree/master/lista_design_space">[Code]</a></li>
<li>J. Shen*, X. Chen*, H. Heaton, T. Chen*, J. Liu, W. Yin, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Learning A Minimax Optimizer: A Pilot Study”</b><br> International Conference on Learning Representations (ICLR), 2021. <a href="https://openreview.net/forum?id=nkIDwI6oO4_">[Paper]</a> <a href="https://github.com/VITA-Group/L2O-Minimax">[Code]</a></li>
<li>J. Shen*, H. Wang*, S. Gui, J. Tan, Z. Wang, and J. Liu<br> <b style="color:rgb(71, 71, 71)">“UMEC: Unified Model and Embedding Compression for Efficient Recommendation Systems”</b><br> International Conference on Learning Representations (ICLR), 2021. <a href="https://openreview.net/forum?id=BM---bH_RSh">[Paper]</a> <a href="https://github.com/VITA-Group/UMEC">[Code]</a></li>
<li>J. Hong, H. Wang*, Z. Wang, and J. Zhou<br> <b style="color:rgb(71, 71, 71)">“Learning Model-Based Privacy Protection under Budget Constraints”</b><br> AAAI Conference on Artificial Intelligence (AAAI), 2021. <a href="https://www.aaai.org/AAAI21Papers/AAAI-7394.HongJ.pdf">[Paper]</a> [Code]</li>
<li>T. Chen*, W. Zhang, J. Zhou, S. Chang, S. Liu, L. Amini, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Training Stronger Baselines for Learning to Optimize”</b><br> Advances in Neural Information Processing Systems (NeurIPS), 2020. (Spotlight) <a href="https://arxiv.org/abs/2010.09089">[Paper]</a> <a href="https://github.com/VITA-Group/L2O-Training-Techniques">[Code]</a></li>
<li>H. Wang*, T. Chen*, S. Gui, T. Hu*, J. Liu, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free”</b><br> Advances in Neural Information Processing Systems (NeurIPS), 2020. <a href="https://arxiv.org/abs/2010.11828">[Paper]</a> <a href="https://github.com/VITA-Group/Once-for-All-Adversarial-Training">[Code]</a></li>
<li>T. Chen*, J. Frankle, S. Chang, S. Liu, Y. Zhang, Z. Wang, and M. Carbin<br> <b style="color:rgb(71, 71, 71)">“The Lottery Ticket Hypothesis for Pre-trained BERT Networks”</b><br> Advances in Neural Information Processing Systems (NeurIPS), 2020. <a href="https://arxiv.org/abs/2007.12223">[Paper]</a> <a href="https://github.com/VITA-Group/BERT-Tickets">[Code]</a></li>
<li>X. Chen*, Z. Wang, S. Tang, and K. Muandet<br> <b style="color:rgb(71, 71, 71)">“MATE: Plugging in Model Awareness to Task Embedding for Meta Learning”</b><br> Advances in Neural Information Processing Systems (NeurIPS), 2020. <a href="https://proceedings.neurips.cc/paper/2020/file/8989e07fc124e7a9bcbdebcc8ace2bc0-Paper.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/MATE">[Code]</a></li>
<li>Z. Jiang*, T. Chen*, T. Chen, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Robust Pre-Training by Adversarial Contrastive Learning”</b><br> Advances in Neural Information Processing Systems (NeurIPS), 2020. <a href="https://arxiv.org/abs/2010.13337">[Paper]</a> <a href="https://github.com/VITA-Group/ACL_Neurips20">[Code]</a></li>
<li>Y. You*, T. Chen*, Y. Sui, T. Chen, Z. Wang, and Y. Shen<br> <b style="color:rgb(71, 71, 71)">“Graph Contrastive Learning with Augmentations”</b><br> Advances in Neural Information Processing Systems (NeurIPS), 2020. <a href="https://arxiv.org/abs/2010.13902">[Paper]</a> <a href="https://github.com/VITA-Group/GraphCL">[Code]</a></li>
<li>H. You, X. Chen*, Y. Zhang, C. Li, S. Li, Z. Liu, Z. Wang, and Y. Lin<br> <b style="color:rgb(71, 71, 71)">“ShiftAddNet: A Hardware-Inspired Deep Network”</b><br> Advances in Neural Information Processing Systems (NeurIPS), 2020. <a href="https://arxiv.org/abs/2010.12785">[Paper]</a> <a href="https://github.com/VITA-Group/ShiftAddNet">[Code]</a></li>
<li>Y. Fu, H. You, Y. Zhao, Y. Wang, C. Li, K. Gopalakrishnan, Z. Wang, and Y. Lin<br> <b style="color:rgb(71, 71, 71)">“FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training”</b><br> Advances in Neural Information Processing Systems (NeurIPS), 2020. <a href="https://proceedings.neurips.cc/paper/2020/file/8dc5983b8c4ef1d8fcd5f325f9a65511-Paper.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/FracTrain">[Code]</a></li>
<li>H. Wang*, S. Gui, H. Yang, J. Liu, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“GAN Slimming: All-in-One GAN Compression by A Unified Optimization Framework”</b><br> European Conference on Computer Vision (ECCV), 2020. (Spotlight) <a href="https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123490052.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/GAN-Slimming">[Code]</a></li>
<li>S. Yang*, Z. Wang, J. Liu, and Z. Guo<br> <b style="color:rgb(71, 71, 71)">“Deep Plastic Surgery: Robust and Controllable Image Editing with Human-Drawn Sketches”</b><br> European Conference on Computer Vision (ECCV), 2020. <a href="https://arxiv.org/abs/2001.02890">[Paper]</a> <a href="https://github.com/VITA-Group/DeepPS">[Code]</a></li>
<li>C. Li, T. Chen*, H. You, Z. Wang, and Y. Lin<br> <b style="color:rgb(71, 71, 71)">“HALO: Hardware-Aware Learning to Optimize”</b><br> European Conference on Computer Vision (ECCV), 2020. <a href="http://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123540477.pdf">[Paper]</a> <a href="https://github.com/RICE-EIC/HALO">[Code]</a></li>
<li>Z. Huo, A. PakBin, X. Chen*, N. Hurley, Y. Yuan*, X. Qian, Z. Wang, S. Huang, and B. Mortazavi<br> <b style="color:rgb(71, 71, 71)">“Uncertainty Quantification for Deep Context-Aware Mobile Activity Recognition and Unknown Context Discovery”</b><br>International Conference on Artificial Intelligence and Statistics (AISTATS), 2020. <a href="https://arxiv.org/abs/2003.01753">[Paper]</a> [Code]</li>
<li>W. Chen*, Z. Yu, Z. Wang, and A. Anandkumar<br> <b style="color:rgb(71, 71, 71)">“Automated Synthetic-to-Real Generalization”</b><br> International Conference on Machine Learning (ICML), 2020. <a href="http://proceedings.mlr.press/v119/chen20x/chen20x.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/ASG">[Code]</a></li>
<li>X. Chen*, W. Chen*, T. Chen*, Y. Yuan*, C. Gong, K. Chen, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Self-PU: Self Boosted and Calibrated Positive-Unlabeled Training”</b><br> International Conference on Machine Learning (ICML), 2020. <a href="https://arxiv.org/abs/2006.11280">[Paper]</a> <a href="https://github.com/TAMU-VITA/Self-PU">[Code]</a></li>
<li>Y. You*, T. Chen*, Z. Wang, and Y. Shen <br> <b style="color:rgb(71, 71, 71)">“When Does Self-Supervision Help Graph Convolutional Networks?”</b><br> International Conference on Machine Learning (ICML), 2020. <a href="https://arxiv.org/abs/2006.09136">[Paper]</a> <a href="https://github.com/Shen-Lab/SS-GCNs">[Code]</a></li>
<li>R. Oftadeh, J. Shen*, Z. Wang, and D. Shell<br> <b style="color:rgb(71, 71, 71)">“Eliminating the Invariance on the Loss Landscape of Linear Autoencoders”</b><br> International Conference on Machine Learning (ICML), 2020. <a href="http://proceedings.mlr.press/v119/oftadeh20a/oftadeh20a.pdf">[Paper]</a> [Code]</li>
<li>Y. Fu, W. Chen*, H. Wang*, H. Li, Y. Lin, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks”</b><br> International Conference on Machine Learning (ICML), 2020. <a href="https://arxiv.org/abs/2006.08198">[Paper]</a> <a href="https://github.com/TAMU-VITA/AGD">[Code]</a></li>
<li>R. Ardywibowo, S. Boluki, X. Gong*, Z. Wang, and X. Qian<br> <b style="color:rgb(71, 71, 71)">“NADS: Neural Architecture Distribution Search for Uncertainty Awareness”</b><br> International Conference on Machine Learning (ICML), 2020. <a href="https://arxiv.org/abs/2006.06646">[Paper]</a> <a href="https://github.com/ardywibowo/NADS">[Code]</a></li>
<li>Y. Zhao, X. Chen*, Y. Wang, C. Li, Y. Xie, Z. Wang, and Y. Lin<br> <b style="color:rgb(71, 71, 71)">“SmartExchange: Trading Higher-cost Memory Storage/Access for Lower-cost Computation”</b><br> IEEE/ACM International Symposium on Computer Architecture (ISCA), 2020. <a href="https://arxiv.org/abs/2005.03403">[Paper]</a> [Code]</li>
<li>T. Chen*, S. Liu, S. Chang, Y. Cheng, L. Amini, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Adversarial Robustness: From Self-Supervised Pretraining to Fine-Tuning”</b><br> IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. <a href="https://openaccess.thecvf.com/content_CVPR_2020/papers/Chen_Adversarial_Robustness_From_Self-Supervised_Pre-Training_to_Fine-Tuning_CVPR_2020_paper.pdf">[Paper]</a> <a href="https://github.com/TAMU-VITA/Adv-SS-Pretraining">[Code]</a></li>
<li>Z. Jiang*, B. Liu, S. Schulter, Z. Wang, and M. Chandraker<br> <b style="color:rgb(71, 71, 71)">“Peek-a-boo: Occlusion Reasoning in Indoor Scenes with Plane Representations”</b><br> IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. (Oral) <a href="https://openaccess.thecvf.com/content_CVPR_2020/papers/Jiang_Peek-a-Boo_Occlusion_Reasoning_in_Indoor_Scenes_With_Plane_Representations_CVPR_2020_paper.pdf">[Paper]</a> [Code]</li>
<li>Y. You*, T. Chen*, Z. Wang, and Y. Shen<br> <b style="color:rgb(71, 71, 71)">“L<sup>2</sup>-GCN: Layer-Wise and Learned Efficient Training of Graph Convolutional Networks”</b><br> IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. <a href="https://arxiv.org/abs/2003.13606">[Paper]</a> <a href="https://github.com/TAMU-VITA/L2-GCN">[Code]</a></li>
<li>T. Hu*, T. Chen*, H. Wang*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference”</b><br> International Conference on Learning Representations (ICLR), 2020. <a href="https://openreview.net/forum?id=rJgzzJHtDB">[Paper]</a> <a href="https://github.com/TAMU-VITA/triple-wins">[Code]</a></li>
<li>W. Chen*, X. Gong*, X. Liu, Q. Zhang, Y. Li, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“FasterSeg: Searching for Faster Real-time Semantic Segmentation”</b><br> International Conference on Learning Representations (ICLR), 2020. <a href="https://openreview.net/forum?id=BJgqQ6NYvB">[Paper]</a> <a href="https://github.com/TAMU-VITA/FasterSeg">[Code]</a></li>
<li>H. Wang*, T. Chen*, Z. Wang, and K. Ma<br> <b style="color:rgb(71, 71, 71)">“I am Going MAD: Maximum Discrepancy Competition for Comparing Classifiers Adaptively”</b><br> International Conference on Learning Representations (ICLR), 2020. <a href="https://openreview.net/forum?id=rJehNT4YPr">[Paper]</a> <a href="https://github.com/TAMU-VITA/MAD">[Code]</a></li>
<li>H. You, C. Li, P. Xu, Y. Fu, Y. Wang, X. Chen*, R. Baraniuk, Z. Wang, and Y. Lin<br> <b style="color:rgb(71, 71, 71)">“Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks"</b><br> International Conference on Learning Representations (ICLR), 2020. (Spotlight) <a href="https://openreview.net/forum?id=BJxsrgStvr">[Paper]</a> <a href="https://github.com/RICE-EIC/Early-Bird-Tickets">[Code]</a></li>
<li>J. Shen*, Y. Wang*, P. Xu, Y. Fu, Z. Wang, and Y. Lin<br> <b style="color:rgb(71, 71, 71)">“Fractional Skipping: Toward Finer-Grained Dynamic Inference”</b><br> AAAI Conference on Artificial Intelligence (AAAI), 2020. <a href="https://arxiv.org/abs/2001.00705">[Paper]</a> <a href="https://github.com/VITA-Group/DFS">[Code]</a></li>
<li>S. Mohseni*, M. Pitale, J. Yadawa, and Z. Wang<br> <b style="color:rgb(71, 71, 71)"> “Self-Supervised Learning for Generalizable Out-of-Distribution Detection”</b><br> AAAI Conference on Artificial Intelligence (AAAI), 2020. <a href="http://people.tamu.edu/~sina.mohseni/papers/Self_Supervised_Learning_for_Generalizable_Out_of_Distribution_Detection.pdf">[Paper]</a> [Code]</li>
<li>Z. Jiang*, Y. Wang*, X. Chen*, P. Xu, Y. Zhao, Y. Lin, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“E<sup>2</sup>-Train: Training State-of-the-art CNNs with Over 80% Energy Savings”</b><br> Advances in Neural Information Processing Systems (NeurIPS), 2019. <a href="https://arxiv.org/abs/1910.13349">[Paper]</a> <a href="https://github.com/RICE-EIC/E2Train">[Code]</a></li>
<li>S. Gui, H. Wang*, H. Yang, C. Yu, Z. Wang, and J. Liu<br> <b style="color:rgb(71, 71, 71)">“Model Compression with Adversarial Robustness: A Unified Optimization Framework”</b><br> Advances in Neural Information Processing Systems (NeurIPS), 2019. <a href="https://arxiv.org/abs/1902.03538">[Paper]</a> <a href="https://github.com/TAMU-VITA/ATMC">[Code]</a></li>
<li>Y. Cao, T. Chen*, Z. Wang, and Y. Shen<br> <b style="color:rgb(71, 71, 71)">“Learning to Optimize in Swarms”</b><br> Advances in Neural Information Processing Systems (NeurIPS), 2019. <a href="https://arxiv.org/abs/1911.03787">[Paper]</a> <a href="https://github.com/Shen-Lab/LOIS">[Code]</a></li>
<li>X. Jia, S. Wang*, X. Liang, A. Balagopal, D. Nguyen, M. Yang, Z. Wang, X. Qian, X. Ji, and S. Jiang<br> <b style="color:rgb(71, 71, 71)">“Cone-Beam Computed Tomography (CBCT) Segmentation by Adversarial Learning Domain Adaptation”</b><br> Medical Image Computing and Computer Assisted Interventions (MICCAI), 2019. <a href="https://link.springer.com/chapter/10.1007/978-3-030-32226-7_63">[Paper]</a> [Code]</li>
<li>R. Ardywibowo, G. Zhao, Z. Wang, B. Mortazavi, S. Huang, and X. Qian<br> <b style="color:rgb(71, 71, 71)">“Activity Monitoring with Uncertainty Quantification in Switching Gaussian Process Models”</b><br> International Conference on Artificial Intelligence and Statistics (AISTATS), 2019. <a href="http://proceedings.mlr.press/v89/ardywibowo19a.html">[Paper]</a> [Code]</li>
<li>S. Yang*, Z. Wang, Z. Wang, N. Xu, J. Liu, and Z. Guo<br> <b style="color:rgb(71, 71, 71)">“Controllable Artistic Text Style Transfer via Shape-Matching GAN”</b><br> IEEE International Conference on Computer Vision (ICCV), 2019. (Oral) <a href="https://arxiv.org/abs/1905.01354">[Paper]</a> <a href="https://github.com/TAMU-VITA/ShapeMatchingGAN">[Code]</a></li>
<li>Z. Wu*, K. Suresh, P. Narayanan, H. Xu, H. Kwon, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Delving into Robust Object Detection from Unmanned Aerial Vehicles: A Deep Nuisance Disentanglement Approach”</b><br> IEEE International Conference on Computer Vision (ICCV), 2019. <a href="https://arxiv.org/abs/1908.03856">[Paper]</a> <a href="https://github.com/TAMU-VITA/UAV-NDFT">[Code]</a></li>
<li>X. Gong*, S. Chang, Y. Jiang*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“AutoGAN: Neural Architecture Search for Generative Adversarial Networks”</b><br> IEEE International Conference on Computer Vision (ICCV), 2019. <a href="https://arxiv.org/abs/1908.03835">[Paper]</a> <a href="https://github.com/TAMU-VITA/AutoGAN">[Code]</a></li>
<li>T. Chen*, S. Ding, J. Xie, Y. Yuan*, W. Chen*, Y. Yang, Z. Ren, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“ABD-Net: Attentive but Diverse Person Re-Identification”</b><br> IEEE International Conference on Computer Vision (ICCV), 2019. <a href="https://arxiv.org/abs/1908.01114">[Paper]</a> <a href="https://github.com/TAMU-VITA/ABD-Net">[Code]</a></li>
<li>O. Kupyn, T. Martyniuk, J. Wu*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better”</b><br> IEEE International Conference on Computer Vision (ICCV), 2019. <a href="https://arxiv.org/abs/1908.03826">[Paper]</a> <a href="https://github.com/TAMU-VITA/DeblurGANv2">[Code]</a></li>
<li>E. Ryu, J. Liu, S. Wang*, X. Chen*, Z. Wang, and W. Yin<br> <b style="color:rgb(71, 71, 71)">“Plug-and-Play Methods Provably Converge with Properly Trained Denoisers”</b><br> International Conference on Machine Learning (ICML), 2019. <a href="https://arxiv.org/abs/1905.05406">[Paper]</a> <a href="https://github.com/TAMU-VITA/Provable_Plug_and_Play">[Code]</a></li>
<li>W. Chen*, Z. Jiang*, Z. Wang, K. Cui, and X. Qian<br> <b style="color:rgb(71, 71, 71)">“Collaborative Global-Local Networks for Memory-Efficient Segmentation of Ultra-high Resolution Images”</b><br> IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. (Oral) <a href="https://arxiv.org/abs/1905.06368">[Paper]</a> <a href="https://github.com/TAMU-VITA/GLNet">[Code]</a></li>
<li>S. Li, I. B. Araujo*, W. Ren, Z. Wang, E. K. Tokuda*, R. Hirata, R. Cesar, J. Zhang, X. Guo, and X. Cao<br> <b style="color:rgb(71, 71, 71)">“Single Image Deraining: A Comprehensive Benchmark Analysis”</b><br> IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. <a href="https://arxiv.org/abs/1903.08558">[Paper]</a> <a href="https://github.com/lsy17096535/Single-Image-Deraining">[Code]</a></li>
<li>J. Liu, X. Chen*, Z. Wang, and W. Yin<br> <b style="color:rgb(71, 71, 71)">“ALISTA: Analytic Weights Are As Good As Learned Weights in LISTA”</b><br> International Conference on Learning Representations (ICLR), 2019. <a href="https://openreview.net/forum?id=B1lnzn0ctQ">[Paper]</a> <a href="https://github.com/TAMU-VITA/ALISTA">[Code]</a></li>
<li>X. Chen*, J. Liu, Z. Wang, and W. Yin<br> <b style="color:rgb(71, 71, 71)">“Theoretical Linear Convergence of Unfolded ISTA and Its Practical Weights and Thresholds”</b><br> Advances in Neural Information Processing Systems (NeurIPS), 2018. (Spotlight) <a href="http://papers.nips.cc/paper/8120-theoretical-linear-convergence-of-unfolded-ista-and-its-practical-weights-and-thresholds">[Paper]</a> <a href="https://github.com/TAMU-VITA/LISTA-CPSS">[Code]</a></li>
<li>N. Bansal*, X. Chen*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Can We Gain More from Orthogonality Regularizations in Training Deep CNNs?”</b><br> Advances in Neural Information Processing Systems (NeurIPS), 2018. <a href="http://papers.nips.cc/paper/7680-can-we-gain-more-from-orthogonality-regularizations-in-training-deep-networks">[Paper]</a> <a href="https://github.com/TAMU-VITA/Orthogonality-in-CNNs">[Code]</a></li>
<li>Z. Wu*, Z. Wang, Z. Wang, and H. Jin<br> <b style="color:rgb(71, 71, 71)">“Towards Privacy-Preserving Visual Recognition via Adversarial Training: A Pilot Study”</b><br> European Conference on Computer Vision (ECCV), 2018. <a href="https://openaccess.thecvf.com/content_ECCV_2018/html/Zhenyu_Wu_Towards_Privacy-Preserving_Visual_ECCV_2018_paper.html">[Paper]</a> <a href="https://github.com/TAMU-VITA/Privacy-AdversarialLearning">[Code]</a></li>
<li>M. Sun, I. Baytas, L. Zhan, Z. Wang, and J. Zhou<br> <b style="color:rgb(71, 71, 71)">“Subspace Network: Deep Multi-Task Censored Regression for Modeling Neurodegenerative Diseases”</b><br> ACM Conference on Knowledge Discovery and Data Mining (KDD), 2018. <a href="https://dl.acm.org/doi/abs/10.1145/3219819.3219966">[Paper]</a> <a href="https://github.com/illidanlab/subspace-net">[Code]</a></li>
<li>J. Wu*, Y. Wang*, Z. Wu*, Z. Wang, A. Veeraraghavan, and Y. Lin<br> <b style="color:rgb(71, 71, 71)">“Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions”</b><br> International Conference on Machine Learning (ICML), 2018. <a href="http://proceedings.mlr.press/v80/wu18h.html">[Paper]</a> <a href="https://github.com/TAMU-VITA/Deep-K-Means-pytorch">[Code]</a></li>
</ul>
</div>
</div>
</div>
</div>
<!-- END section -->
<div class="footer">
<div class="container">
<div class="row">
<div class="col-12">
<div class="copyright">
<p>
<!-- Link back to Colorlib can't be removed. Template is licensed under CC BY 3.0. -->
Copyright ©<script>document.write(new Date().getFullYear());</script>
All rights reserved | Built upon <a
href="https://colorlib.com" target="_blank">Colorlib</a>
<!-- Link back to Colorlib can't be removed. Template is licensed under CC BY 3.0. -->
</p>
</div>
</div>
</div>
</div>
</div>
</div>
<!-- .site-wrap -->
<!-- loader -->
<!-- <div id="loader" class="show fullscreen">
<svg class="circular" width="48px" height="48px">
<circle class="path-bg" cx="24" cy="24" r="22" fill="none" stroke-width="4" stroke="#eeeeee"/>
<circle class="path" cx="24" cy="24" r="22" fill="none" stroke-width="4" stroke-miterlimit="10"
stroke="#ff5e15"/>
</svg>
</div> -->
<script src="js/jquery-3.3.1.min.js"></script>
<script src="js/jquery-migrate-3.0.1.min.js"></script>
<script src="js/jquery-ui.js"></script>
<script src="js/popper.min.js"></script>
<script src="js/bootstrap.min.js"></script>
<script src="js/owl.carousel.min.js"></script>
<script src="js/jquery.stellar.min.js"></script>
<script src="js/jquery.countdown.min.js"></script>
<script src="js/bootstrap-datepicker.min.js"></script>
<script src="js/jquery.easing.1.3.js"></script>
<script src="js/aos.js"></script>
<script src="js/jquery.fancybox.min.js"></script>
<script src="js/jquery.sticky.js"></script>
<script src="js/jquery.mb.YTPlayer.min.js"></script>
<script src="js/main.js"></script>
</body>
</html>