<!DOCTYPE HTML>
<html lang="en"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Jeff Liang</title>
<meta name="author" content="Feng (Jeff) Liang">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" type="text/css" href="stylesheet.css">
<link rel="icon" type="image/png" href="images/ut_icon.png">
</head>
<body>
<table style="width:100%;max-width:800px;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:0px">
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:2.5%;width:63%;vertical-align:middle">
<p style="text-align:center">
<name>Feng (Jeff) Liang, 梁丰 </name>
</p>
<p>
I am a Research Scientist at Meta AI. I obtained my PhD from <a href="https://www.utexas.edu/">UT Austin</a>, advised by <a href="https://www.ece.utexas.edu/people/faculty/diana-marculescu">Prof. Diana Marculescu</a>.
Prior to that, I completed my master's at <a href="https://www.tsinghua.edu.cn/en/index.htm">Tsinghua University</a> and my bachelor's at <a href="http://english.hust.edu.cn/index.htm">Huazhong University of Science and Technology</a>.
</p>
<p>
I’m interested in building efficient, personalized, and creative AI for everyone.
My key research directions include:
</p>
<ul class="research-list">
<li><strong>Efficient AI:</strong> Scalable learning/inference under data, compute, and energy constraints.</li>
<li><strong>Personalized AI:</strong> Developing privacy-preserving, user- and context-adaptive models.</li>
<li><strong>AI for Creativity:</strong> Empowering human creativity via controllable multimodal generation and understanding.</li>
</ul>
<p style="text-align:center">
<a href="mailto:jeffliang@utexas.edu">Email</a>  / 
<a href="data/CV_JeffLiang.pdf">CV</a>  / 
<a href="https://scholar.google.com/citations?user=ecTFCUMAAAAJ&hl=en">Google Scholar</a>  / 
<a href="https://www.linkedin.com/in/feng-liang-854a30150/">Linkedin</a>  / 
<a href="https://www.zhihu.com/people/liang-feng-53">Zhihu</a>  / 
<a href="https://twitter.com/LiangJeff95">Twitter</a>
</p>
</td>
<td style="padding:2.5%;width:40%;max-width:40%">
<a href="images/jeff_phd.jpg"><img style="width:80%;max-width:80%" alt="profile photo" src="images/jeff_phd.jpg" class="hoverZoomLink"></a>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:20px;width:100%;vertical-align:middle">
<heading>News</heading>
<div style="width:100%;overflow-y:scroll; height:230px;">
<ul>
<li style="line-height:30px"> <b>May 2025:</b> Checkout our <a href="https://arxiv.org/abs/2505.18521">Improved Immiscible Diffusion</a>, speeds up diffusion training 4×+ across models and tasks!</li>
<li style="line-height:30px"> <b>April 2025:</b> I started my full-time as a Research Scientist at Meta!</li>
<li style="line-height:30px"> <b>February 2025:</b> One paper (<a href="https://jeff-liangf.github.io/projects/movieweaver/">Movie Weaver</a>) gets accepted to CVPR 2025!</li>
<li style="line-height:30px"> <b>January 2025:</b> One paper (<a href="https://jeff-liangf.github.io/projects/streamv2v/">StreamV2V</a>) gets accepted to ICLR 2025!</li>
<li style="line-height:30px"> <b>December 2024:</b> Checkout <a href="https://jeff-liangf.github.io/projects/movieweaver/">Movie Weaver</a>, extending MovieGen to multi-concept personalization!</li>
<li style="line-height:30px"> <b>October 2024:</b> Checkout <a href="https://ai.meta.com/research/movie-gen/">MovieGen</a>. Super excited to work with the team to push the boundary of (personalized) video generation!</li>
<li style="line-height:30px"> <b>May 2024:</b> Checkout our <a href="https://jeff-liangf.github.io/projects/streamv2v/">StreamV2V</a> with <a href="https://github.com/Jeff-LiangF/streamv2v">code&demo</a>!</li>
<li style="line-height:30px"> <b>May 2024:</b> Honored to have been chosen as <a href="https://mlcommons.org/2024/06/2024-mlc-rising-stars/">2024 MLCommons ML and Systems Rising Stars</a>!</li>
<li style="line-height:30px"> <b>February 2024:</b> Two papers (<a href="https://jeff-liangf.github.io/projects/flowvid/">FlowVid</a> and <a href="https://fairy-video2video.github.io/">Fairy</a>) get accepted to CVPR 2024!</li>
<li style="line-height:30px"> <b>January 2024:</b> After being rejected four times, <a href="https://arxiv.org/abs/2205.14540">Supervised MAE (SupMAE)</a> finally gets accepted in AAAI Edge Intelligence Workshop (EIW) 2024 with <b>Best Poster Award</b>!</li>
<li style="line-height:30px"> <b>December 2023:</b> Checkout our video-to-video synthesis work <a href="https://jeff-liangf.github.io/projects/flowvid/">FlowVid</a> and instruction-based <a href="https://fairy-video2video.github.io/">Fairy</a>!</li>
<li style="line-height:30px"> <b>March 2023:</b> I will intern at Meta Gen AI this summer, fortunate to work with <a href="https://scholar.google.com/citations?user=K3QJPdMAAAAJ&hl=en">Dr. Bichen Wu</a>, again!</li>
<li style="line-height:30px"> <b>February 2023:</b> One paper gets accepted to CVPR 2023!</li>
<li style="line-height:30px"> <b>November 2022:</b> Checkout our <a href="https://arxiv.org/abs/2210.04150">Open-vocabulary Segmentation (OVSeg)</a> with <a href="https://github.com/facebookresearch/ov-seg">codes</a> and <a href="https://huggingface.co/spaces/facebook/ov-seg">demo</a>!</li>
<li style="line-height:30px"> <b>August 2022:</b> Checkout our <a href="https://arxiv.org/abs/2205.14540">Supervised MAE (SupMAE)</a> with <a href="https://github.com/enyac-group/supmae">codes&models</a>!</li>
<li style="line-height:30px"> <b>June 2022:</b> Three papers get accepted to ICML workshops 2022!</li>
<li style="line-height:30px"> <b>April 2022:</b> One paper gets accepted to IJCAI 2022 as <b>long oral</b>!</li>
<li style="line-height:30px"> <b>March 2022:</b> One paper gets accepted to CVPRW ECV 2022!</li>
<li style="line-height:30px"> <b>February 2022:</b> I will intern at <a href="https://about.facebook.com/realitylabs/">Meta Reality Labs</a> this summer, fortunate to work with <a href="https://scholar.google.com/citations?user=K3QJPdMAAAAJ&hl=en">Dr. Bichen Wu</a>!</li>
<li style="line-height:30px"> <b>January 2022:</b> One paper gets accepted to ICLR 2022!</li>
<li style="line-height:30px"> <b>October 2021:</b> Checkout our <a href="https://arxiv.org/abs/2110.05208">Data efficient CLIP (DeCLIP)</a> with <a href="https://github.com/Sense-GVT/DeCLIP">codes&models</a>!</li>
<li style="line-height:30px"> <b>July 2021:</b> One paper gets accepted to ICCV 2021!</li>
<li style="line-height:30px"> <b>April 2021:</b> I am granted UT Austin Engineering Fellowship!</li>
<ul>
</div>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:100%;vertical-align:middle">
<heading>Selected Publications</heading>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/movie_weaver.png" alt=" movie_weaver" width="280" height="160" style="border-style: none">
</td>
<td width="75%" valign="middle">
<a href="">
<papertitle>Movie Weaver: Tuning-Free Multi-Concept Video Personalization with Anchored Prompts</papertitle>
</a>
<br>
<strong>Feng Liang</strong>,
<a href="https://howiema.github.io/">Haoyu Ma</a>,
<a href="https://zechenghe.github.io/">Zecheng He</a>,
<a href="https://linkedin.com/in/tingbo-hou/">Tingbo Hou</a>,
<a href="https://sekunde.github.io/">Ji Hou</a>,
<a href="https://kunpengli1994.github.io/">Kunpeng Li</a>,
<a href="https://scholar.google.com/citations?user=u4olrOcAAAAJ&hl=en">Xiaoliang Dai</a>,
<a href="https://xujuefei.com/">Felix Juefei-Xu</a>,
<a href="https://scholar.google.com/citations?user=X0EXfT8AAAAJ&hl=en">Samaneh Azadi</a>,
<a href="https://www.linkedin.com/in/animeshsinha11/">Animesh Sinha</a>,
<a href="https://www.linkedin.com/in/peizhao-zhang-14846042/">Peizhao Zhang</a>,
<a href="https://sites.google.com/site/vajdap">Peter Vajda</a>,
<a href="https://www.ece.utexas.edu/people/faculty/diana-marculescu">Diana Marculescu</a>
<br>
<em>CVPR</em>, 2025
<br>
<a href="https://jeff-liangf.github.io/projects/movieweaver/">project page</a>,
<a href="https://arxiv.org/abs/2502.07802">arxiv</a>,
<!-- <a href="">code</a>, -->
<!-- <a href="">Huggingface demo</a>, -->
<p></p>
<p>We present Movie Weaver to support multi-concept video personalization.</p>
</td>
</tr>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/streamv2v.png" alt="streamv2v" width="280" height="160" style="border-style: none">
</td>
<td width="75%" valign="middle">
<a href="https://arxiv.org/abs/2405.15757">
<papertitle>Looking Backward: Streaming Video-to-Video Translation with Feature Banks</papertitle>
</a>
<br>
<strong>Feng Liang</strong>,
<a href="https://scholar.google.co.jp/citations?user=15X3cioAAAAJ&hl=ja">Akio Kodaira</a>,
<a href="https://www.chenfengx.com/">Chenfeng Xu</a>,
<a href="https://me.berkeley.edu/people/masayoshi-tomizuka/">Masayoshi Tomizuka</a>,
<a href="https://people.eecs.berkeley.edu/~keutzer/">Kurt Keutzer</a>,
<a href="https://www.ece.utexas.edu/people/faculty/diana-marculescu">Diana Marculescu</a>
<br>
<em>ICLR</em>, 2025
<br>
<a href="https://jeff-liangf.github.io/projects/streamv2v/">project page</a>,
<a href="https://arxiv.org/abs/2405.15757">arxiv</a>,
<a href="https://github.com/Jeff-LiangF/streamv2v">code</a>,
<a href="https://huggingface.co/spaces/JeffLiang/streamv2v">Huggingface demo</a>,
<!-- <a href="https://www.youtube.com/watch?v=y5IlgGl8Y24">5min video</a>, -->
<a href="https://www.youtube.com/watch?v=uLXtpFVrtP4">Talk at Realtime Video AI Summit 2025</a>,
<p></p>
<p>We present StreamV2V to support real-time video-to-video translation for streaming input.</p>
</td>
</tr>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/flowvid.png" alt="ovseg" width="280" height="160" style="border-style: none">
</td>
<td width="75%" valign="middle">
<a href="https://arxiv.org/abs/2312.17681">
<papertitle>FlowVid: Taming Imperfect Optical Flows for Consistent Video-to-Video Synthesis</papertitle>
</a>
<br>
<strong>Feng Liang</strong>,
<a href="https://www.linkedin.com/in/bichenwu">Bichen Wu</a>,
<a href="https://scholar.google.com/citations?user=R8vOkZYAAAAJ&hl=en">Jialiang Wang</a>,
<a href="https://lichengunc.github.io/">Licheng Yu</a>,
<a href="https://kunpengli1994.github.io/">Kunpeng Li</a>,
<a href="https://yinan-zhao.github.io/">Yinan Zhao</a>,
<a href="https://imisra.github.io/">Ishan Misra</a>,
<a href="https://jbhuang0604.github.io/">Jia-Bin Huang</a>,
<a href="https://www.linkedin.com/in/peizhao-zhang-14846042/">Peizhao Zhang</a>,
<a href="https://sites.google.com/site/vajdap">Peter Vajda</a>,
<a href="https://www.ece.utexas.edu/people/faculty/diana-marculescu">Diana Marculescu</a>
<br>
<em>CVPR</em>, 2024, <b>Highlight</b>
<br>
<a href="https://jeff-liangf.github.io/projects/flowvid/">project page</a>,
<a href="https://arxiv.org/abs/2312.17681">arxiv</a>,
<!--<a href="https://github.com/facebookresearch/ov-seg">videos</a>,-->
<!--<a href="https://huggingface.co/spaces/facebook/ov-seg">Huggingface demo</a>,-->
<a href="https://www.youtube.com/watch?v=y5IlgGl8Y24">5min video</a>,
<!--<a href="https://wqpoq.h5.xeknow.com/sl/2sFndr">1hour talk (chinese)</a>,-->
<p></p>
<p>We leverage temporal optical flow cues within videos to enhance temporal consistency for text-guided video-to-video synthesis.</p>
</td>
</tr>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/ovseg.png" alt="ovseg" width="280" height="160" style="border-style: none">
</td>
<td width="75%" valign="middle">
<a href="https://arxiv.org/abs/2210.04150">
<papertitle>Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP</papertitle>
</a>
<br>
<strong>Feng Liang</strong>,
<a href="https://www.linkedin.com/in/bichenwu">Bichen Wu</a>,
<a href="https://sites.google.com/view/xiaoliangdai/">Xiaoliang Dai</a>,
<a href="https://kunpengli1994.github.io/">Kunpeng Li</a>,
<a href="https://yinan-zhao.github.io/">Yinan Zhao</a>,
<a href="https://hangzhang.org/">Hang Zhang</a>,
<a href="https://www.linkedin.com/in/peizhao-zhang-14846042/">Peizhao Zhang</a>,
<a href="https://sites.google.com/site/vajdap">Peter Vajda</a>,
<a href="https://www.ece.utexas.edu/people/faculty/diana-marculescu">Diana Marculescu</a>
<br>
<em>CVPR</em>, 2023
<br>
<a href="https://jeff-liangf.github.io/projects/ovseg/">project page</a>,
<a href="https://arxiv.org/abs/2210.04150">arxiv</a>,
<a href="https://github.com/facebookresearch/ov-seg">code</a>,
<a href="https://huggingface.co/spaces/facebook/ov-seg">Huggingface demo</a>,
<a href="https://www.youtube.com/watch?v=xIUSG0pLNyo">7min video</a>,
<a href="https://wqpoq.h5.xeknow.com/sl/2sFndr">1hour talk (chinese)</a>,
<p></p>
<p>For the first time, we show that open-vocabulary generalist models can match the performance of supervised specialist models without dataset-specific adaptations.</p>
</td>
</tr>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/supmae.png" alt="supmae" width="280" height="160" style="border-style: none">
</td>
<td width="75%" valign="middle">
<a href="https://arxiv.org/abs/2205.14540">
<papertitle>SupMAE: Supervised Masked Autoencoders Are Efficient Vision Learners</papertitle>
</a>
<br>
<strong>Feng Liang</strong>,
<a href="https://scholar.google.com/citations?user=a7AMvgkAAAAJ">Yangguang Li</a>,
<a href="https://www.ece.utexas.edu/people/faculty/diana-marculescu">Diana Marculescu</a>
<br>
<em>AAAI EIW</em>, 2024, <b>Best Poster Award</b>
<br>
<a href="https://arxiv.org/abs/2205.14540">arxiv</a>,
<a href="https://github.com/enyac-group/supmae">code</a>,
<a href="./data/supmae_best_poster_award_EIW_2024.pdf">award</a>
<p></p>
<p>SupMAE extends MAE to a fully-supervised setting by adding a supervised classification branch, thereby enabling MAE to effectively learn global features from gold labels.</p>
</td>
</tr>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/declip.png" alt="declip" width="280" height="160" style="border-style: none">
</td>
<td width="75%" valign="middle">
<a href="https://arxiv.org/abs/2110.05208">
<papertitle>Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm</papertitle>
</a>
<br>
<a href="https://yg256li.github.io/">Yangguang Li*</a>,
<strong>Feng Liang*</strong>,
<a href="https://openreview.net/profile?id=~Lichen_Zhao1">Lichen Zhao*</a>,
<a href="">Yufeng Cui</a>,
<a href="https://wlouyang.github.io/">Wanli Ouyang</a>
<a href="https://amandajshao.github.io/">Jing Shao</a>,
<a href="https://forwil.xyz/">Fengwei Yu</a>,
<a href="https://yan-junjie.github.io/">Junjie Yan</a>
<br>
<em>ICLR</em>, 2022
<br>
<a href="https://arxiv.org/abs/2110.05208">arxiv</a>,
<a href="data/declip.bib">bibtex</a>,
<a href="https://github.com/Sense-GVT/DeCLIP">code</a>,
<a href="https://recorder-v3.slideslive.com/#/share?share=62378&s=d93b81b1-d9de-42b7-9437-8acc34fbf94e">video presentation</a>
<p></p>
<p>We propose Data efficient CLIP (DeCLIP), a method to train CLIP efficiently by utilizing the widespread supervision within image-text data.</p>
</td>
</tr>
</tbody></table>
<table width="100%" align="center" border="0" cellspacing="0" cellpadding="20"><tbody>
<tr>
<td>
<heading>Selected Honors</heading>
<ul>
<li style="line-height:30px"> MLCommons ML and Systems Rising Stars by MLCommons 2024.</li>
<li style="line-height:30px"> Qualcomm Innovation Fellowship Finalist by Qualcomm 2024.</li>
<li style="line-height:30px"> UT Austin Engineering Fellowship by UT Austin, 2021 & 2023.</li>
<li style="line-height:30px"> Excellent Student Leader by Tsinghua University, 2018.</li>
<li style="line-height:30px"> National Scholarship by Ministry of Education of China, 2014 & 2015.</li>
</td>
</tr>
</tbody></table>
<!-- Mentoring -->
<table width="100%" align="center" border="0" cellspacing="0" cellpadding="20"><tbody>
<tr>
<td>
<heading>Mentoring</heading>
<p style="margin: 8px 0 6px;">
I’m fortunate to have worked with these talented students and collaborators:
</p>
<ul>
<li style="line-height:30px">
<a href="https://scholar.google.com/citations?hl=en&user=W6CZltIAAAAJ&view_op=list_works&sortby=pubdate" target="_blank" rel="noopener"><b>Yang Zhou</b></a>
— Undergraduate @ UT Austin → Ph.D. student @ CMU ECE (current).
</li>
<li style="line-height:30px">
<a href="https://graceyekim.github.io/" target="_blank" rel="noopener"><b>Grace Kim</b></a>
— Undergraduate @ UT Austin → Ph.D. student @ UPenn CIS (current).
</li>
<li style="line-height:30px">
<a href="https://scholar.google.com/citations?user=pudYRAUAAAAJ&hl=en" target="_blank" rel="noopener"><b>Dennis Menn</b></a>
— Ph.D. student @ UT Austin ECE (current).
</li>
</ul>
</td>
</tr>
</tbody></table>
<table width="100%" align="center" border="0" cellspacing="0" cellpadding="20"><tbody>
<tr>
<td>
<heading>Service</heading>
<ul>
<li style="line-height:30px"> Reviewer of Journals: TPAMI, IJCV, TNNLS</li>
<li style="line-height:30px"> Reviewer of Conferences: CVPR 2023/2024/2025, ICCV 2023, NeurIPS 2023/2024, ICLR 2024/2025, ECCV 2024, ICML 2025</li>
</ul>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:0px">
<br>
<a href="https://clustrmaps.com/site/1bhpp" title="Visit tracker">
<img src="//www.clustrmaps.com/map_v2.png?d=6oa3ivKJIw5Vmqg_fFtgZxTmVsyrTJMJ_XKxZlDEsRI&cl=ffffff">
</a>
<p style="text-align:right;font-size:small;">
Thanks to <a href="https://github.com/jonbarron/jonbarron_website">Jon Barron</a>
</p>
</td>
</tr>
</tbody></table>
</td>
</tr>
</table>
</body>
</html>