Junyao Hu (胡钧耀)
My ambitions are modest, but the road ahead is long.

Last updated on: 2025-01-02 20:00

Welcome

👋 Hi! My name is Junyao Hu (胡钧耀).

🔍 My research interests include deep learning and computer vision, particularly focusing on generative AI for creativity.

🥰 Please feel free to make any suggestions. Any questions and inquiries about my work and student life are welcome. You can contact me in the following ways.

  • GitHub (JunyaoHu)
  • Email (hujunyao0329 AT gmail DOT com)
  • WeChat (ID: LittleDream_hjy; the QR code appears when you hover over the WeChat icon)

📃 More details:

News

2024-09-09 ✒️ Study I quit my PhD and went back to my hometown, Enshi, to take a gap year. I am trying to think about what I want from life and to find a path that suits me. I would like to express my gratitude to my supervisor and coworkers for their encouragement and guidance throughout this year.

2024-05-05 💼 Activity I attended VALSE 2024 (Chongqing, China).

2024-02-27 😋 Accepted A paper was accepted to CVPR 2024, and I was honored as one of Outstanding Reviewers (top 2%).

2023-09-01 ✒️ Study I started my PhD studies at Nankai University (NKU) under the supervision of Prof. Jufeng Yang.

2023-07-15 ✒️ Study I completed my undergraduate studies at China University of Mining and Technology (CUMT). Thanks to all the teachers and friends around me, and especially to my parents.

2023-06-10 💼 Activity I attended VALSE 2023 (Wuxi, China).

Selected Publications

If you want to view all my publications, click here.

Note: # = Equal Contribution, * = Corresponding Author.

CVPR24 ExtDM: Distribution extrapolation diffusion model for video prediction
Zhicheng Zhang#, Junyao Hu#, Wentao Cheng*, Danda Paudel, Jufeng Yang

TL;DR: We present ExtDM, a new diffusion model that extrapolates video content from current frames by accurately modeling distribution shifts towards future frames.

📘 CVPR 📃 Paper 📃 Chinese Translation 📦 Code ⚒️ Project 📊 Poster 📅 Slides 🎞️ Bilibili 🎞️ YouTube

Details
Abstract: Video prediction is a challenging task due to its inherent uncertainty, especially when forecasting over a long period. To model the temporal dynamics, advanced methods benefit from the recent success of diffusion models and repeatedly refine the predicted future frames with a 3D spatiotemporal U-Net. However, there exists a gap between the present and the future, and the repeated use of the U-Net brings a heavy computational burden. To address this, we propose a diffusion-based video prediction method that predicts future frames by extrapolating the present distribution of features, namely ExtDM. Specifically, our method consists of three components: (i) a motion autoencoder conducts a bijective transformation between video frames and motion cues; (ii) a layered distribution adaptor module extrapolates the present features under the guidance of a Gaussian distribution; (iii) a 3D U-Net architecture specialized for jointly fusing guidance and features along the temporal dimension via spatiotemporal-window attention. Extensive experiments on five popular benchmarks covering short- and long-term video prediction verify the effectiveness of ExtDM.
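To make the three-stage pipeline from the abstract concrete, here is a minimal, purely illustrative sketch of the data flow. All function names, shapes, and stand-in operations (frame differencing for the motion autoencoder, a simple Gaussian fit for the distribution adaptor, plain decoding in place of the 3D U-Net denoiser) are my own assumptions for exposition, not the actual ExtDM implementation.

```python
import numpy as np

def motion_encode(frames):
    # (i) Motion autoencoder: map frames to compact motion cues.
    # Stand-in here: simple frame-to-frame differences.
    return frames[1:] - frames[:-1]

def motion_decode(last_frame, motion):
    # Inverse direction of the bijection: rebuild frames from cues
    # by accumulating the motion onto the last observed frame.
    return last_frame + np.cumsum(motion, axis=0)

def extrapolate_distribution(motion, horizon, rng):
    # (ii) Distribution adaptor: extend the statistics of the present
    # motion cues into the future under a Gaussian assumption.
    mu, sigma = motion.mean(axis=0), motion.std(axis=0)
    return rng.normal(mu, sigma, size=(horizon,) + motion.shape[1:])

def predict(frames, horizon, seed=0):
    # (iii) In the paper, a 3D U-Net with spatiotemporal-window attention
    # denoises the extrapolated features; this sketch simply decodes them.
    rng = np.random.default_rng(seed)
    motion = motion_encode(frames)
    future_motion = extrapolate_distribution(motion, horizon, rng)
    return motion_decode(frames[-1], future_motion)

past = np.random.default_rng(1).normal(size=(4, 8, 8))  # 4 past 8x8 frames
future = predict(past, horizon=3)
print(future.shape)  # (3, 8, 8)
```

The point of the sketch is the division of labor: motion cues are cheaper to extrapolate than raw pixels, and the Gaussian-guided extrapolation bridges the present/future gap before any heavy refinement is applied.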

BibTex
@inproceedings{zhang2024ExtDM,
title={ExtDM: Distribution Extrapolation Diffusion Model for Video Prediction},
author={Zhang, Zhicheng and Hu, Junyao and Cheng, Wentao and Paudel, Danda and Yang, Jufeng},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year={2024}
}

Academic Service

Projects

Reviewer

Teaching Assistant

Community Contributor

  • SmartFlow: Authored articles for the account, delivering comprehensive analyses and interpretations of CVPR Best Papers.
  • Datawhale: Served as a promotion ambassador, promoting team learning activities and competitions to new learners.