Last updated on: 2024-09-16 23:00
Welcome
👋 Hi! My name is Junyao Hu (胡钧耀).
🔍 My research interests include deep learning and computer vision, particularly focusing on generative AI for creativity.
🥰 You can contact me in the following ways: GitHub / Email (hujunyao0329@gmail.com) / WeChat (ID: LittleDream_hjy; the QR code is in the picture above). Please feel free to offer suggestions. Any questions and inquiries about my work and student life are welcome.
🎞️ I run Chinese self-media channels, where I share my research and life experience and bring useful knowledge to everyone. You can find me on Bilibili (@JunyaoHu) and on my WeChat Official Account (Vision AIGC).
📃 More details are shown on my CV (EN/CN) page.
News
2024-09-09 ✒️ Study I went back to my hometown, Enshi, to take a gap year. I am adjusting my working state and looking for a new research position that suits me.
2024-05-05 💼 Activity I attended the VALSE 2024 conference in Chongqing, China.
2024-02-27 😋 Accepted A paper was accepted to CVPR 2024, and I was honored as one of its Outstanding Reviewers.
2023-09-01 ✒️ Study I started my PhD studies at Nankai University (NKU) under the supervision of Prof. Jufeng Yang.
2023-07-15 ✒️ Study I completed my undergraduate studies at China University of Mining and Technology (CUMT). Thanks to all the teachers and friends around me, and especially to my parents!
2023-06-10 💼 Activity I attended the VALSE 2023 conference in Wuxi, China.
Selected Publications
If you want to view all my publications, click here.
Note: # = Equal Contribution, * = Corresponding Author.
CVPR24 ExtDM: Distribution extrapolation diffusion model for video prediction
Zhicheng Zhang#, Junyao Hu#, Wentao Cheng*, Danda Paudel, Jufeng Yang
TL;DR: We present ExtDM, a new diffusion model that extrapolates video content from current frames by accurately modeling distribution shifts towards future frames.
📘 CVPR 📃 Paper 📃 Chinese Translation 📦 Code ⚒️ Project 📊 Poster 📅 Slide 🎞️ Bilibili 🎞️ YouTube
Details
Abstract: Video prediction is a challenging task due to its nature of uncertainty, especially for forecasting a long period. To model the temporal dynamics, advanced methods benefit from the recent success of diffusion models, and repeatedly refine the predicted future frames with 3D spatiotemporal U-Net. However, there exists a gap between the present and the future, and the repeated usage of U-Net brings a heavy computation burden. To address this, we propose a diffusion-based video prediction method that predicts future frames by extrapolating the present distribution of features, namely ExtDM. Specifically, our method consists of three components: (i) a motion autoencoder conducts a bijection transformation between video frames and motion cues; (ii) a layered distribution adaptor module extrapolates the present features in the guidance of Gaussian distribution; (iii) a 3D U-Net architecture specialized for jointly fusing guidance and features among the temporal dimension by spatiotemporal-window attention. Extensive experiments on five popular benchmarks covering short- and long-term video prediction verify the effectiveness of ExtDM.
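For intuition, here is a minimal, self-contained sketch of the three-stage flow the abstract describes (motion autoencoder → distribution extrapolation → 3D U-Net denoising). All class names, layer choices, shapes, and the fixed-step denoising loop are hypothetical placeholders for illustration, not the released ExtDM implementation (see the Code link above for the real one).

```python
# Illustrative sketch only: toy stand-ins for the components named in the abstract.
import torch
import torch.nn as nn

class MotionAutoencoder(nn.Module):
    """Placeholder for the bijective mapping between video frames and motion cues."""
    def __init__(self, dim=64):
        super().__init__()
        self.enc = nn.Conv3d(3, dim, kernel_size=1)   # frames -> motion features
        self.dec = nn.Conv3d(dim, 3, kernel_size=1)   # motion features -> frames
    def encode(self, frames):                         # (B, 3, T, H, W) -> (B, C, T, H, W)
        return self.enc(frames)
    def decode(self, motion):
        return self.dec(motion)

def extrapolate_distribution(past_motion, future_len):
    """Toy stand-in for the layered distribution adaptor: reuse the mean/std of
    past motion features as Gaussian guidance for the future window."""
    mu = past_motion.mean(dim=2, keepdim=True)
    std = past_motion.std(dim=2, keepdim=True)
    noise = torch.randn(past_motion.size(0), past_motion.size(1), future_len,
                        past_motion.size(3), past_motion.size(4))
    return mu + std * noise

class DenoisingUNet3D(nn.Module):
    """Placeholder for the 3D U-Net that fuses guidance and noisy future features."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Conv3d(2 * dim, dim, kernel_size=3, padding=1)
    def forward(self, noisy_future, guidance):
        return self.net(torch.cat([noisy_future, guidance], dim=1))

# Inference: encode past frames, extrapolate guidance, denoise, decode future frames.
ae, unet = MotionAutoencoder(), DenoisingUNet3D()
past = torch.randn(1, 3, 4, 32, 32)                   # 4 observed frames
motion = ae.encode(past)
guidance = extrapolate_distribution(motion, future_len=8)
noisy = torch.randn_like(guidance)
for _ in range(10):                                   # toy fixed-step denoising loop
    noisy = noisy - 0.1 * unet(noisy, guidance)
future_frames = ae.decode(noisy)                      # (1, 3, 8, 32, 32)
```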
😅 I’m still working … I can still learn … Zzz … 😴
Projects
- common_metrics_on_video_quality
  You can easily calculate FVD, PSNR, SSIM, and LPIPS for evaluating the quality of generated or predicted videos (a small usage sketch follows this list).
- Tasks_for_Rookies
  How to get started with computer vision research.
- academic-project-page-template-vue
  A project template powered by Vue (in development).
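As a rough illustration of what common_metrics_on_video_quality automates, the sketch below computes per-frame PSNR and SSIM for a pair of videos with scikit-image. It is not the repository's actual API, and FVD/LPIPS are omitted because they require pretrained networks (I3D, AlexNet/VGG).

```python
# Minimal per-frame PSNR/SSIM for two videos stored as (T, H, W, C) uint8 arrays.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def video_psnr_ssim(pred, target):
    """Return the mean per-frame PSNR and SSIM between predicted and ground-truth videos."""
    psnrs, ssims = [], []
    for p, t in zip(pred, target):  # iterate over frames
        psnrs.append(peak_signal_noise_ratio(t, p, data_range=255))
        ssims.append(structural_similarity(t, p, data_range=255, channel_axis=-1))
    return float(np.mean(psnrs)), float(np.mean(ssims))

# Usage with random stand-in videos (16 frames of 64x64 RGB).
pred = np.random.randint(0, 256, (16, 64, 64, 3), dtype=np.uint8)
target = np.random.randint(0, 256, (16, 64, 64, 3), dtype=np.uint8)
print(video_psnr_ssim(pred, target))
```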
Academic Service
Reviewer
- Conference: CVPR’24 (Outstanding Reviewer), ACMMM’23
- Journal: TMM’23
Other
- Community Contributor
- Teaching Assistant
- High School Student Talent Plan 2024: Guided students through the fundamentals of computer science and assisted them in developing innovative projects.
- Discrete Mathematics 2021: Summarized and reinforced core course topics and exam challenges through concise review sessions and targeted problem-solving examples.