
video (4)
[SORA Explained] - OpenAI's Video Generation AI (technical section translated + explanatory images added) Technical Report: Video generation models as world simulators (openai.com) "We explore large-scale training of generative models on video data. Specifically, we train text-conditional diffusion models jointly on videos and images of variable durations, resolutions and aspect ratios. We leverage a transformer architecture that oper.." SORA: https..
[Tune-A-VideKO] - A Korean-language One-shot Tuning of diffusion for Text-to-Video model. GitHub: https://github.com/KyujinHan/Tune-A-VideKO/tree/master (KyujinHan/Tune-A-VideKO: Korean-language one-shot video tuning with Stable Diffusion) Tune-A-VideKO-v1-5🏄: https://huggingface.co/kyujinpy/Tune-A-VideKO-v1-5 ..
[Tune-A-Video Paper Review] One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation *This is a paper review of Tune-A-Video! If you have any questions, please leave a comment! Tune-A-Video paper: [2212.11565] Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation (arxiv.org) "To replicate the success of text-to-image (T2I) generation, recent works employ large-scale video datasets to train a text-to-video (T..
[MCCNet Paper Review] - Arbitrary Video Style Transfer via Multi-Channel Correlation *This is a paper review of MCCNet! If you have any questions, please leave a comment! MCCNet paper: [2009.08003] Arbitrary Video Style Transfer via Multi-Channel Correlation (arxiv.org) "Video style transfer is getting more attention in AI community for its numerous applications such as augmented reality and animation productions. Compared with traditional image style transfer, ..
