
Reviews (5)
[LGM Paper Review] Large Multi-View Gaussian Model for High-Resolution 3D Content Creation *This is a paper review post for LGM! Please leave any questions in the comments! LGM github: LGM (kiui.moe)  LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation, arXiv 2024. Jiaxiang Tang, Zhaoxi Chen, Xiaokang Chen, Tengfei Wang, Gang Zeng, Ziwei Liu (Peking University; S-Lab, Nanyang Technological University; Shanghai AI Laboratory). me.kiui.moe Contents 1. Simple Introduction 2. Background Knowledge: Gaussia..
[Instant-NGP Paper Review] - Instant Neural Graphics Primitives with a Multiresolution Hash Encoding *The goal of this post: fully understand hash encoding!!! (Crush it!!) *This is a paper review post for Instant-NGP! Please leave any questions in the comments! Instant-NGP paper: nvlabs.github.io/instant-ngp/assets/mueller2022instant.pdf Instant-NGP github: GitHub - NVlabs/instant-ngp: Instant neural graphics primitives: lightning fast NeRF and more Instant neural graph..
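Since the Instant-NGP review above is centered on the multiresolution hash encoding, a tiny standalone sketch of that idea may help as a preview before reading the full post. This is only an illustration under assumed settings: the number of levels, table size, feature width, and hash constants below are my own choices, not the values used in the paper or the NVlabs code.

```python
# Minimal sketch of a multiresolution hash encoding for 2D points.
# Illustrative only: table size, level resolutions, and the hashing
# primes below are assumptions, not the paper's exact settings.
import numpy as np

TABLE_SIZE = 2 ** 14          # entries per level (T)
FEATURE_DIM = 2               # features per entry (F)
LEVELS = [16, 32, 64, 128]    # grid resolutions, coarse to fine
PRIMES = np.array([1, 2654435761], dtype=np.uint64)  # spatial-hash primes

rng = np.random.default_rng(0)
# One learnable table per level; random values stand in for trained ones here.
tables = [rng.normal(0, 1e-2, size=(TABLE_SIZE, FEATURE_DIM)) for _ in LEVELS]

def hash_corner(ix, iy):
    """Map an integer grid corner to a table index via XOR of prime-multiplied coords."""
    h = (np.uint64(ix) * PRIMES[0]) ^ (np.uint64(iy) * PRIMES[1])
    return int(h % TABLE_SIZE)

def encode(point):
    """Encode a point in [0,1]^2 as concatenated bilinearly interpolated features."""
    feats = []
    for table, res in zip(tables, LEVELS):
        x, y = point[0] * res, point[1] * res
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        tx, ty = x - x0, y - y0
        # Look up the four surrounding corners of this level's grid cell.
        c00 = table[hash_corner(x0,     y0)]
        c10 = table[hash_corner(x0 + 1, y0)]
        c01 = table[hash_corner(x0,     y0 + 1)]
        c11 = table[hash_corner(x0 + 1, y0 + 1)]
        top = c00 * (1 - tx) + c10 * tx
        bot = c01 * (1 - tx) + c11 * tx
        feats.append(top * (1 - ty) + bot * ty)
    return np.concatenate(feats)   # length = len(LEVELS) * FEATURE_DIM

print(encode(np.array([0.3, 0.7])).shape)  # (8,)
```

Each level hashes the corners of its grid cell into a small learnable table and bilinearly interpolates the looked-up features; the concatenated per-level features are what a small MLP would then consume.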
[Instant-stylization-NeRF Paper Review] - Instant Neural Radiance Fields Stylization *This is a paper review post for Instant Neural Radiance Fields Stylization! Please leave any questions in the comments! Instant Neural Radiance Fields Stylization paper: [2303.16884] Instant Neural Radiance Fields Stylization (arxiv.org) Instant Neural Radiance Fields Stylization We present Instant Neural Radiance Fields Stylization, a novel approach for multi-view image stylization for the 3D scene. Our approach models a neural radian..
[LoRA Paper Review] - LOW-RANK ADAPTATION OF LARGE LANGUAGE MODELS *This is a paper review post for LoRA! Please leave any questions in the comments! LoRA paper: https://arxiv.org/abs/2106.09685 LoRA: Low-Rank Adaptation of Large Language Models An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes le.. (arxiv.org)
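The LoRA entry above is about low-rank adaptation itself, so here is a minimal sketch of the core idea under assumed sizes: the pretrained weight stays frozen and only two small low-rank factors are trained. The class name LoRALinear, the rank, and the alpha scaling below are illustrative assumptions, not the reference implementation.

```python
# Minimal sketch of the LoRA idea: freeze a pretrained weight W and learn
# a low-rank update B @ A so the adapted layer computes x @ (W + B A)^T.
# Sizes, rank, and scaling are illustrative assumptions.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=4, alpha=8):
        super().__init__()
        # Pretrained weight: kept frozen during fine-tuning.
        self.weight = nn.Parameter(torch.randn(out_features, in_features), requires_grad=False)
        # Low-rank factors: A projects down to `rank`, B projects back up.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))  # zero init => no change at start
        self.scaling = alpha / rank

    def forward(self, x):
        frozen = x @ self.weight.t()
        update = (x @ self.lora_A.t()) @ self.lora_B.t() * self.scaling
        return frozen + update

layer = LoRALinear(in_features=64, out_features=64, rank=4)
x = torch.randn(2, 64)
print(layer(x).shape)   # torch.Size([2, 64])
print([n for n, p in layer.named_parameters() if p.requires_grad])  # ['lora_A', 'lora_B']
```

Because lora_B starts at zero, the adapted layer initially matches the frozen layer exactly, and only the two small factors receive gradients during fine-tuning.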
[ChatGPT Review] - GPT and Reinforcement Learning from Human Feedback *This is a post explaining ChatGPT! Please leave any questions in the comments! InstructGPT: https://openai.com/research/instruction-following#guide Aligning language models to follow instructions We’ve trained language models that are much better at following user intentions than GPT-3 while also making them more truthful and less toxic, using techniques developed through our alignment research. These InstructGPT models, which ar..
