
Paper Reviews (35)
[Saliency Map Paper Review] - Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps *This paper is foundational to eXplainable AI. If you have any questions, please leave a comment. Deep Inside Convolutional Networks paper: [1312.6034] Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps (arxiv.org) This paper addresses the visualisation of image classification models, learnt using deep Convo..
[Swin UNETR Paper Review] - Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images *This post is a review of the Swin UNETR paper. If you have any questions, please leave a comment. Swin UNETR paper: [2201.01266] Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images (arxiv.org) Semantic segmentation of brain tumors is a fundamental medical image analysis task involving multiple MRI imaging modalities that can as..
[CLIP-NeRF Paper Review] - CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields *This post is a review of the CLIP-NeRF paper. If you have any questions, please leave a comment! CLIP-NeRF paper: [2112.05139] CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields (arxiv.org) We present CLIP-NeRF, a multi-modal 3D object manipulation method for neural radiance fields (NeRF). By leveraging the joint language-image embedding space of t..
[ViT for NeRF Paper Review] - Vision Transformer for NeRF-Based View Synthesis from a Single Input Image *This post is a review of the Vision Transformer for NeRF paper! If you have any questions, please leave a comment! Vision Transformer for NeRF paper: [2207.05736] Vision Transformer for NeRF-Based View Synthesis from a Single Input Image (arxiv.org) Although neural radiance fields (NeRF) have shown impressive advances for novel view synthesis, most methods typically ..
[VDE Paper Review] - Vehicle Distance Estimation from a Monocular Camera for Advanced Driver Assistance Systems *This post is a review of the VDE paper! If you have any questions, please leave a comment! *This paper presents a deep learning approach to distance estimation for self-driving cars. VDE(ODD) paper: https://www.mdpi.com/2073-8994/14/12/2657 The purpose of this study is to propose a framework for accurate and efficient vehicle distance estimation from a monocular camera. The pr..
[GLPDepth Paper Review] - Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth *This post is a review of the GLPDepth paper! If you have any questions, please leave a comment! GLPDepth paper: [2201.07436] Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth (arxiv.org) Depth estimation from a single image is an important task that can be applied to various fields in computer vision, and has grown rapidly with the ..
[DETR Paper Review] - End-to-End Object Detection with Transformers *This post is a review of the DETR paper! If you have any questions, please leave a comment. DETR paper: [2005.12872] End-to-End Object Detection with Transformers (arxiv.org) We present a new method that views object detection as a direct set prediction problem. Our approach streamlines the detection pipeline, effectively removing the need for many hand-designed components like a non-maximum supp..
[NeRF-Art Paper Review] - NeRF-Art: Text-Driven Neural Radiance Fields Stylization *This post is a review of the NeRF-Art paper! If you have any questions, please leave a comment! NeRF-Art paper: [2212.08070] NeRF-Art: Text-Driven Neural Radiance Fields Stylization (arxiv.org) As a powerful representation of 3D scenes, the neural radiance field (NeRF) enables high-quality novel view synthesis from multi-view images. Stylizing NeRF, however, remains challenging, especially..
[NeRF++ Paper Review] - NeRF++: Analyzing and Improving Neural Radiance Fields *This post is a review of the NeRF++ paper! If you have any questions, please leave a comment. *This is a fairly demanding paper, so it will be hard to follow without first understanding NeRF. NeRF++ paper: [2010.07492] NeRF++: Analyzing and Improving Neural Radiance Fields (arxiv.org) Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a variety of capture settings, including 360 capture of bounded scenes a..
[CLIP Paper Review] - Learning Transferable Visual Models From Natural Language Supervision *This post is a review of the CLIP paper. If you have any questions, please leave a comment! CLIP paper: [2103.00020] Learning Transferable Visual Models From Natural Language Supervision (arxiv.org) State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and..
[BYOL Paper Review] - Bootstrap your own latent: A new approach to self-supervised Learning *This post analyzes the BYOL paper together with its code! I hope it helps those getting started with SSL; if you have any questions, please leave a comment. *BYOL is a non-contrastive learning method. BYOL paper: https://arxiv.org/abs/2006.07733 We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning. BYOL relies on two neural networks, referred to as online and ..
[MoCo Paper Review] - Momentum Contrast for Unsupervised Visual Representation Learning *This post analyzes the MoCo paper together with its code! I hope it helps those getting started with SSL; if you have any questions, please leave a comment. *Among SSL (self-supervised learning) approaches, it focuses on contrastive learning! MoCo paper: [1911.05722] Momentum Contrast for Unsupervised Visual Representation Learning (arxiv.org) We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a..