
AI (105)
[Relevance-CAM Paper Review] - Relevance-CAM: Your Model Already Knows Where to Look *This is a paper review post for Relevance-CAM! If you have any questions, please leave a comment! Relevance-CAM paper: Relevance-CAM: Your Model Already Knows Where To Look (thecvf.com) Relevance-CAM github: GitHub - mongeoroo/Relevance-CAM: The official code of Relevance-CAM
[Grad-CAM++ Paper Review] - Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks *This is a paper review post for Grad-CAM++. If you have any questions, please leave a comment. *Heads-up: lots of equations!! (Nothing difficult, though!) Grad-CAM++ paper: [1710.11063] Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks (arxiv.org) Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks Over the last decade, Convolutional Neural Network (CNN) models have been highly successful in solving complex vision problems. However, these ..
[Grad-CAM Paper Review] - Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization *This is a paper review post for Grad-CAM. If you have any questions, please leave a comment. Grad-CAM paper: [1610.02391] Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization (arxiv.org) Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization We propose a technique for producing "visual explanations" for decisions from a large class of CNN-based models, making them more transparent. Our approac..
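
To make the "gradient-based localization" idea concrete, here is a minimal Grad-CAM sketch in PyTorch: the class-score gradient is global-average-pooled over the spatial dimensions to weight the last convolutional feature maps. The model, hooks, and layer choice (a torchvision ResNet's `layer4`) are illustrative assumptions, not the authors' released code.

```python
# A minimal Grad-CAM sketch, not the authors' released code. Assumes a
# torchvision ResNet and an ImageNet-preprocessed input x of shape (1, 3, H, W).
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(weights="IMAGENET1K_V1").eval()

feats, grads = {}, {}
target_layer = model.layer4  # last conv block: a typical Grad-CAM target

target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

def grad_cam(x, class_idx=None):
    logits = model(x)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    # Channel weights = gradients global-average-pooled over space
    w = grads["a"].mean(dim=(2, 3), keepdim=True)            # (1, C, 1, 1)
    cam = F.relu((w * feats["a"]).sum(dim=1, keepdim=True))  # (1, 1, h, w)
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8)).detach()

heatmap = grad_cam(torch.randn(1, 3, 224, 224))  # dummy input for illustration
```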
[CAM Paper Review] - Learning Deep Features for Discriminative Localization *This is a review of the CAM paper, one of the most widely used methods in XAI. If you have any questions, please leave a comment. CAM paper: [1512.04150] Learning Deep Features for Discriminative Localization (arxiv.org) Learning Deep Features for Discriminative Localization In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despit..
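
As a quick illustration of the mechanism the abstract describes, the sketch below computes a CAM for a GAP-based classifier by weighting the last convolutional feature maps with the target class's linear-layer weights. The torchvision ResNet-18 and variable names are my own assumptions for the example.

```python
# A minimal CAM sketch, assuming torchvision's ResNet-18 (GAP followed by a
# single linear layer); names are illustrative, not the authors' released code.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()
backbone = torch.nn.Sequential(*list(model.children())[:-2])  # drop GAP + fc

@torch.no_grad()
def cam(x, class_idx):
    fmaps = backbone(x)                               # (1, C, h, w)
    # CAM_k(x, y) = sum_c w_c^k * f_c(x, y): classifier weights of class k
    w = model.fc.weight[class_idx].view(1, -1, 1, 1)  # (1, C, 1, 1)
    m = (w * fmaps).sum(dim=1, keepdim=True)          # (1, 1, h, w)
    m = F.interpolate(m, size=x.shape[2:], mode="bilinear", align_corners=False)
    return (m - m.min()) / (m.max() - m.min() + 1e-8)

heatmap = cam(torch.randn(1, 3, 224, 224), class_idx=243)  # dummy input
```

Unlike Grad-CAM, no backward pass is needed: the GAP architecture makes the class weights directly reusable as feature-map weights.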
[Saliency Map Paper Review] - Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps *This paper is a foundation of eXplainable AI. If you have any questions, please leave a comment. Deep Inside Convolutional Networks paper: [1312.6034] Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps (arxiv.org) Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps This paper addresses the visualisation of image classification models, learnt using deep Convo..
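
The core idea is simple enough to show directly: the saliency map is the magnitude of the class-score gradient with respect to the input pixels, taking the maximum absolute value over the color channels. A minimal sketch, assuming a torchvision classifier as a stand-in for the ConvNet in the paper:

```python
# A minimal vanilla-gradient saliency sketch; model and input are illustrative.
import torch
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()

def saliency(x, class_idx=None):
    x = x.clone().requires_grad_(True)
    scores = model(x)[0]                 # class scores for one image
    if class_idx is None:
        class_idx = scores.argmax().item()
    scores[class_idx].backward()         # d(score) / d(input pixels)
    # Per the paper: max of |gradient| over the color channels
    return x.grad.abs().max(dim=1)[0]    # (1, H, W)

smap = saliency(torch.randn(1, 3, 224, 224))  # dummy input for illustration
```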
[SSL Swin UNETR Paper Review] - Self-Supervised Pre-Training of Swin Transformers for 3D Medical Image Analysis *This is a paper review post for Swin UNETR with self-supervised learning. If you have any questions, please leave a comment! SSL Swin UNETR paper: [2111.14791] Self-Supervised Pre-Training of Swin Transformers for 3D Medical Image Analysis (arxiv.org) Self-Supervised Pre-Training of Swin Transformers for 3D Medical Image Analysis Vision Transformers (ViT)s have shown great performance in self-supervised learning of global and local representation..
[Swin UNETR Paper Review] - Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images *This is a paper review post for Swin UNETR. If you have any questions, please leave a comment. Swin UNETR: [2201.01266] Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images (arxiv.org) Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images Semantic segmentation of brain tumors is a fundamental medical image analysis task involving multiple MRI imaging modalities that can as..
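
For reference, this is roughly how a Swin UNETR model is instantiated for BraTS-style inputs (4 MRI modalities in, 3 tumor subregions out) using MONAI. The argument set follows MONAI's public tutorial and may differ across MONAI versions, so treat it as a sketch rather than the paper's exact configuration.

```python
# A sketch of instantiating Swin UNETR with MONAI, assuming a BraTS-like
# setup. Argument names follow the MONAI tutorial and may vary by version.
import torch
from monai.networks.nets import SwinUNETR

model = SwinUNETR(
    img_size=(96, 96, 96),   # sliding-window patch size
    in_channels=4,           # FLAIR, T1, T1ce, T2
    out_channels=3,          # tumor core, whole tumor, enhancing tumor
    feature_size=48,
)

x = torch.randn(1, 4, 96, 96, 96)  # one 3D patch
logits = model(x)                  # (1, 3, 96, 96, 96)
```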
[CLIP-NeRF Paper Review] - CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields *This is a paper review post for CLIP-NeRF. If you have any questions, please leave a comment! CLIP-NeRF paper: [2112.05139] CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields (arxiv.org) CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields We present CLIP-NeRF, a multi-modal 3D object manipulation method for neural radiance fields (NeRF). By leveraging the joint language-image embedding space of t..
[ViT for NeRF Paper Review] - Vision Transformer for NeRF-Based View Synthesis from a Single Input Image *This is a paper review post for Vision Transformer for NeRF! If you have any questions, please leave a comment! Vision Transformer for NeRF paper: [2207.05736] Vision Transformer for NeRF-Based View Synthesis from a Single Input Image (arxiv.org) Vision Transformer for NeRF-Based View Synthesis from a Single Input Image Although neural radiance fields (NeRF) have shown impressive advances for novel view synthesis, most methods typically ..
[VDE Paper Review] - Vehicle Distance Estimation from a Monocular Camera for Advanced Driver Assistance Systems *This is a paper review post for VDE! If you have any questions, please leave a comment! *This paper presents a deep learning framework for distance estimation in self-driving cars. VDE(ODD) paper: https://www.mdpi.com/2073-8994/14/12/2657 Vehicle Distance Estimation from a Monocular Camera for Advanced Driver Assistance Systems The purpose of this study is to propose a framework for accurate and efficient vehicle distance estimation from a monocular camera. The pr..
[GLPDepth Paper Review] - Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth *This is a paper review post for GLPDepth! If you have any questions, please leave a comment! GLPDepth paper: [2201.07436] Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth (arxiv.org) Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth Depth estimation from a single image is an important task that can be applied to various fields in computer vision, and has grown rapidly with the ..
[DETR Paper Review] - End-to-End Object Detection with Transformers *This is a paper review post for DETR! If you have any questions, please leave a comment. DETR paper: [2005.12872] End-to-End Object Detection with Transformers (arxiv.org) End-to-End Object Detection with Transformers We present a new method that views object detection as a direct set prediction problem. Our approach streamlines the detection pipeline, effectively removing the need for many hand-designed components like a non-maximum supp..
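
The "direct set prediction" phrase is the key mechanism: each of the model's object queries is matched one-to-one to a ground-truth object by minimizing a matching cost with the Hungarian algorithm, which is what removes the need for NMS. Below is a minimal sketch of that matching step, assuming a cost with only the class and L1-box terms (the paper also adds a generalized-IoU term); all names and shapes are illustrative.

```python
# A sketch of DETR-style bipartite matching for one image; illustrative only.
import torch
from scipy.optimize import linear_sum_assignment

def hungarian_match(pred_logits, pred_boxes, tgt_labels, tgt_boxes):
    # pred_logits: (Q, K+1) query class logits, pred_boxes: (Q, 4) in cxcywh
    # tgt_labels: (T,) ground-truth classes, tgt_boxes: (T, 4)
    prob = pred_logits.softmax(-1)
    cost_class = -prob[:, tgt_labels]                    # (Q, T)
    cost_bbox = torch.cdist(pred_boxes, tgt_boxes, p=1)  # (Q, T) L1 distance
    cost = cost_class + 5.0 * cost_bbox                  # L1 weight as in the paper
    q_idx, t_idx = linear_sum_assignment(cost.detach().numpy())
    return q_idx, t_idx  # query q_idx[i] is assigned ground truth t_idx[i]
```

Queries left unmatched are trained to predict a "no object" class, which is what lets DETR drop NMS and the other hand-designed components the abstract mentions.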
