AI (108)

Python argparse: what action='store_true' means

# NeRF code
def config_parser():
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("--render_only", action='store_true',
                        help='do not optimize, reload weights and render out render_poses path')
    parser.add_argument("--render_test", action='store_true',
                        help='render the test set instead of render_poses path')
    return parser

Looking around GitHub these days, the pattern of taking arguments with argparse and then running training is ver..

[StylizedNeRF Paper Review] - StylizedNeRF: Consistent 3D Scene Stylization as Stylized NeRF via 2D-3D Mutual Learning

*This is a paper review of StylizedNeRF! If you have any questions, please leave a comment!
StylizedNeRF project page: StylizedNeRF (geometrylearning.com)
StylizedNeRF github: Gi..

[AdaIN Paper Review] - Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization

*This is a paper review of AdaIN! If you have any questions, please leave a comment!
AdaIN paper: [1703.06868] Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization (arxiv.org)

[Relevance-CAM Paper Review] - Relevance-CAM: Your Model Already Knows Where to Look

*This is a paper review of Relevance-CAM! If you have any questions, please leave a comment!
Relevance-CAM paper: Relevance-CAM: Your Model Already Knows Where To Look (thecvf.com)
Relevance-CAM github: GitHub - mongeoroo/Relevance-CAM: The official code of Relevance-CAM
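As a minimal sketch of the behavior behind the argparse entry above: a flag declared with action='store_true' defaults to False and flips to True when the flag is present on the command line, with no value required after it. (The parse_args lists below stand in for real command-line input.)

```python
import argparse

# action='store_true' makes a boolean flag: absent -> False, present -> True
parser = argparse.ArgumentParser()
parser.add_argument("--render_only", action='store_true',
                    help='do not optimize, reload weights and render out render_poses path')

args = parser.parse_args([])                 # flag not given
print(args.render_only)                      # False

args = parser.parse_args(["--render_only"])  # flag given, no value needed
print(args.render_only)                      # True
```

So in the NeRF script, passing --render_only is enough to switch the run into render mode; there is no need to write --render_only True.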
[Grad-CAM++ Paper Review] - Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks

*This is a paper review of Grad-CAM++. If you have any questions, please leave a comment.
*Warning: lots of equations!! (but they are not difficult!)
Grad-CAM++ paper: [1710.11063] Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks (arxiv.org)

[Grad-CAM Paper Review] - Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization

*This is a paper review of Grad-CAM. If you have any questions, please leave a comment.
Grad-CAM paper: [1610.02391] Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization (arxiv.org)

[CAM Paper Review] - Learning Deep Features for Discriminative Localization

*A review of the CAM paper, one of the most widely used methods in XAI. If you have any questions, please leave a comment.
CAM paper: [1512.04150] Learning Deep Features for Discriminative Localization (arxiv.org)

[Saliency Map Paper Review] - Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps

*A foundational paper for eXplainable AI. If you have a question, please leave a comment.
Deep Inside Convolutional Networks paper: [1312.6034] Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps (arxiv.org)
[SSL Swin UNETR Paper Review] - Self-Supervised Pre-Training of Swin Transformers for 3D Medical Image Analysis

*This is a review of the Swin UNETR paper that uses self-supervised learning! If you have any questions, please leave a comment!
SSL Swin UNETR paper: [2111.14791] Self-Supervised Pre-Training of Swin Transformers for 3D Medical Image Analysis (arxiv.org)

[Swin UNETR Paper Review] - Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images

*This is a review of the Swin UNETR paper. If you have any questions, please leave a comment.
Swin UNETR paper: [2201.01266] Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images (arxiv.org)

[CLIP-NeRF Paper Review] - CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields

*This is a review of the CLIP-NeRF paper. If you have any questions, please leave a comment!
CLIP-NeRF paper: [2112.05139] CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields (arxiv.org)

[ViT for NeRF Paper Review] - Vision Transformer for NeRF-Based View Synthesis from a Single Input Image

*This is a paper review of Vision Transformer for NeRF! If you have any questions, please leave a comment!
Vision Transformer for NeRF paper: [2207.05736] Vision Transformer for NeRF-Based View Synthesis from a Single Input Image (arxiv.org)