decoder (5)
[GPT-1 Paper Review] - Improving Language Understanding by Generative Pre-Training *This is a paper review of GPT-1! If you have any questions, please leave a comment! (I can't post to the blog often during the semester.. I'll write up ChatGPT later when I have time. For now, starting with the simpler GPT..haha) GPT-1 paper: https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf Contents 1. Simple Introduction 2. Background Knowledge: Transformer 3. Method - Unsupervised Stage - Supervised Stage 4. Result Simple Introduction Recently ..
[AdaIN Paper Review] - Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization *This is a paper review of AdaIN! If you have any questions, please leave a comment! AdaIN paper: [1703.06868] Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization (arxiv.org) Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization Gatys et al. recently introduced a neural algorithm that renders a content image in the style of another image, achieving so-called style transfer. However, their frame..
[Swin UNETR Paper Review] - Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images *This post is a review of the Swin UNETR paper. If you have any questions, please leave a comment. Swin UNETR: [2201.01266] Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images (arxiv.org) Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images Semantic segmentation of brain tumors is a fundamental medical image analysis task involving multiple MRI imaging modalities that can as..
[DETR Paper Review] - End-to-End Object Detection with Transformers *This is a review of the DETR paper! If you have any questions, please leave a comment. DETR paper: [2005.12872] End-to-End Object Detection with Transformers (arxiv.org) End-to-End Object Detection with Transformers We present a new method that views object detection as a direct set prediction problem. Our approach streamlines the detection pipeline, effectively removing the need for many hand-designed components like a non-maximum supp..
[Transformer Paper Review] - Attention is All You Need (2017) *This is a review of the Transformer paper; if you have any questions, feel free to leave a comment anytime! Transformer paper: https://arxiv.org/abs/1706.03762 Attention Is All You Need The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new ..