
loss (3)
[MoH Paper Review] - MULTI-HEAD ATTENTION AS MIXTURE-OF-HEAD ATTENTION *This is a paper review post for MoH! If you have any questions, please leave them in the comments! MoH paper: [2410.11842] MoH: Multi-Head Attention as Mixture-of-Head Attention (arxiv.org) MoH: Multi-Head Attention as Mixture-of-Head Attention In this work, we upgrade the multi-head attention mechanism, the core of the Transformer model, to improve efficiency while maintaining or surpassing the previous accuracy level. We show that multi-head attentio..
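
The idea the excerpt describes is to treat the attention heads themselves as experts in a Mixture-of-Experts layer, with a router activating only a subset of heads per token. Below is a minimal sketch of that routing idea, assuming a simple linear router with top-k gating; the class name MoHAttention and the routing details are illustrative simplifications, not the paper's exact implementation (the paper additionally keeps a set of always-active shared heads).

'''
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoHAttention(nn.Module):
    """Multi-head attention where a router picks top-k heads per token (sketch)."""
    def __init__(self, dim, num_heads, top_k):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads, self.top_k = num_heads, top_k
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.router = nn.Linear(dim, num_heads)  # one routing score per head, per token
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        B, T, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape to (B, heads, T, head_dim) for per-head attention
        q = q.view(B, T, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(B, T, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(B, T, self.num_heads, self.head_dim).transpose(1, 2)
        heads = F.scaled_dot_product_attention(q, k, v).transpose(1, 2)  # (B, T, H, D)

        # Router: keep only the top-k heads per token, weighted by softmaxed scores;
        # the other heads get weight 0, so their outputs are dropped for that token.
        scores = self.router(x)                                   # (B, T, H)
        topk_val, topk_idx = scores.topk(self.top_k, dim=-1)
        gate = torch.zeros_like(scores).scatter(-1, topk_idx, topk_val.softmax(dim=-1))
        out = (heads * gate.unsqueeze(-1)).reshape(B, T, C)
        return self.proj(out)

# Usage: activate 4 of 8 heads per token.
# moh = MoHAttention(dim=512, num_heads=8, top_k=4)
# y = moh(torch.randn(2, 16, 512))   # (2, 16, 512)
'''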
[DDPM Paper Review] - Denoising Diffusion Probabilistic Models *This is a paper review post for DDPM! If you have any questions, please leave them in the comments! DDPM paper: https://arxiv.org/abs/2006.11239 Denoising Diffusion Probabilistic Models We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound arxiv.org DDPM..
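
The "weighted variational bound" the abstract mentions reduces, in the paper's simplified form, to a noise-prediction MSE: corrupt x_0 with Gaussian noise at a random timestep t and train the network to recover that noise. A minimal sketch of one training step under that simplified objective, assuming image-shaped inputs (B, C, H, W), the paper's linear beta schedule, and a placeholder model(x_t, t):

'''
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # linear beta schedule from the paper
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative product \bar{alpha}_t

def ddpm_loss(model, x0):
    b = x0.size(0)
    t = torch.randint(0, T, (b,), device=x0.device)   # random timestep per sample
    noise = torch.randn_like(x0)                      # epsilon ~ N(0, I)
    a_bar = alphas_bar.to(x0.device)[t].view(b, 1, 1, 1)
    # Forward process in closed form: x_t = sqrt(a_bar) * x_0 + sqrt(1 - a_bar) * eps
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
    # Simplified objective L_simple: predict the added noise with an MSE loss
    return F.mse_loss(model(x_t, t), noise)
'''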
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1024, 1024]], which is output 0 of AsStridedBackward0, is at version 2; expected version 1 instead. Full error message ''' RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1024, 1024]], which is output 0 of AsStridedBackward0, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True) ''' The git.. I was trying to implement
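
This error typically means a tensor that autograd saved for the backward pass was mutated in place (+=, relu_(), an optimizer step, etc.) before backward() ran, so its version counter no longer matches the saved version. A minimal repro of the same failure mode and a common fix; the shapes here are illustrative and need not match the [1024, 1024] tensor in the traceback above:

'''
import torch

# The hint from the traceback: with anomaly detection on, the backward error
# also reports the forward op whose saved tensor was modified.
torch.autograd.set_detect_anomaly(True)

a = torch.randn(1024, requires_grad=True)
b = torch.exp(a)    # exp saves its output b for use in the backward pass
b += 1.0            # in-place add bumps b's version counter -> version mismatch
# b = b + 1.0       # fix: the out-of-place version leaves the saved tensor intact
b.sum().backward()  # -> RuntimeError: ... modified by an inplace operation
'''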
