Vision transformers with Inductive Bias introduced through self-attention regularization
http://hdl.handle.net/10911/00040960
| Item type | Thesis or Dissertation |
|---|---|
| Publication date | 2023-03-23 |
| Title | Vision transformers with Inductive Bias introduced through self-attention regularization (en) |
| Language | eng |
| Keywords | Inductive Bias; Vision Transformer; Self-supervised Learning (en, subject scheme: Other) |
| Resource type | doctoral thesis (http://purl.org/coar/resource_type/c_db06) |
| Access rights | open access (http://purl.org/coar/access_right/c_abf2) |
| Author | BARBOSA, MORMILLE LUIZ HENRIQUE |
| Abstract | In recent years, the Transformer has achieved remarkable results in computer vision tasks, matching or even surpassing those of convolutional neural networks (CNNs). However, unlike CNNs, these vision transformers lack strong inductive biases and, to achieve state-of-the-art results, rely on large architectures and extensive pre-training on tens of millions of images. Introducing the appropriate inductive biases to vision transformers can lead to better convergence and generalization in settings with less training data. This work presents a novel way to introduce inductive biases to vision transformers: self-attention regularization. Two different methods of self-attention regularization were devised. Furthermore, this work proposes ARViT, a novel vision transformer architecture in which both self-attention regularization methods are deployed. The experimental results demonstrate that self-attention regularization leads to better convergence and generalization, especially for models pre-trained on mid-size datasets. |
| Degree name | Doctor of Engineering |
| Degree-granting institution | Soka University |
| Date of degree conferral | 2023-03-18 |
| Degree number | 甲第198号 |
| Publication type | VoR, Version of Record (http://purl.org/coar/version/c_970fb48d4fbd8a85) |
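
The abstract describes introducing inductive bias into a vision transformer by regularizing its self-attention maps, but the two regularizers themselves are not spelled out in this record. As a rough illustration only, the PyTorch sketch below shows one common way such a bias can be imposed: penalizing attention mass that falls far from each query patch, i.e. a locality prior. The function name, the distance-based penalty, and the 14x14 patch grid are assumptions made for this example, not the ARViT method itself.

```python
import torch

def locality_regularizer(attn: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
    """Penalize attention mass placed on spatially distant patches.

    attn:   (batch, heads, n, n) row-stochastic self-attention maps
            over patch tokens only (a class token would need its
            row/column excluded).
    coords: (n, 2) 2-D grid position of each patch token.
    """
    dist2 = torch.cdist(coords, coords).pow(2)  # (n, n) squared distances
    # Each attention row sums to 1, so this is the expected squared
    # query-to-key distance, averaged over queries, heads, and batch.
    return (attn * dist2).sum(dim=-1).mean()

# Example: a 14x14 patch grid (196 tokens), as in a ViT-B/16 at 224x224 input.
side = 14
ys, xs = torch.meshgrid(torch.arange(side), torch.arange(side), indexing="ij")
coords = torch.stack([ys, xs], dim=-1).reshape(-1, 2).float()

# Stand-in attention maps; in training these come from the transformer blocks.
attn = torch.softmax(torch.randn(2, 12, side * side, side * side), dim=-1)
penalty = locality_regularizer(attn, coords)
# loss = task_loss + reg_weight * penalty   (reg_weight is a tuning knob)
```

Because each attention row is a probability distribution, the penalty is the expected squared query-to-key distance; adding it to the task loss with a small coefficient nudges heads toward local attention, a CNN-like bias, without hard-coding convolutional connectivity.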