HyperAI超神经
Image Classification on CIFAR-10
Evaluation metric: Percentage correct

Results: performance of each model on this benchmark
| Model | Percentage correct | Paper Title |
|---|---|---|
| DINOv2 (ViT-g/14, frozen model, linear eval) | 99.5 | DINOv2: Learning Robust Visual Features without Supervision |
| ViT-H/14 | 99.5 | An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale |
| µ2Net (ViT-L/16) | 99.49 | An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems |
| ViT-L/16 | 99.42 | An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale |
| CaiT-M-36 U 224 | 99.4 | - |
| CvT-W24 | 99.39 | CvT: Introducing Convolutions to Vision Transformers |
| BiT-L (ResNet) | 99.37 | Big Transfer (BiT): General Visual Representation Learning |
| RDNet-L (224 res, IN-1K pretrained) | 99.31 | DenseNets Reloaded: Paradigm Shift Beyond ResNets and ViTs |
| RDNet-B (224 res, IN-1K pretrained) | 99.31 | DenseNets Reloaded: Paradigm Shift Beyond ResNets and ViTs |
| ViT-B (attn fine-tune) | 99.3 | Three things everyone should know about Vision Transformers |
| Heinsen Routing + BEiT-large 16 224 | 99.2 | An Algorithm for Routing Vectors in Sequences |
| ViT-B/16 (PUGD) | 99.13 | Perturbated Gradients Updating within Unit Space for Deep Learning |
| Astroformer | 99.12 | Astroformer: More Data Might not be all you need for Classification |
| CeiT-S (384 finetune resolution) | 99.1 | Incorporating Convolution Designs into Visual Transformers |
| TNT-B | 99.1 | Transformer in Transformer |
| DeiT-B | 99.1 | Training data-efficient image transformers & distillation through attention |
| EfficientNetV2-L | 99.1 | EfficientNetV2: Smaller Models and Faster Training |
| AutoFormer-S \| 384 | 99.1 | AutoFormer: Searching Transformers for Visual Recognition |
| ViT-L/16 (Spinal FC, Background) | 99.05 | Reduction of Class Activation Uncertainty with Background Information |
| LaNet | 99.03 | Sample-Efficient Neural Architecture Search by Learning Action Space for Monte Carlo Tree Search |
Showing the top 20 of 264 entries.
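The "Percentage correct" metric above is ordinary top-1 accuracy over the CIFAR-10 test set (10,000 images), expressed as a percentage. A minimal sketch of the computation (the function name and example values are illustrative, not from any listed paper):

```python
def percentage_correct(predictions, labels):
    """Top-1 accuracy as a percentage: share of test images whose
    predicted class matches the ground-truth label."""
    assert len(predictions) == len(labels) and len(labels) > 0
    correct = sum(p == y for p, y in zip(predictions, labels))
    return 100.0 * correct / len(labels)

# 3 of 4 predictions match the labels -> 75.0
print(percentage_correct([0, 1, 2, 3], [0, 1, 2, 9]))
```

On the full CIFAR-10 test set, a model with 9,950 correct predictions out of 10,000 would score 99.5, the format used in the leaderboard above.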