# SegFormer_b2_10
This model is a fine-tuned version of [nvidia/segformer-b2-finetuned-cityscapes-1024-1024](https://huggingface.co/nvidia/segformer-b2-finetuned-cityscapes-1024-1024) on an unspecified dataset; the per-class metrics follow the Cityscapes label set. It achieves the following results on the evaluation set (a brief usage sketch follows the list):
- eval_loss: 0.8097
- eval_mean_iou: 0.7782
- eval_mean_accuracy: 0.8662
- eval_overall_accuracy: 0.9610
- eval_accuracy_road: 0.9903
- eval_accuracy_sidewalk: 0.9398
- eval_accuracy_building: 0.9628
- eval_accuracy_wall: 0.6563
- eval_accuracy_fence: 0.6967
- eval_accuracy_pole: 0.7388
- eval_accuracy_traffic light: 0.8695
- eval_accuracy_traffic sign: 0.8842
- eval_accuracy_vegetation: 0.9628
- eval_accuracy_terrain: 0.7418
- eval_accuracy_sky: 0.9803
- eval_accuracy_person: 0.9096
- eval_accuracy_rider: 0.7585
- eval_accuracy_car: 0.9782
- eval_accuracy_truck: 0.8827
- eval_accuracy_bus: 0.9472
- eval_accuracy_train: 0.8521
- eval_accuracy_motorcycle: 0.8134
- eval_accuracy_bicycle: 0.8929
- eval_iou_road: 0.9845
- eval_iou_sidewalk: 0.8702
- eval_iou_building: 0.9233
- eval_iou_wall: 0.5943
- eval_iou_fence: 0.5845
- eval_iou_pole: 0.5944
- eval_iou_traffic light: 0.6817
- eval_iou_traffic sign: 0.7808
- eval_iou_vegetation: 0.9255
- eval_iou_terrain: 0.6590
- eval_iou_sky: 0.9506
- eval_iou_person: 0.7943
- eval_iou_rider: 0.5888
- eval_iou_car: 0.9484
- eval_iou_truck: 0.8282
- eval_iou_bus: 0.8676
- eval_iou_train: 0.8049
- eval_iou_motorcycle: 0.6449
- eval_iou_bicycle: 0.7604
- eval_runtime: 188.4044
- eval_samples_per_second: 2.654
- eval_steps_per_second: 0.663
- epoch: 34.4086
- step: 6400
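
The checkpoint can be loaded with the standard Transformers SegFormer classes. The sketch below is a minimal example, not taken from the card: the repository id `your-username/SegFormer_b2_10` and the input image path are placeholders for wherever this checkpoint is actually published.

```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

# Placeholder repository id; replace with the actual location of this checkpoint.
checkpoint = "your-username/SegFormer_b2_10"

processor = SegformerImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)
model.eval()

image = Image.open("example_street_scene.jpg").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

# Upsample the logits to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0]  # (H, W) tensor of class ids
```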
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
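
As a rough guide, the settings above map onto a Transformers `TrainingArguments` configuration as sketched below. The output directory is a placeholder, and dataset/Trainer wiring is omitted since it is not described in the card.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; "segformer_b2_10" is a placeholder output directory.
training_args = TrainingArguments(
    output_dir="segformer_b2_10",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=4,   # effective train batch size: 4 * 4 = 16
    optim="adamw_torch",             # AdamW with default betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=100,
    fp16=True,                       # native AMP mixed precision
)
```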
### Framework versions
- Transformers 4.48.1
- Pytorch 2.1.2+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0