This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct) on the anghabench_armv8 dataset. It achieves the following results on the evaluation set:

- Loss: 0.0013

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
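## How to use

A minimal inference sketch using the `transformers` library. The repository id and the prompt wording below are placeholders: the exact prompt format used for the anghabench_armv8 fine-tune is not documented in this card, and the model is assumed to keep the chat template it inherits from Qwen2.5-Coder-1.5B-Instruct.

```python
# Minimal usage sketch. The repository id below is a placeholder, and the
# prompt wording is illustrative; the exact prompt format used during
# fine-tuning is not documented in this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/qwen2.5-coder-1.5b-anghabench-armv8"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Example ARMv8 (AArch64) assembly to translate; replace with your own input.
asm = """\
foo:
        add     w0, w0, w1
        ret
"""

messages = [
    {"role": "user", "content": "Translate the following ARMv8 assembly to C:\n\n" + asm}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```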
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
### Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.0009        | 1.0309 | 61000 | 0.0013          |
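For orientation, here is a sketch of how a fine-tuning run of this kind is typically configured with the `transformers` Trainer API. Every value and name below is a placeholder chosen for illustration, not the configuration actually used to train this model.

```python
# Illustrative fine-tuning configuration only: all values are placeholders,
# not the hyperparameters actually used for this model.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="qwen2.5-coder-1.5b-anghabench-armv8",  # placeholder output path
    learning_rate=2e-5,                   # placeholder
    per_device_train_batch_size=4,        # placeholder
    gradient_accumulation_steps=8,        # placeholder
    num_train_epochs=2,                   # placeholder
    logging_steps=100,                    # placeholder
    eval_strategy="steps",                # `evaluation_strategy` in older transformers releases
    eval_steps=1000,                      # placeholder; the results table reports eval at step 61000
    save_steps=1000,                      # placeholder
    bf16=True,                            # placeholder precision setting
)
# These arguments would then be passed to transformers.Trainer (or trl's SFTTrainer)
# together with the base model, its tokenizer, and the anghabench_armv8 dataset.
```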