---
library_name: transformers
tags: []
---
# Model Card for move-llm
<!-- Provide a quick summary of what the model is/does. -->
We developed a Large Language Model (LLM) on top of DeepSeek that achieves ChatGPT-4-level performance on the Move programming language. The model offers advanced code generation, error handling, and context-aware support optimized for Move's unique requirements, providing reliable, high-performance assistance for smart contract and blockchain development within the Move ecosystem.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [FLock.io](https://www.flock.io/)
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
Get started with the following example, which loads the model and generates a completion for a Move-related prompt:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("flock-io/move-llm")
model = AutoModelForCausalLM.from_pretrained("flock-io/move-llm")

# Build the prompt: the system prompt followed by your own request
sys_prompt = "You are an expert in Aptos Move programming language."
input_text = sys_prompt + "\n" + "Your input text here"

# Tokenize the input text
inputs = tokenizer(input_text, return_tensors="pt")

# Generate and decode the completion
outputs = model.generate(**inputs, max_length=1024)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
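For GPU inference, loading the weights in half precision usually reduces memory use and speeds up generation. The snippet below is a minimal sketch, assuming a CUDA device is available, the `accelerate` package is installed, and the checkpoint loads cleanly in `float16`; the example user prompt is illustrative only.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("flock-io/move-llm")
model = AutoModelForCausalLM.from_pretrained(
    "flock-io/move-llm",
    torch_dtype=torch.float16,  # assumption: the weights load cleanly in fp16
    device_map="auto",          # requires the `accelerate` package
)

# Example prompt (illustrative): ask for a simple Move module
sys_prompt = "You are an expert in Aptos Move programming language."
user_prompt = "Write a Move module that defines a simple counter resource."
inputs = tokenizer(sys_prompt + "\n" + user_prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```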