
# VietCoMath Model Usage

## Overview

This is an example snippet for running the small VietCoMath-o1 model on mathematical and coding problem solving as well as general multi-task prompts. The model wraps its reasoning in XML-style tags (`<count>`, `<step>`, `<reflection>`, `<answer>`, and optionally `<clarification>`), which the helper functions below extract.

## Helper Functions

```python
import re

def check_patterns(response):
    """
    Check if the response contains all required XML patterns.
    
    Args:
        response (str): The model's generated response
    
    Returns:
        str: Parsed response or 'Missing' if patterns are incomplete
    """
    patterns = {
        'answer': r'<answer>(.*?)</answer>',
        'reflection': r'<reflection>(.*?)</reflection>',
        'steps': r'<step>(.*?)</step>',
        'count': r'<count>(.*?)</count>'
    }
    
    matches = {
        'answer': re.search(patterns['answer'], response, re.DOTALL),
        'reflection': re.search(patterns['reflection'], response, re.DOTALL),
        'steps': re.findall(patterns['steps'], response, re.DOTALL),
        'count': re.findall(patterns['count'], response, re.DOTALL)
    }
    
    return "Missing" if not all([matches['answer'], matches['reflection'], matches['steps'], matches['count']]) else response

def parse_response(response):
    """
    Parse the model's response and extract key components.
    
    Args:
        response (str): The model's generated response
    
    Returns:
        tuple: Parsed answer, reflection, steps, and clarification
    """
    response_check = check_patterns(response)
    
    if response_check == "Missing":
        clarification_match = re.search(r'<clarification>(.*?)</clarification>', response, re.DOTALL)
        clarification = clarification_match.group(1).strip() if clarification_match else response
        return "", "", [], clarification
    else:
        answer_match = re.search(r'<answer>(.*?)</answer>', response, re.DOTALL)
        reflection_match = re.search(r'<reflection>(.*?)</reflection>', response, re.DOTALL)
        
        answer = answer_match.group(1).strip() if answer_match else ""
        reflection = reflection_match.group(1).strip() if reflection_match else ""
        steps = re.findall(r'<step>(.*?)</step>', response, re.DOTALL)
        
        return answer, reflection, steps, ""
```
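
For reference, here is how `parse_response` behaves on a well-formed response. The sample string below is made up purely to illustrate the expected tag format; it is not actual model output.

```python
# Hypothetical example response, shown only to illustrate the tag format
sample_response = (
    "<count>2</count>"
    "<step>55 + 44 - 20 = 79 students chose music or sports.</step>"
    "<step>100 - 79 = 21 students chose neither.</step>"
    "<reflection>The inclusion-exclusion count is consistent.</reflection>"
    "<answer>21</answer>"
)

answer, reflection, steps, clarification = parse_response(sample_response)
print(answer)      # -> 21
print(steps)       # -> list with the two <step> strings
print(reflection)  # -> The inclusion-exclusion count is consistent.
```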

## Usage

### Basic Text Generation

```python
import transformers
import torch

# Load the model
model_id = "VietnamAIHub/VietCoMath-o1-8B"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)


# Example mathematical word problem (in Vietnamese):
# "100 students were admitted to university. Of these, 55 chose music,
# 44 chose sports, and 20 chose both. How many students chose neither
# music nor sports?"

problem = "Có 100 sinh viên đỗ đại học. Trong số đó, có 55 sinh viên chọn âm nhạc, 44 sinh viên chọn thể thao, và 20 sinh viên chọn cả 2. Hỏi có bao nhiêu sinh viên không chọn âm nhạc, cũng không chọn thể thao?"

# Prepare messages
messages = [
    {"role": "system", "content": ""},
    {"role": "user", "content": f"{problem}"},
]

# Define terminators
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

# Generate text
outputs = pipeline(
    messages,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Extract the generated assistant message from the pipeline output
generated_text = outputs[0]["generated_text"][-1]["content"]

answer, reflection, steps, clarification = parse_response(generated_text)

print(clarification)
print("------------Internal Thinking-------------")
print(steps)
print(reflection)
print("------------End of Internal Thinking-------------\n")

print("------------Final Answer-------------")
print(answer)
print("------------End of Answer-------------")

## Limitations
- The model is small scale and may fail on very difficult problems; please check its results (see the sketch below for one simple way to do so).
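
As a hedged sketch of such a check (not part of the original card): when `parse_response` returns an empty answer, the required tags are missing, so you can simply retry generation a few times. The `generate_with_retry` helper below is hypothetical and reuses the `pipeline` object, `terminators`, and `parse_response` defined above.

```python
def generate_with_retry(pipe, messages, terminators, max_attempts=3):
    """Retry generation until the response contains the expected tags."""
    answer, reflection, steps, clarification = "", "", [], ""
    for _ in range(max_attempts):
        outputs = pipe(
            messages,
            max_new_tokens=256,
            eos_token_id=terminators,
            do_sample=True,
            temperature=0.6,
            top_p=0.9,
        )
        response = outputs[0]["generated_text"][-1]["content"]
        answer, reflection, steps, clarification = parse_response(response)
        if answer:  # all required tags were found
            return answer, reflection, steps, clarification
    # Fall back to whatever the last attempt produced
    return answer, reflection, steps, clarification

answer, reflection, steps, clarification = generate_with_retry(
    pipeline, messages, terminators
)
```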


## License
The model is based on Llama 3 (8B).

## Citation

```bibtex
@misc{VietnamAIHub,
    author       = { {VietnamAIHub} },
    title        = { VietCoMath-o1-8B },
    year         = 2024,
    url          = { https://huggingface.co/VietnamAIHub/VietCoMath-o1-8B },
    doi          = { 10.57967/hf/3743 },
    publisher    = { Hugging Face }
}
```

## Collaboration & Contribution
You can connect directly with Trần Nhiệm at [email protected], or chat via LinkedIn, Facebook, X, or Zalo (+886 934 311 751).