---
library_name: transformers
tags:
- trainer
datasets:
- xavierwoon/cestertest
- xavierwoon/cestereval
base_model:
- google-bert/bert-base-uncased
---

# Model Card for cesterrewards

Cesterrewards is a BERT model that predicts the code coverage of Libcester unit test cases.

## Model Details

### Model Description

- **Developed by:** Xavier Woon
- **Model type:** BERT (sequence classification)
- **Finetuned from model:** google-bert/bert-base-uncased

### Recommendations

Expanding the dataset would increase the model's accuracy and robustness, and improve its code coverage predictions on real-world code.

## How to Get Started with the Model

Use the code below to get started with the model.

```py
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

reward_name = "xavierwoon/cesterrewards"
reward_model = AutoModelForSequenceClassification.from_pretrained(reward_name)
tokenizer = AutoTokenizer.from_pretrained(reward_name)

# Replace the prompt with your own unit test case in Libcester format
prompt = """
CESTER_TEST(create_stack, test_instance, {
    struct Stack stack;
    initStack(&stack);
    cester_assert_equal(stack.top, -1);
})
"""

inputs = tokenizer(prompt, return_tensors="pt", padding=True, truncation=True, max_length=512)

# Put the model in evaluation mode
reward_model.eval()

# Perform inference to get the reward score
with torch.no_grad():
    outputs = reward_model(**inputs)
    reward_score = outputs.logits.item()  # extract the scalar value

print("Expected Code Coverage:", reward_score)
```

## Training Details

### Training Data

The training data was built from Data Structures and Algorithms (DSA) C programs generated with ChatGPT, together with corresponding Cester test cases that ChatGPT generated for each program. Each test case's measured code coverage was then added to the dataset under `score`.

### Training Procedure

1. Prompt GPT for sample DSA C code.
2. Prompt GPT for Libcester unit test cases targeting 100% code coverage.
3. Run the generated test cases, measure their code coverage, and record it in the dataset under `score` (see the sketch below).
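
The coverage measurement in step 3 can be automated with `gcc --coverage` and `gcov`. The sketch below is illustrative rather than the exact pipeline used for this model; the file names (`stack.c`, `test_stack.c`) and the output binary name are hypothetical placeholders.

```py
# Minimal sketch: compile a Libcester test with gcov instrumentation,
# run it, and parse the line-coverage percentage.
# Assumes gcc and gcov are on PATH; file names are placeholders.
import re
import subprocess

def measure_coverage(source_file: str, test_file: str) -> float:
    # --coverage enables -fprofile-arcs and -ftest-coverage
    subprocess.run(["gcc", "--coverage", "-I.", test_file, "-o", "test_bin"], check=True)
    # Running the binary writes .gcda profile data next to the objects
    subprocess.run(["./test_bin"], check=True)
    # gcov reports a line such as "Lines executed:92.31% of 26"
    result = subprocess.run(["gcov", source_file], capture_output=True, text=True, check=True)
    match = re.search(r"Lines executed:([\d.]+)%", result.stdout)
    return float(match.group(1)) if match else 0.0

score = measure_coverage("stack.c", "test_stack.c")
print(f"Measured line coverage: {score:.2f}%")
```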
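
The resulting rows can be inspected with the `datasets` library. This is a hedged sketch: the split name and column layout are assumptions, so check the dataset viewer on the Hub for the actual schema.

```py
from datasets import load_dataset

# The "train" split and the column names are assumptions; verify on the Hub
ds = load_dataset("xavierwoon/cestertest", split="train")
print(ds.column_names)  # expected to include the test case text and `score`
print(ds[0])            # one Cester test case with its coverage score
```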