rootxhacker
commited on
Update README.md
Browse files
README.md
CHANGED
@@ -1,202 +1,130 @@
|
|
1 |
-
|
2 |
-
base_model: mistralai/Mistral-7B-Instruct-v0.2
|
3 |
-
library_name: peft
|
4 |
-
---
|
5 |
|
6 |
-
|
7 |
|
8 |
-
|
9 |
|
|
|
10 |
|
|
|
|
|
|
|
|
|
|
|
11 |
|
12 |
-
##
|
13 |
|
14 |
-
|
15 |
|
16 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
17 |
|
|
|
18 |
|
|
|
19 |
|
20 |
-
-
|
21 |
-
- **Funded by [optional]:** [More Information Needed]
|
22 |
-
- **Shared by [optional]:** [More Information Needed]
|
23 |
-
- **Model type:** [More Information Needed]
|
24 |
-
- **Language(s) (NLP):** [More Information Needed]
|
25 |
-
- **License:** [More Information Needed]
|
26 |
-
- **Finetuned from model [optional]:** [More Information Needed]
|
27 |
|
28 |
-
###
|
29 |
|
30 |
-
|
31 |
|
32 |
-
|
33 |
-
- **Paper [optional]:** [More Information Needed]
|
34 |
-
- **Demo [optional]:** [More Information Needed]
|
35 |
|
36 |
-
|
37 |
|
38 |
-
|
39 |
|
40 |
-
|
41 |
|
42 |
-
|
43 |
|
44 |
-
|
45 |
|
46 |
-
|
|
|
|
|
|
|
47 |
|
48 |
-
|
|
|
|
|
|
|
|
|
49 |
|
50 |
-
|
|
|
51 |
|
52 |
-
|
|
|
|
|
|
|
53 |
|
54 |
-
|
|
|
|
|
|
|
|
|
55 |
|
56 |
-
|
|
|
57 |
|
58 |
-
|
|
|
|
|
|
|
59 |
|
60 |
-
|
61 |
|
62 |
-
|
63 |
|
64 |
-
|
65 |
|
66 |
-
|
|
|
|
|
|
|
67 |
|
68 |
-
|
69 |
|
70 |
-
|
71 |
|
72 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
73 |
|
74 |
-
|
75 |
|
76 |
-
|
77 |
|
78 |
-
|
|
|
79 |
|
80 |
-
|
|
|
|
|
81 |
|
82 |
-
|
83 |
|
84 |
-
|
|
|
|
|
|
|
|
|
|
|
85 |
|
86 |
-
|
87 |
|
88 |
-
|
89 |
|
90 |
-
[More Information Needed]
|
91 |
-
|
92 |
-
|
93 |
-
#### Training Hyperparameters
|
94 |
-
|
95 |
-
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
|
96 |
-
|
97 |
-
#### Speeds, Sizes, Times [optional]
|
98 |
-
|
99 |
-
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
|
100 |
-
|
101 |
-
[More Information Needed]
|
102 |
-
|
103 |
-
## Evaluation
|
104 |
-
|
105 |
-
<!-- This section describes the evaluation protocols and provides the results. -->
|
106 |
-
|
107 |
-
### Testing Data, Factors & Metrics
|
108 |
-
|
109 |
-
#### Testing Data
|
110 |
-
|
111 |
-
<!-- This should link to a Dataset Card if possible. -->
|
112 |
-
|
113 |
-
[More Information Needed]
|
114 |
-
|
115 |
-
#### Factors
|
116 |
-
|
117 |
-
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
|
118 |
-
|
119 |
-
[More Information Needed]
|
120 |
-
|
121 |
-
#### Metrics
|
122 |
-
|
123 |
-
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
|
124 |
-
|
125 |
-
[More Information Needed]
|
126 |
-
|
127 |
-
### Results
|
128 |
-
|
129 |
-
[More Information Needed]
|
130 |
-
|
131 |
-
#### Summary
|
132 |
-
|
133 |
-
|
134 |
-
|
135 |
-
## Model Examination [optional]
|
136 |
-
|
137 |
-
<!-- Relevant interpretability work for the model goes here -->
|
138 |
-
|
139 |
-
[More Information Needed]
|
140 |
-
|
141 |
-
## Environmental Impact
|
142 |
-
|
143 |
-
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
|
144 |
-
|
145 |
-
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
|
146 |
-
|
147 |
-
- **Hardware Type:** [More Information Needed]
|
148 |
-
- **Hours used:** [More Information Needed]
|
149 |
-
- **Cloud Provider:** [More Information Needed]
|
150 |
-
- **Compute Region:** [More Information Needed]
|
151 |
-
- **Carbon Emitted:** [More Information Needed]
|
152 |
-
|
153 |
-
## Technical Specifications [optional]
|
154 |
-
|
155 |
-
### Model Architecture and Objective
|
156 |
-
|
157 |
-
[More Information Needed]
|
158 |
-
|
159 |
-
### Compute Infrastructure
|
160 |
-
|
161 |
-
[More Information Needed]
|
162 |
-
|
163 |
-
#### Hardware
|
164 |
-
|
165 |
-
[More Information Needed]
|
166 |
-
|
167 |
-
#### Software
|
168 |
-
|
169 |
-
[More Information Needed]
|
170 |
-
|
171 |
-
## Citation [optional]
|
172 |
-
|
173 |
-
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
|
174 |
-
|
175 |
-
**BibTeX:**
|
176 |
-
|
177 |
-
[More Information Needed]
|
178 |
-
|
179 |
-
**APA:**
|
180 |
-
|
181 |
-
[More Information Needed]
|
182 |
-
|
183 |
-
## Glossary [optional]
|
184 |
-
|
185 |
-
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
|
186 |
-
|
187 |
-
[More Information Needed]
|
188 |
-
|
189 |
-
## More Information [optional]
|
190 |
-
|
191 |
-
[More Information Needed]
|
192 |
-
|
193 |
-
## Model Card Authors [optional]
|
194 |
-
|
195 |
-
[More Information Needed]
|
196 |
-
|
197 |
-
## Model Card Contact
|
198 |
-
|
199 |
-
[More Information Needed]
|
200 |
-
### Framework versions
|
201 |
-
|
202 |
-
- PEFT 0.11.2.dev0
|
|
|
1 |
+
# CodeAstra-7b: State-of-the-Art Vulnerability Detection Model 🔍🛡️
|
|
|
|
|
|
|
2 |
|
3 |
+
## Model Description
|
4 |
|
5 |
+
CodeAstra-7b is a state-of-the-art language model fine-tuned for vulnerability detection in multiple programming languages. Based on the powerful Mistral-7B-Instruct-v0.2 model, CodeAstra-7b has been specifically trained to identify potential security vulnerabilities across a wide range of popular programming languages.
|
6 |
|
7 |
+
### Key Features
|
8 |
|
9 |
+
- 🌐 **Multi-language Support**: Detects vulnerabilities in Go, Python, C, C++, Fortran, Ruby, Java, Kotlin, C#, PHP, Swift, JavaScript, and TypeScript.
|
10 |
+
- 🏆 **State-of-the-Art Performance**: Achieves cutting-edge results in vulnerability detection tasks.
|
11 |
+
- 📊 **Custom Dataset**: Trained on a proprietary dataset curated for comprehensive vulnerability detection.
|
12 |
+
- 🖥️ **Large-scale Training**: Utilized A100 GPUs for efficient and powerful training.
|
13 |
+
- 🔬 **Code Quality Analysis**: Identifies code quality issues when no vulnerabilities are found.
|
14 |
|
15 |
+
## Performance Comparison 📊
|
16 |
|
17 |
+
CodeAstra-7b significantly outperforms existing models in vulnerability detection accuracy. Here's a comparison table:
|
18 |
|
19 |
+
| Model | Accuracy (%) |
|
20 |
+
|-------------|--------------|
|
21 |
+
| CodeAstra-7b| 83.00 |
|
22 |
+
| codebert-base-finetuned-detect-insecure-code | 65.30 |
|
23 |
+
| CodeBERT | 62.08 |
|
24 |
+
| RoBERTa | 61.05 |
|
25 |
+
| TextCNN | 60.69 |
|
26 |
+
| BiLSTM | 59.37 |
|
27 |
|
28 |
+
As shown in the table, CodeAstra-7b achieves an impressive 83% accuracy, substantially surpassing other state-of-the-art models in the field of vulnerability detection.
|
29 |
|
30 |
+
## Intended Use
|
31 |
|
32 |
+
CodeAstra-7b is designed to assist developers, security researchers, and code auditors in identifying potential security vulnerabilities in source code. It can be integrated into development workflows, code review processes, or used as a standalone tool for code analysis.
|
|
|
|
|
|
|
|
|
|
|
|
|
33 |
|
34 |
+
### Code Quality Detection
|
35 |
|
36 |
+
In addition to vulnerability detection, CodeAstra-7b offers an extra layer of value by identifying code quality issues when no security vulnerabilities are found. This feature helps developers improve their code even when it's free from critical security flaws, promoting better coding practices and maintainability.
|
37 |
|
38 |
+
### Multiple Vulnerability Scenarios
|
|
|
|
|
39 |
|
40 |
+
It's important to note that while CodeAstra-7b excels at finding security issues in most cases, its performance may vary when multiple vulnerabilities are present in the same code snippet. In scenarios where two or three vulnerabilities coexist, the model might not always identify all of them correctly. Users should be aware of this limitation and consider using the model as part of a broader, multi-faceted security review process.
|
41 |
|
42 |
+
## Training 🏋️♂️
|
43 |
|
44 |
+
CodeAstra-7b was fine-tuned from the Mistral-7B-Instruct-v0.2 base model using a custom dataset specifically compiled for vulnerability detection across multiple programming languages. The training process leveraged A100 GPUs to ensure optimal performance and efficiency.
|
45 |
|
46 |
+
## Usage 💻
|
47 |
|
48 |
+
CodeAstra-7b was trained using PEFT (Parameter-Efficient Fine-Tuning). To use the model for vulnerability detection and code quality analysis, you can leverage the Hugging Face Transformers library along with PEFT. Here's how to get started:
|
49 |
|
50 |
+
```python
|
51 |
+
import torch
|
52 |
+
from peft import PeftModel, PeftConfig
|
53 |
+
from transformers import AutoModelForCausalLM, AutoTokenizer
|
54 |
|
55 |
+
# Load the model and tokenizer
|
56 |
+
peft_model_id = "rootxhacker/CodeAstra-7B"
|
57 |
+
config = PeftConfig.from_pretrained(peft_model_id)
|
58 |
+
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_4bit=True, device_map='auto')
|
59 |
+
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
|
60 |
|
61 |
+
# Load the Lora model
|
62 |
+
model = PeftModel.from_pretrained(model, peft_model_id)
|
63 |
|
64 |
+
def get_completion(query, model, tokenizer):
|
65 |
+
inputs = tokenizer(query, return_tensors="pt")
|
66 |
+
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
|
67 |
+
return tokenizer.decode(outputs[0], skip_special_tokens=True)
|
68 |
|
69 |
+
# Example usage
|
70 |
+
code_to_analyze = """
|
71 |
+
def user_input():
|
72 |
+
name = input("Enter your name: ")
|
73 |
+
print("Hello, " + name + "!")
|
74 |
|
75 |
+
user_input()
|
76 |
+
"""
|
77 |
|
78 |
+
query = f"Analyze this code for vulnerabilities and quality issues:\n{code_to_analyze}"
|
79 |
+
result = get_completion(query, model, tokenizer)
|
80 |
+
print(result)
|
81 |
+
```
|
82 |
|
83 |
+
This script loads the CodeAstra-7b model, tokenizer, and provides a function to generate completions. You can use this setup to analyze code for vulnerabilities and quality issues.
|
84 |
|
85 |
+
## Limitations ⚠️
|
86 |
|
87 |
+
While CodeAstra-7b represents a significant advancement in automated vulnerability detection and code quality analysis, it's important to note that:
|
88 |
|
89 |
+
1. The model may not catch all vulnerabilities or code quality issues and should be used as part of a comprehensive security and code review strategy.
|
90 |
+
2. In cases where multiple vulnerabilities (two or three) are present in the same code snippet, the model might not identify all of them correctly.
|
91 |
+
3. False positives are possible, and results should be verified by human experts.
|
92 |
+
4. The model's performance may vary depending on the complexity and context of the code being analyzed.
|
93 |
|
94 |
+
## Citation 📜
|
95 |
|
96 |
+
If you use CodeAstra-7b in your research or project, please cite it as follows:
|
97 |
|
98 |
+
```
|
99 |
+
@software{CodeAstra-7b,
|
100 |
+
author = {Harish Santhanalakshmi Ganesan},
|
101 |
+
title = {CodeAstra-7b: State-of-the-Art Vulnerability Detection Model},
|
102 |
+
year = {2024},
|
103 |
+
howpublished = {\url{https://huggingface.co/your-username/CodeAstra-7b}}
|
104 |
+
}
|
105 |
+
```
|
106 |
|
107 |
+
## License 📄
|
108 |
|
109 |
+
CodeAstra-7b is released under the Apache License 2.0.
|
110 |
|
111 |
+
```
|
112 |
+
Copyright 2024 [Harish Santhanalakshmi Ganesan]
|
113 |
|
114 |
+
Licensed under the Apache License, Version 2.0 (the "License");
|
115 |
+
you may not use this file except in compliance with the License.
|
116 |
+
You may obtain a copy of the License at
|
117 |
|
118 |
+
http://www.apache.org/licenses/LICENSE-2.0
|
119 |
|
120 |
+
Unless required by applicable law or agreed to in writing, software
|
121 |
+
distributed under the License is distributed on an "AS IS" BASIS,
|
122 |
+
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
123 |
+
See the License for the specific language governing permissions and
|
124 |
+
limitations under the License.
|
125 |
+
```
|
126 |
|
127 |
+
## Acknowledgements 🙏
|
128 |
|
129 |
+
We would like to thank the Mistral AI team for their excellent base model, which served as the foundation for CodeAstra-7b.
|
130 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|