---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- code
- vulnerability
- cybersecurity
- research
pretty_name: CVE-LLm-Dataset
data_source: Custom data collected from the CVE database
data_formats: JSONL
size_categories:
- n<1K
---

# CVE-llm_dataset

This dataset is intended for training an LLM on strictly CVE-focused inputs and outputs.

## Data extraction:

For the data extraction, I first downloaded the CVE database from the NVD lists and then loaded it with `cve_dataset_2.py` and `cve_dataset.py`. The two scripts produce different datasets: one for llama and the other for OpenAI GPT.

The CVE JSON files are mapped in this format:

```
cves:
|
├─1999
| ├─0xxx
| | ├─CVE-1999-0001.json
| | ├─....
| | └─CVE-1999-0999.json
| └─1xxx
|   ├─CVE-1999-1000.json
|   ├─....
|   └─CVE-1999-1598.json
└─2023
```

The programs traverse these folders, extract the data from the files, and arrange it into usable formats for the fine-tuning process.
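The traversal can be sketched roughly as follows, assuming the files follow the CVE JSON 5.0 record format; the exact keys read by `cve_dataset.py` and `cve_dataset_2.py` may differ:

```python
import json
from pathlib import Path


def iter_cve_records(root="cves"):
    """Walk the cves/<year>/<xxx>/ tree and yield the fields used for fine-tuning."""
    for path in sorted(Path(root).rglob("CVE-*.json")):
        with open(path, encoding="utf-8") as f:
            record = json.load(f)
        # Key names below follow the CVE JSON 5.0 schema; adjust them if your dump differs.
        meta = record.get("cveMetadata", {})
        cna = record.get("containers", {}).get("cna", {})
        descriptions = cna.get("descriptions") or [{}]
        yield {
            "cve_id": meta.get("cveId", path.stem),
            "description": descriptions[0].get("value", ""),
            "references": cna.get("references", []),
            "cve_state": meta.get("state", ""),
        }
```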
## llama2 and OpenAI model datasets:

The llama2 fine-tuning dataset follows this format:

```json
{
  "instruction": "Explain CVE-1999-0001",
  "input": "Explain the vulnerability: CVE-1999-0001",
  "output": "ip_input.c in BSD-derived TCP/IP implementations allows remote attackers to cause a denial of service (crash or hang) via crafted packets.\nAffected Products: n/a\nReferences: [{'tags': ['x_refsource_CONFIRM'], 'url': 'http://www.openbsd.org/errata23.html#tcpfix'}, {'name': '5707', 'tags': ['vdb-entry', 'x_refsource_OSVDB'], 'url': 'http://www.osvdb.org/5707'}]\nCVE State: PUBLISHED"
}
```
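Given a `cve` dict with the extracted fields (`cve_id`, `description`, `references`, `cve_state`), one llama2-style record could be assembled roughly like this. This is only a sketch, not the exact logic of the dataset scripts, and the real output line also lists affected products:

```python
def to_llama_record(cve):
    """Build one instruction/input/output example from the extracted CVE fields."""
    output = (
        f"{cve['description']}\n"
        f"References: {cve['references']}\n"
        f"CVE State: {cve['cve_state']}"
    )
    return {
        "instruction": f"Explain {cve['cve_id']}",
        "input": f"Explain the vulnerability: {cve['cve_id']}",
        "output": output,
    }
```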
The OpenAI fine-tuning dataset follows this format, where `cve_id`, `description`, `references`, and `cve_state` are placeholders filled in from each CVE record:

```json
{
  "messages": [
    {"role": "system", "content": "CVE Vulnerability Information"},
    {"role": "user", "content": f"Explain the CVE ID: What does the identifier {cve_id} mean in the context of cybersecurity?"},
    {"role": "assistant", "content": cve_id}
  ]
},
{
  "messages": [
    {"role": "system", "content": "CVE Vulnerability Information"},
    {"role": "user", "content": f"Describe the vulnerability: Provide detailed information about the vulnerability for {cve_id}."},
    {"role": "assistant", "content": description}
  ]
},
{
  "messages": [
    {"role": "system", "content": "CVE Vulnerability Information"},
    {"role": "user", "content": f"Provide references for further information: Can you share references or external links related to {cve_id}?"},
    {"role": "assistant", "content": references}
  ]
},
{
  "messages": [
    {"role": "system", "content": "CVE Vulnerability Information"},
    {"role": "user", "content": f"Explain the state of this CVE: What is the publication or resolution state of {cve_id}?"},
    {"role": "assistant", "content": cve_state}
  ]
}
```
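The four chat examples per CVE could be generated along these lines. Again, this is only a sketch built from a `cve` dict with the extracted fields; the question wording simply mirrors the examples above:

```python
def to_openai_records(cve):
    """Yield the four chat-format examples shown above for a single CVE."""
    system = {"role": "system", "content": "CVE Vulnerability Information"}
    pairs = [
        (f"Explain the CVE ID: What does the identifier {cve['cve_id']} mean "
         f"in the context of cybersecurity?", cve["cve_id"]),
        (f"Describe the vulnerability: Provide detailed information about the "
         f"vulnerability for {cve['cve_id']}.", cve["description"]),
        (f"Provide references for further information: Can you share references "
         f"or external links related to {cve['cve_id']}?", str(cve["references"])),
        (f"Explain the state of this CVE: What is the publication or resolution "
         f"state of {cve['cve_id']}?", cve["cve_state"]),
    ]
    for question, answer in pairs:
        yield {
            "messages": [
                system,
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]
        }
```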
The instruction tells the model what to do with the data provided. For example, we can instruct the model to take in user input, analyze it, and return an answer based on what the user asks. This is also where a `role` or a `persona` can be given to the model.

The input is the user's main query, i.e. the data the model must process. It is the key piece of information the model uses to produce an output.

The output defines the format and content of the answer the model should generate for the question asked.
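For illustration, the three fields might be combined into a single training prompt during fine-tuning. The template below is hypothetical and not necessarily the one used by the dataset scripts:

```python
def build_prompt(example):
    """Combine the three fields into one llama-style training prompt.

    The layout is only illustrative; the actual fine-tuning code may use a
    different template or special tokens.
    """
    return (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Input:\n{example['input']}\n\n"
        f"### Response:\n{example['output']}"
    )
```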