King-Harry committed · Commit 7ce6937 · verified · 1 Parent(s): 60a46d2

Update README.md

---
library_name: transformers
license: apache-2.0
datasets:
- HarryRoy/ninja-redact-2-large
language:
- en
---

# Model Card: Ninja-Masker-2-PII-Redaction

## Model Overview

- **Model Name:** Ninja-Masker-2-PII-Redaction
- **Model Type:** Language Model for PII Redaction
- **License:** Apache 2.0
- **Model Creator:** King Harry (Roy)
- **Model Repository:** [Hugging Face Hub - Ninja-Masker-2-PII-Redaction](https://huggingface.co/King-Harry/Ninja-Masker-2-PII-Redaction)

### Model Description

Ninja-Masker-2-PII-Redaction is a fine-tuned language model designed to identify and redact Personally Identifiable Information (PII) from text data. The model is based on the Meta-Llama-3.1-8B architecture and has been fine-tuned on a dataset of over 30,000 input-output pairs to perform accurate PII masking using a set of predefined tags.

### Preprocessing

The training data was formatted using an Alpaca-style prompt structure: each training example pairs an instruction with input context, and the model was trained to generate the corresponding redacted output. Training covered a variety of PII types, including but not limited to names, email addresses, phone numbers, and credit card information.
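The exact prompt template and tag vocabulary are not reproduced in this card, so the following is only a minimal sketch of what one Alpaca-style training pair could look like; the instruction wording and the `[FULLNAME]`/`[EMAIL]` tags are assumptions, not the published format.

```python
# A minimal sketch of an Alpaca-style training pair for PII redaction.
# The instruction wording and the [FULLNAME]/[EMAIL] tags are assumptions,
# not the card's published template.
ALPACA_TEMPLATE = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input}

### Response:
{output}"""

example = ALPACA_TEMPLATE.format(
    instruction="Replace all personally identifiable information with the matching redaction tag.",
    input="Contact Kendra Harvey at [email protected].",
    output="Contact [FULLNAME] at [EMAIL].",
)
print(example)
```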

### Quantization and Optimization

To optimize performance and reduce memory usage, the model was fine-tuned using 4-bit quantization. Additional optimizations included the use of Flash Attention (Xformers) and gradient checkpointing, which allowed for efficient training and inference.
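For inference under similar memory constraints, one option is a 4-bit load through `bitsandbytes`; this is a minimal sketch using common NF4 defaults, not settings documented by the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization; these values are common defaults, not the card's
# documented settings. Requires the `bitsandbytes` and `accelerate` packages.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "King-Harry/Ninja-Masker-2-PII-Redaction",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("King-Harry/Ninja-Masker-2-PII-Redaction")
```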

### Training Details

- **Dataset:** HarryRoy/Ninja-Redact-2-large (custom PII redaction dataset)
- **Training Environment:** Google Colab, NVIDIA A100 GPU
- **Training Framework:** PyTorch with Hugging Face Transformers and Unsloth
- **Training Configuration** (a sketch of this setup appears after the list):
  - Max sequence length: 2048 tokens
  - Batch size: 8
  - Gradient accumulation steps: 4
  - Learning rate: 1e-5
  - Epochs: 1 (500 steps)
  - Optimizer: AdamW 8-bit
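A minimal sketch of that configuration with Unsloth and TRL follows. The base checkpoint name, LoRA settings, and dataset text field are assumptions (the full training script is not published), and the argument names follow older TRL releases.

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the base model in 4-bit; the checkpoint name is an assumption.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach a LoRA adapter; rank and target modules are assumptions.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("HarryRoy/ninja-redact-2-large", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # field name is an assumption
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=8,
        gradient_accumulation_steps=4,
        learning_rate=1e-5,
        max_steps=500,          # 1 epoch, per the card
        optim="adamw_8bit",
        output_dir="outputs",
    ),
)
trainer.train()
```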

### Model Performance

The model was evaluated based on its ability to accurately redact PII from text while maintaining the original context and meaning. The fine-tuning process resulted in a model that effectively identifies and replaces PII with the appropriate tags in various text scenarios.

### Use Cases

- **Data Anonymization:** Useful for redacting PII in datasets before sharing or analysis.
- **Email and Document Redaction:** Can be integrated into email processing systems or document management workflows to automatically redact sensitive information.
- **Customer Support:** Enhances customer support systems by ensuring PII is automatically redacted in customer communications.

### Limitations

- **Tag Set:** The model relies on a predefined set of tags for redaction and may not recognize PII types outside of this set.
- **Context Dependence:** While the model performs well in most scenarios, its accuracy may decrease on highly complex or ambiguous input.
- **Inference Speed:** Inference speed varies with the available hardware, especially for long sequences.

### Ethical Considerations

The model is designed for responsible data management, ensuring that sensitive information is properly anonymized. However, users should be aware of its limitations and should not rely solely on automated redaction for highly sensitive data.

### How to Use

Load the model from the Hugging Face Hub and integrate it into your Python or API-based applications. Below is an example of how to load and use the model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer from the Hugging Face Hub.
model_name = "King-Harry/Ninja-Masker-2-PII-Redaction"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Text containing PII to be masked. If outputs come back unredacted, the
# model may expect the Alpaca-style prompt format described under
# "Preprocessing".
input_text = "Write an email to Kendra Harvey at [email protected] summarizing the key findings from a recent cognitive therapy conference."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)

# Decode the generated tokens back into text.
redacted_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(redacted_text)
```
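Because training used an Alpaca-style prompt (see Preprocessing), redaction quality may improve when the input is wrapped in that template at inference time. A minimal self-contained sketch, with the instruction wording assumed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "King-Harry/Ninja-Masker-2-PII-Redaction"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Hypothetical Alpaca-style wrapper (see "Preprocessing"); the instruction
# wording is an assumption, not the card's published template.
prompt = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\nReplace all personally identifiable information "
    "with the matching redaction tag.\n\n"
    "### Input:\nContact Kendra Harvey at [email protected].\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```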

### Citation

If you use this model, please consider citing the model repository:

```bibtex
@misc{ninja_masker_2024,
  author = {King Harry (Roy)},
  title  = {Ninja-Masker-2-PII-Redaction},
  year   = {2024},
  url    = {https://huggingface.co/King-Harry/Ninja-Masker-2-PII-Redaction},
}
```