Update README.md
Browse files
README.md
CHANGED
@@ -300,6 +300,15 @@ Our cyber attack uplift study investigated whether LLMs can enhance human capabi
|
|
300 |
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention.
|
301 |
|
302 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
303 |
**Disclaimers**
|
304 |
|
305 |
[osllm.ai](https://osllm.ai) is not the creator, originator, or owner of any Model featured in the Community Model Program.
|
@@ -314,9 +323,3 @@ Model will meet your requirements, be secure, uninterrupted, or available at any
|
|
314 |
error-free, virus-free, or that any errors will be corrected, or otherwise. You will be solely responsible for
|
315 |
any damage resulting from your use of or access to the Community Models, your downloading of any Community
|
316 |
Model, or use of any other Community Model provided by or through [osllm.ai](https://osllm.ai).
|
317 |
-
|
318 |
-
## Ethical Considerations and Limitations
|
319 |
-
|
320 |
-
The core values of Llama 3.3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
|
321 |
-
|
322 |
-
But Llama 3.3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.3 model, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
|
|
|
300 |
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention.
|
301 |
|
302 |
|
303 |
+
## Ethical Considerations and Limitations
|
304 |
+
|
305 |
+
The core values of Llama 3.3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
|
306 |
+
|
307 |
+
But Llama 3.3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.3 model, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
|
308 |
+
|
309 |
+
|
310 |
+
|
311 |
+
|
312 |
**Disclaimers**
|
313 |
|
314 |
[osllm.ai](https://osllm.ai) is not the creator, originator, or owner of any Model featured in the Community Model Program.
|
|
|
323 |
error-free, virus-free, or that any errors will be corrected, or otherwise. You will be solely responsible for
|
324 |
any damage resulting from your use of or access to the Community Models, your downloading of any Community
|
325 |
Model, or use of any other Community Model provided by or through [osllm.ai](https://osllm.ai).
|
|
|
|
|
|
|
|
|
|
|
|