Runtime error

Exit code: 1. Reason:

tokenizer_config.json: 100%|██████████| 3.13k/3.13k [00:00<00:00, 13.1MB/s]
tokenizer.json: 100%|██████████| 7.85M/7.85M [00:00<00:00, 83.2MB/s]
config.json: 100%|██████████| 1.73k/1.73k [00:00<00:00, 10.2MB/s]

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/transformers/dynamic_module_utils.py", line 652, in resolve_trust_remote_code
    answer = input(
EOFError: EOF when reading a line

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/user/app/app.py", line 7, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 526, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1024, in from_pretrained
    trust_remote_code = resolve_trust_remote_code(
  File "/usr/local/lib/python3.10/site-packages/transformers/dynamic_module_utils.py", line 665, in resolve_trust_remote_code
    raise ValueError(
ValueError: The repository for deepseek-ai/DeepSeek-V3 contains custom code which must be executed to correctly load the model. You can inspect the repository content at https://hf.co/deepseek-ai/DeepSeek-V3. Please pass the argument `trust_remote_code=True` to allow custom code to be run.

The repository for deepseek-ai/DeepSeek-V3 contains custom code which must be executed to correctly load the model. You can inspect the repository content at https://hf.co/deepseek-ai/DeepSeek-V3. You can avoid this prompt in future by passing the argument `trust_remote_code=True`. Do you wish to run the custom code? [y/N]
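What happened: the model repository ships custom modeling code, so `from_pretrained` tried to prompt interactively ("Do you wish to run the custom code? [y/N]"), and in a non-interactive container `input()` hit end-of-stream, raising `EOFError` and then the `ValueError`. The error message itself names the fix: pass `trust_remote_code=True`. A minimal sketch of the corrected load in `app.py` (the model ID comes from the traceback; the `load` helper name and the lazy import are my own choices, not from the original code):

```python
MODEL_ID = "deepseek-ai/DeepSeek-V3"  # model named in the traceback


def load(model_id: str = MODEL_ID):
    """Load tokenizer and model, allowing the repository's custom code
    to run without an interactive [y/N] prompt (which would raise
    EOFError in a container with no stdin)."""
    # Import inside the function so the snippet can be inspected even
    # where transformers is not installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
    return tokenizer, model


if __name__ == "__main__":
    tokenizer, model = load()
```

Note that `trust_remote_code=True` executes Python code from the model repository, so it should only be set after inspecting the repository contents (as the error message suggests, at https://hf.co/deepseek-ai/DeepSeek-V3).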
