[Solution: update to 0.2.15] unknown model architecture.
I am unable to load the model.
```json
{
  "cause": "llama.cpp error: 'unknown model architecture: 'gemma''",
  "errorData": {
    "n_ctx": 4096,
    "n_batch": 512,
    "n_gpu_layers": 21
  },
  "data": {
    "memory": {
      "ram_capacity": "15.69 GB",
      "ram_unused": "6.44 GB"
    },
    "gpu": {
      "type": "NvidiaCuda",
      "vram_recommended_capacity": "6.00 GB",
      "vram_unused": "4.96 GB"
    }
  }
}
```
If you’re running into this error, it means you’re not yet on LM Studio 0.2.15. Get it from: https://lmstudio.ai
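If you want to confirm that the failure is the llama.cpp build rather than your hardware, here is a minimal sketch that attempts the same load directly with llama-cpp-python, reusing the `n_ctx`/`n_batch`/`n_gpu_layers` values from the report above. The model filename is a placeholder for whatever Gemma GGUF you downloaded:

```python
# Minimal repro sketch, assuming llama-cpp-python is installed
# (pip install llama-cpp-python). The filename below is hypothetical;
# point it at your actual Gemma GGUF file.
from llama_cpp import Llama

try:
    llm = Llama(
        model_path="C:/Users/Win/Downloads/Gemma/gemma-7b-it.gguf",  # placeholder path
        n_ctx=4096,        # same settings as in the error report above
        n_batch=512,
        n_gpu_layers=21,
    )
    print("Loaded fine: this llama.cpp build knows the 'gemma' architecture.")
except Exception as exc:
    # Builds of llama.cpp that predate Gemma support fail at load time with
    # "unknown model architecture: 'gemma'"; updating the app (and with it
    # the bundled llama.cpp) is the fix.
    print(f"Load failed: {exc}")
```

An old build fails at load time exactly as in the report; from 0.2.15 onward the bundled llama.cpp recognizes the architecture.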
Issue with loading Gemma, using the latest version of LM Studio (0.2.16).
```json
{
  "cause": "(Exit code: 0). Some model operation failed. Try a different model and/or config.",
  "suggestion": "",
  "data": {
    "memory": {
      "ram_capacity": "127.12 GB",
      "ram_unused": "113.71 GB"
    },
    "gpu": {
      "type": "NvidiaCuda",
      "vram_recommended_capacity": "12.00 GB",
      "vram_unused": "11.01 GB"
    },
    "os": {
      "platform": "win32",
      "version": "10.0.22631",
      "supports_avx2": true
    },
    "app": {
      "version": "0.2.16",
      "downloadsDir": "C:\\Users\\Win\\Downloads\\Gemma"
    },
    "model": {}
  },
  "title": "Model error"
}
```
```json
{
  "cause": "(Exit code: -1073740791). Unknown error. Try a different model and/or config.",
  "suggestion": "",
  "data": {
    "memory": {
      "ram_capacity": "127.12 GB",
      "ram_unused": "113.55 GB"
    },
    "gpu": {
      "type": "NvidiaCuda",
      "vram_recommended_capacity": "12.00 GB",
      "vram_unused": "11.01 GB"
    },
    "os": {
      "platform": "win32",
      "version": "10.0.22631",
      "supports_avx2": true
    },
    "app": {
      "version": "0.2.16",
      "downloadsDir": "C:\\Users\\Win\\Downloads\\Gemma"
    },
    "model": {}
  },
  "title": "Model error"
}
```
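A side note on that `(Exit code: -1073740791)`: on Windows, a negative exit code like this is a 32-bit NTSTATUS value printed as a signed integer, and a two-line check decodes it:

```python
# Decode the signed Windows exit code into its NTSTATUS hex form.
exit_code = -1073740791
print(hex(exit_code & 0xFFFFFFFF))  # -> 0xc0000409 (STATUS_STACK_BUFFER_OVERRUN)
```

0xC0000409 is STATUS_STACK_BUFFER_OVERRUN, the Windows fail-fast code, which means the native backend crashed outright rather than returning an ordinary model error.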
![image.png](https://cdn-uploads.huggingface.co/production/uploads/65f295422b802e871319c039/6CrrNeGvC-egVB-Q7uaSh.png)
If you’re running into this error, it means you’re not yet on LM Studio 0.2.15. Get it from: https://lmstudio.ai
I am getting the same error, and I have just installed the latest version, 0.2.18.
This also appears to be a bug in version 0.2.21.
@matheus-alegre can you please share a screenshot of the error you are getting? If possible, include the whole app screen in the image.
Sure! I will also share a temporary "solution" that people facing the same problem may use.
```json
{
  "cause": "(Exit code: 0). Some model operation failed. Try a different model and/or config.",
  "suggestion": "",
  "data": {
    "memory": {
      "ram_capacity": "15.29 GB",
      "ram_unused": "6.41 GB"
    },
    "gpu": {
      "type": "NvidiaCuda",
      "vram_recommended_capacity": "12.00 GB",
      "vram_unused": "11.01 GB"
    },
    "os": {
      "platform": "win32",
      "version": "10.0.22631",
      "supports_avx2": true
    },
    "app": {
      "version": "0.2.21",
      "downloadsDir": "C:\\Users\\matheus.alegre.CDSSERVER\\.cache\\lm-studio\\models"
    },
    "model": {}
  },
  "title": "Error loading model."
}
```
Alternative solution that may work for some computers:
The Nvidia CUDA backend in this new version of LM Studio isn't compatible with some computers.
You can use your integrated video card until they fix it or provide another solution. Here's how:
Go to "Hardware Settings" > "Detected GPU Type," right-click on your video card, and choose the integrated one if you have it.
@matheus-alegre
Hey, I just want to ask: where can I find the "Hardware Settings"? LM Studio's interface is weirdly hard to understand. Thank you.
```json
{
  "cause": "(Exit code: -1073740791). Unknown error. Try a different model and/or config.",
  "suggestion": "",
  "data": {
    "memory": {
      "ram_capacity": "7.71 GB",
      "ram_unused": "2.57 GB"
    },
    "gpu": {
      "type": "NvidiaCuda",
      "vram_recommended_capacity": "6.00 GB",
      "vram_unused": "5.04 GB"
    },
    "os": {
      "platform": "win32",
      "version": "10.0.22631",
      "supports_avx2": true
    },
    "app": {
      "version": "0.2.21",
      "downloadsDir": "C:\\Users\\wangz\\.cache\\lm-studio\\models"
    },
    "model": {}
  },
  "title": "Error loading model."
}
```
> @matheus-alegre
> Hey, I just want to ask: where can I find the "Hardware Settings"? LM Studio's interface is weirdly hard to understand. Thank you.
On the left side of the screen, go to Local Server and click Stop Server if you started it. Then, before you load the model, open Advanced Configuration on the right side of the screen and you will find it there.
Version 0.2.23: I was having the same issue, and the solution above worked for me.
Local Server > Advanced Configuration > GPU Settings > Detected GPU Type: right-click to change this from Nvidia CUDA to OpenCL.
Solved it by changing the Detected GPU Type (or GPU Backend Type) to OpenCL llama.cpp. Just right-click on it and manually select that option instead of CUDA.
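Once the backend is switched and the model finally loads, one way to sanity-check it end to end is through LM Studio's OpenAI-compatible local server (default port 1234). A small sketch, assuming the `openai` Python package is installed and the server has been started from the Local Server tab:

```python
# Smoke test against LM Studio's OpenAI-compatible local server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
resp = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio serves whichever model is loaded
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(resp.choices[0].message.content)
```

If this prints a normal reply, the OpenCL backend is loading and serving the model correctly.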