---
license: mit
base_model:
- jinaai/ReaderLM-v2
tags:
- mlx
---

# mlx-community/jinaai-ReaderLM-v2

The model [mlx-community/jinaai-ReaderLM-v2](https://huggingface.co/mlx-community/jinaai-ReaderLM-v2) was converted to MLX format from [jinaai/ReaderLM-v2](https://huggingface.co/jinaai/ReaderLM-v2).

---

## Use with mlx-lm

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the converted model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("mlx-community/jinaai-ReaderLM-v2")

prompt = "hello"

# Apply the chat template if the tokenizer provides one
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```

---

## Use with MLX Model Manager

Add the package dependency in Xcode: https://github.com/kunal732/MLX-Model-Manager

```swift
import SwiftUI
import MLXModelManager

struct ContentView: View {
    @StateObject var JinaManager = ModelManager(modelPath: "mlx-community/jinaai-ReaderLM-v2")

    var body: some View {
        VStack {
            Button("Answer prompt") {
                Task {
                    // Load the model
                    try await JinaManager.loadModel()
                    // Run inference
                    await JinaManager.generate(
                        prompt: """
                        convert to markdown:
                        first paragraph.
                        """
                    )
                }
            }
            // Model output
            Text(JinaManager.output)
        }
        .padding()
    }
}
```
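ReaderLM-v2 is an HTML-to-Markdown conversion model, so prompts typically embed the raw HTML to be converted, as in the Swift example above. A minimal sketch of building such a prompt in Python (the `html_to_prompt` helper and the exact instruction wording are illustrative assumptions, not an official API of the model):

```python
def html_to_prompt(html: str) -> str:
    """Wrap raw HTML in a conversion instruction, mirroring the prompt
    used in the examples above.

    Note: the instruction wording here is an assumption for illustration;
    adapt it to the prompt format you actually use with ReaderLM-v2.
    """
    return f"convert to markdown:\n{html}\n"

# Build the prompt string that would then be passed to generate(...)
prompt = html_to_prompt("<p>first paragraph.</p>")
print(prompt)
```

The resulting string can be passed through `tokenizer.apply_chat_template` and `generate` exactly as in the Python example above.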