🚀 InternLM3-8B-Instruct GGUF Models
This repository provides internlm3-8b-instruct in GGUF format, which can be run with llama.cpp, a popular open-source framework for LLM inference, on a wide range of hardware platforms, both locally and in the cloud. The repository offers the half-precision model as well as several low-bit quantized variants, including q5_0, q5_k_m, q6_k, and q8_0.
In the following sections, we walk through installation and model download, and then demonstrate model inference and service deployment with concrete examples.
📦 Installation
We recommend building llama.cpp from source. The following steps target the Linux CUDA platform; for installation on other platforms, please refer to the official guide.
- Step 1: create a conda environment and install cmake
conda create --name internlm3 python=3.10 -y
conda activate internlm3
pip install cmake
- Step 2: clone the source code and build the project
git clone --depth=1 https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j
All built targets can be found in the build/bin subdirectory.
In the following sections, we assume the working directory is the root directory of llama.cpp.
📥 Download Models
As mentioned in the introduction, this repository provides models at several computation precisions; you can download the one that fits your needs. For example, internlm3-8b-instruct.gguf can be downloaded as follows:
pip install huggingface-hub
huggingface-cli download internlm/internlm3-8b-instruct-gguf internlm3-8b-instruct.gguf --local-dir . --local-dir-use-symlinks False
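The quantized variants can be downloaded the same way by changing the filename. As an alternative to the CLI, the snippet below is a minimal Python sketch using the huggingface_hub library; the quantized filename internlm3-8b-instruct-q8_0.gguf is an assumption based on this repository's naming pattern, so check the repository's file list for the exact name.
from huggingface_hub import hf_hub_download

# Download the half-precision model (same file as the CLI command above).
hf_hub_download(
    repo_id="internlm/internlm3-8b-instruct-gguf",
    filename="internlm3-8b-instruct.gguf",
    local_dir=".",
)

# Hypothetical quantized variant; verify the exact filename in the repository.
hf_hub_download(
    repo_id="internlm/internlm3-8b-instruct-gguf",
    filename="internlm3-8b-instruct-q8_0.gguf",
    local_dir=".",
)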
💻 Usage Examples
Basic Usage
Inference
You can use llama-cli for inference. For a detailed explanation of llama-cli, please refer to this guide.
Chat Example
Here is an example that uses a thinking system prompt:
thinking_system_prompt="<|im_start|>system\nYou are an expert mathematician with extensive experience in mathematical competitions. You approach problems through systematic thinking and rigorous reasoning. When solving problems, follow these thought processes:\n## Deep Understanding\nTake time to fully comprehend the problem before attempting a solution. Consider:\n- What is the real question being asked?\n- What are the given conditions and what do they tell us?\n- Are there any special restrictions or assumptions?\n- Which information is crucial and which is supplementary?\n## Multi-angle Analysis\nBefore solving, conduct thorough analysis:\n- What mathematical concepts and properties are involved?\n- Can you recall similar classic problems or solution methods?\n- Would diagrams or tables help visualize the problem?\n- Are there special cases that need separate consideration?\n## Systematic Thinking\nPlan your solution path:\n- Propose multiple possible approaches\n- Analyze the feasibility and merits of each method\n- Choose the most appropriate method and explain why\n- Break complex problems into smaller, manageable steps\n## Rigorous Proof\nDuring the solution process:\n- Provide solid justification for each step\n- Include detailed proofs for key conclusions\n- Pay attention to logical connections\n- Be vigilant about potential oversights\n## Repeated Verification\nAfter completing your solution:\n- Verify your results satisfy all conditions\n- Check for overlooked special cases\n- Consider if the solution can be optimized or simplified\n- Review your reasoning process\nRemember:\n1. Take time to think thoroughly rather than rushing to an answer\n2. Rigorously prove each key conclusion\n3. Keep an open mind and try different approaches\n4. Summarize valuable problem-solving methods\n5. Maintain healthy skepticism and verify multiple times\nYour response should reflect deep mathematical understanding and precise logical thinking, making your solution path and reasoning clear to others.\nWhen you're ready, present your complete solution with:\n- Clear problem understanding\n- Detailed solution process\n- Key insights\n- Thorough verification\nFocus on clear, logical progression of ideas and thorough explanation of your mathematical reasoning. Provide answers in the same language as the user asking the question, repeat the final answer using a '\\boxed{}' without any units, you have [[8192]] tokens to complete the answer.\n<|im_end|>\n"
build/bin/llama-cli \
--model internlm3-8b-instruct.gguf \
--predict 2048 \
--ctx-size 8192 \
--gpu-layers 48 \
--temp 0.8 \
--top-p 0.8 \
--top-k 50 \
--seed 1024 \
--color \
--prompt "$thinking_system_prompt" \
--interactive \
--multiline-input \
--conversation \
--verbose \
--logdir workdir/logdir \
--in-prefix "<|im_start|>user\n" \
--in-suffix "<|im_end|>\n<|im_start|>assistant\n"
Then enter your question, for example: Given the function \(f(x)=\mathrm{e}^{x}-ax - a^{3}\),\n(1) When \(a = 1\), find the equation of the tangent line to the curve \(y = f(x)\) at the point \((1,f(1))\).\n(2) If \(f(x)\) has a local minimum and the minimum value is less than \(0\), determine the range of values for \(a\).
Function Call Example
An example with llama-cli:
build/bin/llama-cli \
--model internlm3-8b-instruct.gguf \
--predict 512 \
--ctx-size 4096 \
--gpu-layers 48 \
--temp 0.8 \
--top-p 0.8 \
--top-k 50 \
--seed 1024 \
--color \
--prompt '<|im_start|>system\nYou are InternLM-Chat, a harmless AI assistant.<|im_end|>\n<|im_start|>system name=<|plugin|>[{"name": "get_current_weather", "parameters": {"required": ["location"], "type": "object", "properties": {"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}, "unit": {"type": "string"}}}, "description": "Get the current weather in a given location"}]<|im_end|>\n<|im_start|>user\n' \
--interactive \
--multiline-input \
--conversation \
--verbose \
--in-suffix "<|im_end|>\n<|im_start|>assistant\n" \
--special
The resulting conversation:
<s><|im_start|>system
You are InternLM-Chat, a harmless AI assistant.<|im_end|>
<|im_start|>system name=<|plugin|>[{"name": "get_current_weather", "parameters": {"required": ["location"], "type": "object", "properties": {"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}, "unit": {"type": "string"}}}, "description": "Get the current weather in a given location"}]<|im_end|>
<|im_start|>user
> I want to know today's weather in Shanghai
I need to use the get_current_weather function to get the current weather in Shanghai.<|action_start|><|plugin|>
{"name": "get_current_weather", "parameters": {"location": "Shanghai"}}<|action_end|>32
<|im_end|>
> <|im_start|>environment name=<|plugin|>\n{"temperature": 22}
The current temperature in Shanghai is 22 degrees Celsius.<|im_end|>
>
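To drive this loop programmatically rather than interactively, a client needs to extract the JSON between the <|action_start|><|plugin|> and <|action_end|> markers, run the named function, and feed the result back as an environment turn. The snippet below is a minimal, hypothetical Python sketch of that parsing step, based only on the marker format visible in the transcript above:
import json
import re

# Matches the tool-call JSON emitted between the plugin action markers.
ACTION_RE = re.compile(
    r"<\|action_start\|><\|plugin\|>\s*(\{.*?\})\s*<\|action_end\|>", re.DOTALL
)

def extract_tool_call(output: str):
    """Return (name, parameters) if the output contains a tool call, else None."""
    match = ACTION_RE.search(output)
    if match is None:
        return None
    call = json.loads(match.group(1))
    return call["name"], call["parameters"]

def environment_message(result: dict) -> str:
    """Format a tool result as the environment turn shown in the transcript."""
    return "<|im_start|>environment name=<|plugin|>\n" + json.dumps(result) + "<|im_end|>"

# On the transcript above, extract_tool_call(...) yields
# ("get_current_weather", {"location": "Shanghai"}), and
# environment_message({"temperature": 22}) builds the follow-up turn.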
Service Deployment
llama.cpp provides an OpenAI-API-compatible server, llama-server. You can deploy internlm3-8b-instruct.gguf as a service as follows:
./build/bin/llama-server -m ./internlm3-8b-instruct.gguf -ngl 48
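Before sending requests, you can optionally confirm that the server is up and the model has loaded; a minimal sketch, assuming the default port and llama-server's /health endpoint:
import urllib.request

# llama-server answers with HTTP 200 once the model is loaded and ready.
with urllib.request.urlopen("http://localhost:8080/health") as resp:
    print(resp.status, resp.read().decode())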
On the client side, you can access the service through the OpenAI API:
from openai import OpenAI

# Point the OpenAI client at the local llama-server instance.
client = OpenAI(
    api_key='YOUR_API_KEY',
    base_url='http://localhost:8080/v1'
)
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
    model=model_name,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "provide three suggestions about time management"},
    ],
    temperature=0.8,
    top_p=0.8
)
print(response)
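If you prefer incremental output, the same endpoint also supports streaming through the standard OpenAI client; a minimal sketch reusing the client and model_name from above:
stream = client.chat.completions.create(
    model=model_name,
    messages=[{"role": "user", "content": "provide three suggestions about time management"}],
    temperature=0.8,
    top_p=0.8,
    stream=True,  # deliver the reply as incremental chunks
)
for chunk in stream:
    # Each chunk carries a small delta of the assistant's reply.
    print(chunk.choices[0].delta.content or "", end="")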
📄 License
This project is licensed under the Apache-2.0 license.



