GLM 4.1V 9B MLX 4bit
Model overview
This model was converted from THUDM/GLM-4.1V-9B-Thinking into the MLX format and supports vision-language understanding and generation tasks.
Model highlights
MLX format
Converted to the MLX format for use on Apple-silicon devices.
4-bit quantization
The weights are quantized to 4 bits to reduce memory use (see the rough estimate after this list).
Vision-language capability
Supports image understanding and image-grounded text generation.
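To give a rough sense of what 4-bit weights save relative to fp16, here is a back-of-the-envelope calculation for roughly nine billion parameters. It is illustrative only: it ignores quantization scales, activations, and the KV cache, so the real footprint is somewhat higher.

```python
# Back-of-the-envelope weight-memory estimate for ~9B parameters (illustrative only;
# ignores quantization scales/zero-points, activations, and the KV cache).
params = 9e9
fp16_gb = params * 2 / 1e9    # 2 bytes per weight
int4_gb = params * 0.5 / 1e9  # 0.5 bytes per weight
print(f"fp16 weights: ~{fp16_gb:.0f} GB, 4-bit weights: ~{int4_gb:.1f} GB")
```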
Capabilities
Vision-language understanding
Image caption generation
Visual question answering
Multimodal reasoning
Use cases
Content generation
Image caption generation
Generate a detailed description from an input image.
Intelligent question answering
Visual question answering
Answer questions about the content of an image.
🚀 Rainnighttram/GLM-4.1V-9B-MLX-4bit
This repository, Rainnighttram/GLM-4.1V-9B-MLX-4bit, was converted to MLX format from THUDM/GLM-4.1V-9B-Thinking using mlx-lm version 0.26.0.
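The exact conversion options used for this repository are not documented here. For reference, a 4-bit conversion with mlx-lm's Python API generally looks something like the sketch below; the output path is illustrative and the flags are an assumption, not a record of what was actually run.

```python
# Hedged sketch of a 4-bit conversion with mlx-lm's Python API; the exact options
# used for this repository are not documented, and the output path is illustrative.
from mlx_lm import convert

convert(
    "THUDM/GLM-4.1V-9B-Thinking",     # source Hugging Face repository
    mlx_path="GLM-4.1V-9B-MLX-4bit",  # local output directory (illustrative)
    quantize=True,
    q_bits=4,
)
```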
🚀 Quick start
Notes
This is not an official repository for this model, so there is no official support for it. To load the model you currently have to patch the MLX-VLM package by hand, and for now the conversion and loading process can be error-prone and messy.
Install the dependencies
pip install mlx-lm mlx-vlm mlx torchvision
配置模型文件
在 "models" 目錄下為 mlx-vlm 配置模型文件:
mkdir glm4v
cd glm4v
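The "models" directory referred to above is the one inside the installed mlx-vlm package. If you are unsure where pip placed it, a quick way to print the path (assuming a standard pip install) is:

```python
# Print the location of mlx-vlm's bundled models directory (assumes a standard pip install).
import os
import mlx_vlm

print(os.path.join(os.path.dirname(mlx_vlm.__file__), "models"))
```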
Create the required model files
__init__.py
nano __init__.py
# In file: mlx_vlm/models/glm4v/__init__.py
from .glm4v import Model, ModelConfig
from .language import LanguageModel, TextConfig
from .vision import VisionModel, VisionConfig
# save and exit
language.py
nano language.py
# In file: language.py
import inspect
from dataclasses import dataclass
from typing import Any, Optional, Dict, List, Tuple
import mlx.core as mx
import mlx.nn as nn
from ..base import (
create_attention_mask,
scaled_dot_product_attention,
)
# Define the complete output class with all optional attributes the generator might check for.
@dataclass
class CausalLMOutput:
logits: mx.array
cross_attention_states: Optional[Tuple] = None
encoder_outputs: Optional[Tuple] = None
hidden_states: Optional[Tuple] = None
attentions: Optional[Tuple] = None
@dataclass
class TextConfig:
model_type: str
hidden_size: int
num_hidden_layers: int
intermediate_size: int
num_attention_heads: int
attention_bias: bool
rms_norm_eps: float
vocab_size: int
num_key_value_heads: int
partial_rotary_factor: float
rope_theta: float
rope_traditional: bool = True
max_position_embeddings: int = 65536
@classmethod
def from_dict(cls, params):
return cls(
**{
k: v
for k, v in params.items()
if k in inspect.signature(cls).parameters
}
)
class Glm4MLP(nn.Module):
def __init__(self, args: TextConfig):
super().__init__()
self.gate_up_proj = nn.QuantizedLinear(
args.hidden_size, 2 * args.intermediate_size, bias=False
)
self.down_proj = nn.QuantizedLinear(
args.intermediate_size, args.hidden_size, bias=False
)
def __call__(self, x) -> mx.array:
x = self.gate_up_proj(x)
gate, up_states = mx.split(x, 2, axis=-1)
return self.down_proj(nn.silu(gate) * up_states)
class Glm4Attention(nn.Module):
def __init__(self, args: TextConfig):
super().__init__()
self.head_dim = args.hidden_size // args.num_attention_heads
self.n_heads = args.num_attention_heads
self.n_kv_heads = args.num_key_value_heads
self.scale = self.head_dim ** -0.5
bias = args.attention_bias
q_out = args.num_attention_heads * self.head_dim
kv_out = args.num_key_value_heads * self.head_dim
self.q_proj = nn.QuantizedLinear(args.hidden_size, q_out, bias=bias)
self.k_proj = nn.QuantizedLinear(args.hidden_size, kv_out, bias=bias)
self.v_proj = nn.QuantizedLinear(args.hidden_size, kv_out, bias=bias)
self.o_proj = nn.QuantizedLinear(q_out, args.hidden_size, bias=False)
self.rope = nn.RoPE(
dims=int(self.head_dim * args.partial_rotary_factor),
base=args.rope_theta,
traditional=args.rope_traditional,
)
def __call__(
self, x: mx.array, mask: Optional[mx.array] = None, cache: Optional[Any] = None
) -> mx.array:
B, L, D = x.shape
queries, keys, values = self.q_proj(x), self.k_proj(x), self.v_proj(x)
queries = queries.reshape(B, L, self.n_heads, -1).transpose(0, 2, 1, 3)
keys = keys.reshape(B, L, self.n_kv_heads, -1).transpose(0, 2, 1, 3)
values = values.reshape(B, L, self.n_kv_heads, -1).transpose(0, 2, 1, 3)
if cache is not None:
queries = self.rope(queries, offset=cache.offset)
keys = self.rope(keys, offset=cache.offset)
keys, values = cache.update_and_fetch(keys, values)
else:
queries = self.rope(queries)
keys = self.rope(keys)
output = scaled_dot_product_attention(
queries, keys, values, cache=cache, scale=self.scale, mask=mask
)
output = output.transpose(0, 2, 1, 3).reshape(B, L, -1)
return self.o_proj(output)
class Glm4DecoderLayer(nn.Module):
def __init__(self, args: TextConfig):
super().__init__()
self.self_attn = Glm4Attention(args=args)
self.mlp = Glm4MLP(args)
self.input_layernorm = nn.RMSNorm(args.hidden_size, eps=args.rms_norm_eps)
self.post_attention_layernorm = nn.RMSNorm(
args.hidden_size, eps=args.rms_norm_eps
)
self.post_self_attn_layernorm = nn.RMSNorm(
args.hidden_size, eps=args.rms_norm_eps
)
self.post_mlp_layernorm = nn.RMSNorm(args.hidden_size, eps=args.rms_norm_eps)
def __call__(
self, x: mx.array, mask: Optional[mx.array] = None, cache: Optional[Any] = None
) -> mx.array:
x = x + self.post_self_attn_layernorm(
self.self_attn(self.input_layernorm(x), mask, cache)
)
residual = x
x = (
self.post_mlp_layernorm(self.mlp(self.post_attention_layernorm(x)))
+ residual
)
return x
class Glm4Model(nn.Module):
def __init__(self, args: TextConfig):
super().__init__()
self.embed_tokens = nn.QuantizedEmbedding(args.vocab_size, args.hidden_size)
self.layers = [
Glm4DecoderLayer(args=args) for _ in range(args.num_hidden_layers)
]
self.norm = nn.RMSNorm(args.hidden_size, eps=args.rms_norm_eps)
def __call__(
self,
inputs: mx.array,
mask: Optional[mx.array] = None,
cache: Optional[Any] = None,
inputs_embeds: Optional[mx.array] = None,
):
if inputs_embeds is not None:
h = inputs_embeds
else:
h = self.embed_tokens(inputs)
if mask is None:
mask = create_attention_mask(h, cache)
if cache is None:
cache = [None] * len(self.layers)
for layer, c in zip(self.layers, cache):
h = layer(h, mask, cache=c)
return self.norm(h)
class LanguageModel(nn.Module):
def __init__(self, config: TextConfig):
super().__init__()
self.config = config
self.model_type = config.model_type
self.model = Glm4Model(config)
self.lm_head = nn.QuantizedLinear(config.hidden_size, config.vocab_size, bias=False)
def __call__(
self,
inputs: mx.array,
inputs_embeds: Optional[mx.array] = None,
mask: Optional[mx.array] = None,
cache=None,
):
out = self.model(inputs, inputs_embeds=inputs_embeds, mask=mask, cache=cache)
out = self.lm_head(out)
# --- THIS IS THE FIX ---
# Return a consistent object type
return CausalLMOutput(logits=out)
@property
def layers(self):
return self.model.layers
# save and exit
vision.py
nano vision.py
# In file: vision.py
import inspect
from dataclasses import dataclass
from typing import Any, Optional, Dict, List, Tuple
import mlx.core as mx
import mlx.nn as nn
from ..base import (
create_attention_mask,
scaled_dot_product_attention,
)
# Define the complete output class with all optional attributes the generator might check for.
@dataclass
class CausalLMOutput:
logits: mx.array
cross_attention_states: Optional[Tuple] = None
encoder_outputs: Optional[Tuple] = None
hidden_states: Optional[Tuple] = None
attentions: Optional[Tuple] = None
@dataclass
class TextConfig:
model_type: str
hidden_size: int
num_hidden_layers: int
intermediate_size: int
num_attention_heads: int
attention_bias: bool
rms_norm_eps: float
vocab_size: int
num_key_value_heads: int
partial_rotary_factor: float
rope_theta: float
rope_traditional: bool = True
max_position_embeddings: int = 65536
@classmethod
def from_dict(cls, params):
return cls(
**{
k: v
for k, v in params.items()
if k in inspect.signature(cls).parameters
}
)
class Glm4MLP(nn.Module):
def __init__(self, args: TextConfig):
super().__init__()
self.gate_up_proj = nn.QuantizedLinear(
args.hidden_size, 2 * args.intermediate_size, bias=False
)
self.down_proj = nn.QuantizedLinear(
args.intermediate_size, args.hidden_size, bias=False
)
def __call__(self, x) -> mx.array:
x = self.gate_up_proj(x)
gate, up_states = mx.split(x, 2, axis=-1)
return self.down_proj(nn.silu(gate) * up_states)
class Glm4Attention(nn.Module):
def __init__(self, args: TextConfig):
super().__init__()
self.head_dim = args.hidden_size // args.num_attention_heads
self.n_heads = args.num_attention_heads
self.n_kv_heads = args.num_key_value_heads
self.scale = self.head_dim ** -0.5
bias = args.attention_bias
q_out = args.num_attention_heads * self.head_dim
kv_out = args.num_key_value_heads * self.head_dim
self.q_proj = nn.QuantizedLinear(args.hidden_size, q_out, bias=bias)
self.k_proj = nn.QuantizedLinear(args.hidden_size, kv_out, bias=bias)
self.v_proj = nn.QuantizedLinear(args.hidden_size, kv_out, bias=bias)
self.o_proj = nn.QuantizedLinear(q_out, args.hidden_size, bias=False)
self.rope = nn.RoPE(
dims=int(self.head_dim * args.partial_rotary_factor),
base=args.rope_theta,
traditional=args.rope_traditional,
)
def __call__(
self, x: mx.array, mask: Optional[mx.array] = None, cache: Optional[Any] = None
) -> mx.array:
B, L, D = x.shape
queries, keys, values = self.q_proj(x), self.k_proj(x), self.v_proj(x)
queries = queries.reshape(B, L, self.n_heads, -1).transpose(0, 2, 1, 3)
keys = keys.reshape(B, L, self.n_kv_heads, -1).transpose(0, 2, 1, 3)
values = values.reshape(B, L, self.n_kv_heads, -1).transpose(0, 2, 1, 3)
if cache is not None:
queries = self.rope(queries, offset=cache.offset)
keys = self.rope(keys, offset=cache.offset)
keys, values = cache.update_and_fetch(keys, values)
else:
queries = self.rope(queries)
keys = self.rope(keys)
output = scaled_dot_product_attention(
queries, keys, values, cache=cache, scale=self.scale, mask=mask
)
output = output.transpose(0, 2, 1, 3).reshape(B, L, -1)
return self.o_proj(output)
class Glm4DecoderLayer(nn.Module):
def __init__(self, args: TextConfig):
super().__init__()
self.self_attn = Glm4Attention(args=args)
self.mlp = Glm4MLP(args)
self.input_layernorm = nn.RMSNorm(args.hidden_size, eps=args.rms_norm_eps)
self.post_attention_layernorm = nn.RMSNorm(
args.hidden_size, eps=args.rms_norm_eps
)
self.post_self_attn_layernorm = nn.RMSNorm(
args.hidden_size, eps=args.rms_norm_eps
)
self.post_mlp_layernorm = nn.RMSNorm(args.hidden_size, eps=args.rms_norm_eps)
def __call__(
self, x: mx.array, mask: Optional[mx.array] = None, cache: Optional[Any] = None
) -> mx.array:
x = x + self.post_self_attn_layernorm(
self.self_attn(self.input_layernorm(x), mask, cache)
)
residual = x
x = (
self.post_mlp_layernorm(self.mlp(self.post_attention_layernorm(x)))
+ residual
)
return x
class Glm4Model(nn.Module):
def __init__(self, args: TextConfig):
super().__init__()
self.embed_tokens = nn.QuantizedEmbedding(args.vocab_size, args.hidden_size)
self.layers = [
Glm4DecoderLayer(args=args) for _ in range(args.num_hidden_layers)
]
self.norm = nn.RMSNorm(args.hidden_size, eps=args.rms_norm_eps)
def __call__(
self,
inputs: mx.array,
mask: Optional[mx.array] = None,
cache: Optional[Any] = None,
inputs_embeds: Optional[mx.array] = None,
):
if inputs_embeds is not None:
h = inputs_embeds
else:
h = self.embed_tokens(inputs)
if mask is None:
mask = create_attention_mask(h, cache)
if cache is None:
cache = [None] * len(self.layers)
for layer, c in zip(self.layers, cache):
h = layer(h, mask, cache=c)
return self.norm(h)
class LanguageModel(nn.Module):
def __init__(self, config: TextConfig):
super().__init__()
self.config = config
self.model_type = config.model_type
self.model = Glm4Model(config)
self.lm_head = nn.QuantizedLinear(config.hidden_size, config.vocab_size, bias=False)
def __call__(
self,
inputs: mx.array,
inputs_embeds: Optional[mx.array] = None,
mask: Optional[mx.array] = None,
cache=None,
):
out = self.model(inputs, inputs_embeds=inputs_embeds, mask=mask, cache=cache)
out = self.lm_head(out)
# --- THIS IS THE FIX ---
# Return a consistent object type
return CausalLMOutput(logits=out)
@property
def layers(self):
return self.model.layers
# save and exit
glm4v.py
nano glm4v.py
# In file: glm4v.py (this filename must match the `from .glm4v import ...` line in __init__.py)
import inspect
from dataclasses import dataclass
from typing import Any, Optional, Dict, List, Tuple
import mlx.core as mx
import mlx.nn as nn
from ..base import (
create_attention_mask,
scaled_dot_product_attention,
)
# Define the complete output class with all optional attributes the generator might check for.
@dataclass
class CausalLMOutput:
logits: mx.array
cross_attention_states: Optional[Tuple] = None
encoder_outputs: Optional[Tuple] = None
hidden_states: Optional[Tuple] = None
attentions: Optional[Tuple] = None
@dataclass
class TextConfig:
model_type: str
hidden_size: int
num_hidden_layers: int
intermediate_size: int
num_attention_heads: int
attention_bias: bool
rms_norm_eps: float
vocab_size: int
num_key_value_heads: int
partial_rotary_factor: float
rope_theta: float
rope_traditional: bool = True
max_position_embeddings: int = 65536
@classmethod
def from_dict(cls, params):
return cls(
**{
k: v
for k, v in params.items()
if k in inspect.signature(cls).parameters
}
)
class Glm4MLP(nn.Module):
def __init__(self, args: TextConfig):
super().__init__()
self.gate_up_proj = nn.QuantizedLinear(
args.hidden_size, 2 * args.intermediate_size, bias=False
)
self.down_proj = nn.QuantizedLinear(
args.intermediate_size, args.hidden_size, bias=False
)
def __call__(self, x) -> mx.array:
x = self.gate_up_proj(x)
gate, up_states = mx.split(x, 2, axis=-1)
return self.down_proj(nn.silu(gate) * up_states)
class Glm4Attention(nn.Module):
def __init__(self, args: TextConfig):
super().__init__()
self.head_dim = args.hidden_size // args.num_attention_heads
self.n_heads = args.num_attention_heads
self.n_kv_heads = args.num_key_value_heads
self.scale = self.head_dim ** -0.5
bias = args.attention_bias
q_out = args.num_attention_heads * self.head_dim
kv_out = args.num_key_value_heads * self.head_dim
self.q_proj = nn.QuantizedLinear(args.hidden_size, q_out, bias=bias)
self.k_proj = nn.QuantizedLinear(args.hidden_size, kv_out, bias=bias)
self.v_proj = nn.QuantizedLinear(args.hidden_size, kv_out, bias=bias)
self.o_proj = nn.QuantizedLinear(q_out, args.hidden_size, bias=False)
self.rope = nn.RoPE(
dims=int(self.head_dim * args.partial_rotary_factor),
base=args.rope_theta,
traditional=args.rope_traditional,
)
def __call__(
self, x: mx.array, mask: Optional[mx.array] = None, cache: Optional[Any] = None
) -> mx.array:
B, L, D = x.shape
queries, keys, values = self.q_proj(x), self.k_proj(x), self.v_proj(x)
queries = queries.reshape(B, L, self.n_heads, -1).transpose(0, 2, 1, 3)
keys = keys.reshape(B, L, self.n_kv_heads, -1).transpose(0, 2, 1, 3)
values = values.reshape(B, L, self.n_kv_heads, -1).transpose(0, 2, 1, 3)
if cache is not None:
queries = self.rope(queries, offset=cache.offset)
keys = self.rope(keys, offset=cache.offset)
keys, values = cache.update_and_fetch(keys, values)
else:
queries = self.rope(queries)
keys = self.rope(keys)
output = scaled_dot_product_attention(
queries, keys, values, cache=cache, scale=self.scale, mask=mask
)
output = output.transpose(0, 2, 1, 3).reshape(B, L, -1)
return self.o_proj(output)
class Glm4DecoderLayer(nn.Module):
def __init__(self, args: TextConfig):
super().__init__()
self.self_attn = Glm4Attention(args=args)
self.mlp = Glm4MLP(args)
self.input_layernorm = nn.RMSNorm(args.hidden_size, eps=args.rms_norm_eps)
self.post_attention_layernorm = nn.RMSNorm(
args.hidden_size, eps=args.rms_norm_eps
)
self.post_self_attn_layernorm = nn.RMSNorm(
args.hidden_size, eps=args.rms_norm_eps
)
self.post_mlp_layernorm = nn.RMSNorm(args.hidden_size, eps=args.rms_norm_eps)
def __call__(
self, x: mx.array, mask: Optional[mx.array] = None, cache: Optional[Any] = None
) -> mx.array:
x = x + self.post_self_attn_layernorm(
self.self_attn(self.input_layernorm(x), mask, cache)
)
residual = x
x = (
self.post_mlp_layernorm(self.mlp(self.post_attention_layernorm(x)))
+ residual
)
return x
class Glm4Model(nn.Module):
def __init__(self, args: TextConfig):
super().__init__()
self.embed_tokens = nn.QuantizedEmbedding(args.vocab_size, args.hidden_size)
self.layers = [
Glm4DecoderLayer(args=args) for _ in range(args.num_hidden_layers)
]
self.norm = nn.RMSNorm(args.hidden_size, eps=args.rms_norm_eps)
def __call__(
self,
inputs: mx.array,
mask: Optional[mx.array] = None,
cache: Optional[Any] = None,
inputs_embeds: Optional[mx.array] = None,
):
if inputs_embeds is not None:
h = inputs_embeds
else:
h = self.embed_tokens(inputs)
if mask is None:
mask = create_attention_mask(h, cache)
if cache is None:
cache = [None] * len(self.layers)
for layer, c in zip(self.layers, cache):
h = layer(h, mask, cache=c)
return self.norm(h)
class LanguageModel(nn.Module):
def __init__(self, config: TextConfig):
super().__init__()
self.config = config
self.model_type = config.model_type
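Once the module files are in place, loading and prompting the model from Python typically follows mlx-vlm's generic load/generate pattern. The sketch below is illustrative only: function signatures differ between mlx-vlm versions, and the image path is a placeholder.

```python
# Rough usage sketch following mlx-vlm's generic load/generate pattern; exact
# signatures vary across mlx-vlm versions, and the image path is a placeholder.
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "Rainnighttram/GLM-4.1V-9B-MLX-4bit"
model, processor = load(model_path)
config = load_config(model_path)

images = ["example.jpg"]  # placeholder image path
prompt = "Describe this image."

formatted = apply_chat_template(processor, config, prompt, num_images=len(images))
output = generate(model, processor, formatted, images, max_tokens=256, verbose=True)
print(output)
```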
📄 License
This project is released under the MIT license.
Attribute | Details |
---|---|
Model type | Text generation |
Base model | THUDM/GLM-4.1V-9B-Thinking |
Library name | mlx |
Tags | reasoning, mlx |