Cephalo-Gemma-3-4b-it-04-15-2025
Developed by lamm-mit
Cephalo-Gemma-3-4b is a multimodal vision-language model focused on the analysis and design of bio-inspired materials.
Downloads: 25
Released: 4/15/2025
Model overview
The model combines vision and language processing: it analyzes images and provides detailed explanations grounded in materials science and biology, with support for structured output in JSON format.
Model features
Cross-modal understanding
Processes image and text inputs jointly, enabling cross-modal comprehension and generation.
Structured output
Emits analysis results as structured JSON, simplifying downstream processing and application integration.
Domain expertise
Optimized for specialist domains such as materials science and biology, providing detailed and accurate analyses.
Model capabilities
Image analysis
Text generation
Structured data output
Cross-modal reasoning
Use cases
Materials science
3D-printed material analysis
Compares a 3D model against its printed counterpart to assess print accuracy and material properties.
Provides detailed analysis of geometric design and print fidelity.
Biology
Biological material characterization
Extracts detailed properties of biological materials (such as spider silk) from images.
Generates JSON reports covering material type, design features, and use cases.
🚀 Cephalo-Gemma-3-4b
Cephalo-Gemma-3-4b is a multimodal vision-language model for the analysis and design of bio-inspired materials. Given an image and a text prompt, it produces a detailed analysis of the image content and returns the result as free text or JSON.
🚀 Quick start
Load the model
import torch
from transformers import AutoProcessor, Gemma3ForConditionalGeneration
from PIL import Image as PILImage

ckpt = "lamm-mit/Cephalo-Gemma-3-4b-it-04-15-2025"

# Load the weights in bfloat16 and shard them across available devices
model = Gemma3ForConditionalGeneration.from_pretrained(
    ckpt, device_map="auto", torch_dtype=torch.bfloat16,
)
processor = AutoProcessor.from_pretrained(ckpt)
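If GPU memory is limited, the same checkpoint can in principle be loaded with 4-bit quantization instead. This is a minimal sketch assuming bitsandbytes is installed; the quantization settings shown are illustrative, not part of the original model card.

from transformers import BitsAndBytesConfig

# Illustrative 4-bit NF4 quantization config (assumption: bitsandbytes is available)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = Gemma3ForConditionalGeneration.from_pretrained(
    ckpt, device_map="auto", quantization_config=bnb_config,
)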
💻 Usage examples
Basic usage
image = PILImage.open('./spiderweb.png').convert("RGB")

# One system message and one user message pairing the image with a question
messages = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are a materials scientist."}
        ]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "What does this image show? Provide a detailed analysis."}
        ]
    }
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device)

input_len = inputs["input_ids"].shape[-1]

# Greedy decoding; strip the prompt tokens before decoding the answer
generation = model.generate(**inputs, max_new_tokens=512, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
Output:
The image shows a comparison between a 3D model of a structure and its physical 3D printed counterpart.
The top part of the image displays a 3D model of a structure, which is a complex geometric design with multiple interconnected lines and angles. The model is likely created using computer-aided design (CAD) software, which allows for precise and detailed representation of the structure.
The bottom part of the image shows the physical 3D printed version of the same structure. The printed object is a tangible representation of the CAD model, with the same geometric design and intricate details. The printed object is placed on a surface, which could be a table or a platform, and is illuminated to highlight its three-dimensional form.
The comparison between the 3D model and the physical 3D printed object demonstrates the accuracy and fidelity of the 3D printing process. The printed object closely resembles the CAD model, indicating that the 3D printing technology can accurately reproduce complex geometric designs.
The results shown in the image highlight the potential of 3D printing for creating complex and intricate structures with high precision and accuracy. This technology has various applications in fields such as manufacturing, engineering, and design, where the ability to create precise and detailed objects is crucial.
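The basic and advanced examples share the same template/generate/decode steps, so they can be wrapped in a small helper. The following is an illustrative sketch (the ask function and its defaults are our own, not part of the model card), reusing the model and processor loaded above.

def ask(image, question, system="You are a materials scientist.", max_new_tokens=512):
    """Run one image+text query through the model and return the decoded answer."""
    messages = [
        {"role": "system", "content": [{"type": "text", "text": system}]},
        {"role": "user", "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": question},
        ]},
    ]
    inputs = processor.apply_chat_template(
        messages, add_generation_prompt=True, tokenize=True,
        return_dict=True, return_tensors="pt"
    ).to(model.device)
    input_len = inputs["input_ids"].shape[-1]
    generation = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    return processor.decode(generation[0][input_len:], skip_special_tokens=True)

With this helper, the advanced query below reduces to a single call, e.g. ask(image, "Extract details of the material, type, design features, and use cases. Respond in JSON.", system="You are a biologist.").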
Advanced usage
image = PILImage.open('./scan.jpg').convert("RGB")

# Same chat structure, with a biologist persona and a request for JSON output
messages = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are a biologist."}
        ]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Extract details of the material, type, design features, and use cases. Respond in JSON."}
        ]
    }
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device)

input_len = inputs["input_ids"].shape[-1]
generation = model.generate(**inputs, max_new_tokens=512, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
Output:
Here's a detailed description of the material, type, design features, and use cases, presented in JSON format:
{
  "material": "Spider silk",
  "type": "Natural protein fiber",
  "design_features": [
    "High tensile strength and elasticity",
    "Lightweight and flexible",
    "Self-healing properties",
    "Biocompatible and biodegradable"
  ],
  "use_cases": [
    "Medical applications: sutures, scaffolds for tissue engineering, drug delivery systems",
    "Aerospace: lightweight composites for aircraft and spacecraft",
    "Protective gear: bulletproof vests, helmets, and body armor",
    "Industrial applications: high-performance fibers for ropes, cables, and nets",
    "Environmental applications: biodegradable packaging and textiles"
  ],
  "properties": {
    "tensile_strength": "Up to 1.5 GPa",
    "elastic_modulus": "1.5 GPa",
    "elongation_at_break": "Up to 1500%"
  },
  "source": "Spider silk is produced by spiders and is composed of proteins such as spidroin, which are arranged in a hierarchical structure to form the silk fibers."
}
**JSON field descriptions**:
- **material**: The primary material identified, i.e., spider silk.
- **type**: The material class, a natural protein fiber.
- **design_features**: Key characteristics of the material, including high strength, elasticity, and self-healing.
- **use_cases**: Applications of spider silk, ranging from medicine to industry.
- **properties**: Physical properties of spider silk, such as tensile strength, elastic modulus, and elongation at break.
- **source**: A brief note on the origin and composition of spider silk.
This JSON provides a comprehensive overview of spider silk, highlighting its distinctive properties and potential applications.
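In practice the model may wrap the JSON in explanatory prose, as in the output above, so downstream code usually needs to isolate the JSON object before parsing it. A minimal sketch, assuming the reply contains a single top-level {...} block:

import json
import re

def extract_json(decoded: str) -> dict:
    """Pull the first top-level {...} block out of the model's reply and parse it."""
    match = re.search(r"\{.*\}", decoded, flags=re.DOTALL)
    if match is None:
        raise ValueError("No JSON object found in model output")
    return json.loads(match.group(0))

report = extract_json(decoded)
print(report["material"])  # e.g. "Spider silk"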
📚 Documentation
Citation
@article{Buehler_Cephalo_2024_journal,
    title={Cephalo: Multi-Modal Vision-Language Models for Bio-Inspired Materials Analysis and Design},
    author={Markus J. Buehler},
    journal={Advanced Functional Materials},
    year={2024},
    volume={34},
    number={49},
    pages={2409531},
    doi={10.1002/adfm.202409531},
    url={https://advanced.onlinelibrary.wiley.com/doi/full/10.1002/adfm.202409531}
}
Related models

Clip Vit Large Patch14
CLIP is a vision-language model developed by OpenAI that maps images and text into a shared embedding space via contrastive learning, supporting zero-shot image classification.
Image-to-Text · openai · 44.7M downloads · 1,710 likes

Clip Vit Base Patch32
CLIP is a multimodal model developed by OpenAI that understands the relationship between images and text, supporting zero-shot image classification.
Image-to-Text · openai · 14.0M downloads · 666 likes

Siglip So400m Patch14 384 (Apache-2.0)
SigLIP is a vision-language model pretrained on the WebLI dataset that uses an improved sigmoid loss to optimize image-text matching.
Image-to-Text · Transformers · google · 6.1M downloads · 526 likes

Clip Vit Base Patch16
CLIP is a multimodal model developed by OpenAI that maps images and text into a shared embedding space via contrastive learning, enabling zero-shot image classification.
Image-to-Text · openai · 4.6M downloads · 119 likes

Blip Image Captioning Base (BSD-3-Clause)
BLIP is a state-of-the-art vision-language pretraining model that excels at image captioning, supporting both conditional and unconditional text generation.
Image-to-Text · Transformers · Salesforce · 2.8M downloads · 688 likes

Blip Image Captioning Large (BSD-3-Clause)
BLIP is a unified vision-language pretraining framework that excels at image captioning, supporting both conditional and unconditional caption generation.
Image-to-Text · Transformers · Salesforce · 2.5M downloads · 1,312 likes

Openvla 7b (MIT)
OpenVLA 7B is an open-source vision-language-action model trained on the Open X-Embodiment dataset that generates robot actions from language instructions and camera images.
Image-to-Text · Transformers · English · openvla · 1.7M downloads · 108 likes

Llava V1.5 7b
LLaVA is an open-source multimodal chatbot fine-tuned from LLaMA/Vicuna that supports image-and-text interaction.
Image-to-Text · Transformers · liuhaotian · 1.4M downloads · 448 likes

Vit Gpt2 Image Captioning (Apache-2.0)
An image-captioning model built on the ViT and GPT2 architectures that generates natural-language descriptions for input images.
Image-to-Text · Transformers · nlpconnect · 939.88k downloads · 887 likes

Blip2 Opt 2.7b (MIT)
BLIP-2 is a vision-language model that combines an image encoder with a large language model for image-to-text generation tasks.
Image-to-Text · Transformers · English · Salesforce · 867.78k downloads · 359 likes