Cephalo Gemma 3 4b It 04 15 2025
Developed by lamm-mit
Cephalo-Gemma-3-4b is a multimodal vision-language model focused on the analysis and design of bio-inspired materials.
Released: 4/15/2025
Model Overview
The model combines vision and language processing to analyze images and provide detailed explanations grounded in materials science and biology, with support for structured JSON output.
Model Highlights
Cross-modal understanding
Processes image and text inputs together, enabling cross-modal comprehension and generation.
Structured output
Returns analysis results as structured JSON, simplifying downstream processing and application integration.
Domain specialization
Optimized for specialist fields such as materials science and biology, delivering detailed and accurate analysis.
Model Capabilities
Image analysis
Text generation
Structured data output
Cross-modal reasoning
Use Cases
Materials science
3D-printed material analysis
Compares a 3D model against its printed counterpart to assess print accuracy and material characteristics.
Provides detailed analysis of geometric design and print fidelity.
Biology
Biomaterial property extraction
Extracts detailed properties of biological materials (e.g., spider silk) from images.
Generates a JSON report covering material type, design features, and use cases.
🚀 Cephalo-Gemma-3-4b
Cephalo-Gemma-3-4b is a multimodal vision-language model for the analysis and design of bio-inspired materials. Given an image and a text prompt, it produces a detailed analysis of the image content and returns the result as plain text or JSON.
🚀 Quick Start
Load the model
import torch
from transformers import AutoProcessor, Gemma3ForConditionalGeneration
from transformers.image_utils import load_image
from PIL import Image as PILImage

ckpt = "lamm-mit/Cephalo-Gemma-3-4b-it-04-15-2025"

# Load the checkpoint in bfloat16 and spread it across available devices
model = Gemma3ForConditionalGeneration.from_pretrained(
    ckpt, device_map="auto", torch_dtype=torch.bfloat16,
)
processor = AutoProcessor.from_pretrained(ckpt)
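If GPU memory is limited, the same checkpoint can optionally be loaded with 4-bit quantization via bitsandbytes. This variation is not covered by the original card and assumes the `bitsandbytes` package is installed; a minimal sketch:

from transformers import BitsAndBytesConfig

# Optional: 4-bit quantized loading to reduce GPU memory use.
# Not part of the original card; requires `pip install bitsandbytes`.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = Gemma3ForConditionalGeneration.from_pretrained(
    ckpt, device_map="auto", quantization_config=bnb_config,
)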
💻 Usage Examples
Basic usage
# Load the query image and convert it to RGB
image = PILImage.open('./spiderweb.png').convert("RGB")

# Note: the system and user turns must be separate dicts; merging them into
# one dict would silently drop the system prompt due to duplicate keys.
messages = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are a materials scientist."}
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "What does this image show? Provide a detailed analysis."}
        ]
    }
]
# Tokenize the chat, generate deterministically, and decode only the new tokens
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device)

input_len = inputs["input_ids"].shape[-1]

generation = model.generate(**inputs, max_new_tokens=512, do_sample=False)
generation = generation[0][input_len:]

decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
Output:
The image shows a comparison between a 3D model of a structure and its physical 3D printed counterpart.
The top part of the image displays a 3D model of a structure, which is a complex geometric design with multiple interconnected lines and angles. The model is likely created using computer-aided design (CAD) software, which allows for precise and detailed representation of the structure.
The bottom part of the image shows the physical 3D printed version of the same structure. The printed object is a tangible representation of the CAD model, with the same geometric design and intricate details. The printed object is placed on a surface, which could be a table or a platform, and is illuminated to highlight its three-dimensional form.
The comparison between the 3D model and the physical 3D printed object demonstrates the accuracy and fidelity of the 3D printing process. The printed object closely resembles the CAD model, indicating that the 3D printing technology can accurately reproduce complex geometric designs.
The results shown in the image highlight the potential of 3D printing for creating complex and intricate structures with high precision and accuracy. This technology has various applications in fields such as manufacturing, engineering, and design, where the ability to create precise and detailed objects is crucial.
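The basic example above and the advanced example below repeat the same template/generate/decode steps, so they can be wrapped in a small helper. A minimal sketch (the function name query_image and its defaults are ours, not part of the original card):

def query_image(image, user_prompt,
                system_prompt="You are a materials scientist.",
                max_new_tokens=512):
    """Run a single image+text query against the loaded Cephalo model."""
    messages = [
        {"role": "system", "content": [{"type": "text", "text": system_prompt}]},
        {"role": "user", "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": user_prompt},
        ]},
    ]
    inputs = processor.apply_chat_template(
        messages, add_generation_prompt=True, tokenize=True,
        return_dict=True, return_tensors="pt",
    ).to(model.device)
    input_len = inputs["input_ids"].shape[-1]
    generation = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    return processor.decode(generation[0][input_len:], skip_special_tokens=True)

# Example: the same query as above
# print(query_image(image, "What does this image show? Provide a detailed analysis."))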
Advanced usage
# Load a scan image and request a structured JSON response
image = PILImage.open('./scan.jpg').convert("RGB")

messages = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are a biologist."}
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Extract details of the material, type, design features, and use cases. Respond in JSON."}
        ]
    }
]
inputs = processor.apply_chat_template(
messages, add_generation_prompt=True, tokenize=True,
return_dict=True, return_tensors="pt"
).to(model.device)
input_len = inputs["input_ids"].shape[-1]
generation = model.generate(**inputs, max_new_tokens=512, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
Output:
Here's a detailed description of the material, type, design features, and use cases, presented in JSON format:
{
"material": "Spider silk",
"type": "Natural protein fiber",
"design_features": [
"High tensile strength and elasticity",
"Lightweight and flexible",
"Self-healing properties",
"Biocompatible and biodegradable"
],
"use_cases": [
"Medical applications: sutures, scaffolds for tissue engineering, drug delivery systems",
"Aerospace: lightweight composites for aircraft and spacecraft",
"Protective gear: bulletproof vests, helmets, and body armor",
"Industrial applications: high-performance fibers for ropes, cables, and nets",
"Environmental applications: biodegradable packaging and textiles"
],
"properties": {
"tensile_strength": "Up to 1.5 GPa",
"elastic_modulus": "1.5 GPa",
"elongation_at_break": "Up to 1500%"
},
"source": "Spider silk is produced by spiders and is composed of proteins such as spidroin, which are arranged in a hierarchical structure to form the silk fibers."
}
**JSON field reference**:
- **material**: The primary material, here spider silk.
- **type**: The material class, a natural protein fiber.
- **design_features**: Key characteristics of the material, including high strength, elasticity, and self-healing behavior.
- **use_cases**: Applications of spider silk, spanning medical to industrial settings.
- **properties**: Physical properties of spider silk, such as tensile strength, elastic modulus, and elongation at break.
- **source**: A brief description of where spider silk comes from and what it is composed of.
This JSON provides a comprehensive overview of spider silk, highlighting its distinctive properties and potential applications.
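Because the model wraps the JSON in introductory prose, downstream code typically needs to isolate the JSON block before parsing it. A minimal sketch, assuming the reply contains exactly one top-level {...} object (the helper name extract_json is ours, and the single-object assumption is not guaranteed by the model):

import json
import re

def extract_json(reply: str) -> dict:
    """Pull the first {...} block out of a model reply and parse it."""
    match = re.search(r"\{.*\}", reply, flags=re.DOTALL)
    if match is None:
        raise ValueError("No JSON object found in model reply")
    return json.loads(match.group(0))

report = extract_json(decoded)
print(report["material"])  # e.g. "Spider silk"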
📚 Documentation
Citation
@article{Buehler_Cephalo_2024_journal,
title={Cephalo: Multi-Modal Vision-Language Models for Bio-Inspired Materials Analysis and Design},
author={Markus J. Buehler},
journal={Advanced Functional Materials},
year={2024},
volume={34},
number={49},
doi={10.1002/adfm.202409531},
url={https://advanced.onlinelibrary.wiley.com/doi/full/10.1002/adfm.202409531}
}