# 🚀 Zero-Shot Object Detection Inference Endpoint
This project is a fork of omlab/omdet-turbo-swin-tiny-hf that implements the custom task of zero-shot object detection for 🤗 Inference Endpoints. It addresses zero-shot object detection on a dedicated Inference Endpoint: through a custom handler, input images can be flexibly searched for arbitrary candidate labels.
## 🚀 Quick Start
This repository implements the custom task of zero-shot object detection for 🤗 Inference Endpoints. The code for the custom handler lives in `handler.py`.

To deploy this model as an Inference Endpoint, select the custom task so that the `handler.py` file is used. The repository contains a `requirements.txt` file that installs the `timm` library.
## 📦 Installation
The repository contains a `requirements.txt` file; install the `timm` dependency with:

```shell
pip install -r requirements.txt
```
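Based on the description above, the `requirements.txt` is expected to contain little more than the `timm` dependency (any version pin shown here would be an assumption, so none is given):

```
timm
```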
## 💻 Usage

### Basic Usage

An example of the payload a request must carry:
```json
{
  "inputs": {
    "image": "/9j/4AAQSkZJRgABAQEBLAEsAAD/2wBDAAMCAgICAgMC....",
    "candidates": ["broken curb", "broken road", "broken road sign", "broken sidewalk"]
  }
}
```
An example request using Python and the `requests` library:
```python
import base64
import json
from typing import List

import requests as r

ENDPOINT_URL = ""
HF_TOKEN = ""


def predict(path_to_image: str = None, candidates: List[str] = None):
    # Read the image and base64-encode it so it can travel in the JSON payload
    with open(path_to_image, "rb") as i:
        b64 = base64.b64encode(i.read())
    payload = {"inputs": {"image": b64.decode("utf-8"), "candidates": candidates}}
    response = r.post(
        ENDPOINT_URL, headers={"Authorization": f"Bearer {HF_TOKEN}"}, json=payload
    )
    return response.json()


prediction = predict(
    path_to_image="image/brokencurb.jpg",
    candidates=["broken curb", "broken road", "broken road sign", "broken sidewalk"],
)
print(json.dumps(prediction, indent=2))
```
Expected output:
```json
{
  "boxes": [
    [1.919342041015625, 231.1556396484375, 1011.4019775390625, 680.3773193359375],
    [610.9949951171875, 397.6180419921875, 1019.9259033203125, 510.8144226074219],
    [1.919342041015625, 231.1556396484375, 1011.4019775390625, 680.3773193359375],
    [786.1240234375, 68.618896484375, 916.1265869140625, 225.0513458251953]
  ],
  "scores": [0.4329715967178345, 0.4215811491012573, 0.3389397859573364, 0.3133399784564972],
  "candidates": ["broken sidewalk", "broken road sign", "broken road", "broken road sign"]
}
```
Each bounding box is given as `[x_min, y_min, x_max, y_max]`.
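For plotting or cropping it is often handy to convert a box from corner form to corner-plus-size form. A minimal sketch (the helper name `to_xywh` is illustrative and not part of the endpoint's response):

```python
def to_xywh(box):
    """Convert [x_min, y_min, x_max, y_max] to (x, y, width, height)."""
    x_min, y_min, x_max, y_max = box
    return (x_min, y_min, x_max - x_min, y_max - y_min)


print(to_xywh([10.0, 20.0, 110.0, 70.0]))  # (10.0, 20.0, 100.0, 50.0)
```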
### Advanced Usage

To visualize the results of a request, use the following code:
```python
import matplotlib.patches as patches
import matplotlib.pyplot as plt

prediction = predict(
    path_to_image="image/cat_and_remote.jpg", candidates=["cat", "remote", "pot hole"]
)

with open("image/cat_and_remote.jpg", "rb") as i:
    image = plt.imread(i)

fig, ax = plt.subplots(1)
ax.imshow(image)

# Draw each detection as a rectangle labelled with its score and class name
for score, class_name, box in zip(
    prediction["scores"], prediction["candidates"], prediction["boxes"]
):
    rect = patches.Rectangle(
        (int(box[0]), int(box[1])),
        int(box[2] - box[0]),
        int(box[3] - box[1]),
        linewidth=1,
        edgecolor="r",
        facecolor="none",
    )
    ax.add_patch(rect)
    ax.text(
        int(box[0]),
        int(box[1]),
        f"{round(score, 2)} {class_name}",
        color="white",
        fontsize=6,
        bbox=dict(facecolor="red", alpha=0.5),
    )

plt.savefig("image_result/cat_and_remote_with_bboxes_zero_shot.jpeg")
```
Example input image:

![input image](image/cat_and_remote.jpg)

Example output image:

![output image](image_result/cat_and_remote_with_bboxes_zero_shot.jpeg)
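The expected output shown earlier contains overlapping boxes with fairly low scores, so before plotting it can help to keep only detections above a confidence threshold. A minimal sketch (the helper name and the threshold value of 0.4 are arbitrary choices, not part of the endpoint's API):

```python
def filter_predictions(prediction, threshold=0.4):
    """Keep only detections whose score is at or above the threshold."""
    kept = {"boxes": [], "scores": [], "candidates": []}
    for box, score, name in zip(
        prediction["boxes"], prediction["scores"], prediction["candidates"]
    ):
        if score >= threshold:
            kept["boxes"].append(box)
            kept["scores"].append(score)
            kept["candidates"].append(name)
    return kept


# Abbreviated version of the sample response from the "Basic Usage" section
sample = {
    "boxes": [
        [1.9, 231.2, 1011.4, 680.4],
        [611.0, 397.6, 1019.9, 510.8],
        [1.9, 231.2, 1011.4, 680.4],
        [786.1, 68.6, 916.1, 225.1],
    ],
    "scores": [0.433, 0.422, 0.339, 0.313],
    "candidates": ["broken sidewalk", "broken road sign", "broken road", "broken road sign"],
}
print(filter_predictions(sample)["candidates"])  # ['broken sidewalk', 'broken road sign']
```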
## 📄 License

This project is licensed under Apache-2.0.
## 🔗 References

The adaptation to Hugging Face Inference Endpoints is inspired by the work of @philschmid in philschmid/clip-zero-shot-image-classification.