🚀 Zero-Shot Object Detection Inference Endpoint Project
This project is a fork of omlab/omdet-turbo-swin-tiny-hf that implements zero-shot object detection as a custom task for 🤗 Inference Endpoints. Its custom handler lets an endpoint detect, in an input image, whichever candidate labels the request supplies.
🚀 Quick Start
This repository implements zero-shot object detection as a custom task for 🤗 Inference Endpoints. The code for the custom handler lives in `handler.py`. To deploy this model as an Inference Endpoint, select the custom task so that the `handler.py` file is used. The repository also ships a `requirements.txt` file that installs the `timm` library.
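For orientation, a custom handler for 🤗 Inference Endpoints is a class named `EndpointHandler` exposing `__init__` and `__call__`. The sketch below illustrates that interface for this repository's payload format; it stands in the generic `zero-shot-object-detection` pipeline as an assumption (whether that pipeline supports this particular checkpoint is not confirmed here), so treat the repository's actual `handler.py` as authoritative.

```python
# Minimal sketch of the EndpointHandler interface (illustrative only;
# see handler.py in this repository for the real implementation).
import base64
from io import BytesIO
from typing import Any, Dict, List

from PIL import Image
from transformers import pipeline


class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` is the local copy of this repository on the endpoint.
        # Using the generic zero-shot-object-detection pipeline here is an
        # assumption; the actual handler may load the model differently.
        self.detector = pipeline("zero-shot-object-detection", model=path)

    def __call__(self, data: Dict[str, Any]) -> Dict[str, List]:
        inputs = data["inputs"]
        # The payload carries the image as a base64-encoded string
        image = Image.open(BytesIO(base64.b64decode(inputs["image"])))
        results = self.detector(image, candidate_labels=inputs["candidates"])
        # Flatten the pipeline output into the response shape documented below
        return {
            "boxes": [
                [r["box"]["xmin"], r["box"]["ymin"], r["box"]["xmax"], r["box"]["ymax"]]
                for r in results
            ],
            "scores": [r["score"] for r in results],
            "candidates": [r["label"] for r in results],
        }
```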
📦 Installation
The repository includes a `requirements.txt` file; install the `timm` dependency with:
```bash
pip install -r requirements.txt
```
💻 Usage Examples
Basic Usage
Below is an example of the payload a request expects:
```json
{
  "inputs": {
    "image": "/9j/4AAQSkZJRgABAQEBLAEsAAD/2wBDAAMCAgICAgMC....",
    "candidates": ["broken curb", "broken road", "broken road sign", "broken sidewalk"]
  }
}
```
Below is an example of sending a request with Python and the `requests` library:
```python
import base64
import json
from typing import List

import requests as r

ENDPOINT_URL = ""  # URL of your deployed Inference Endpoint
HF_TOKEN = ""      # a Hugging Face access token allowed to call it


def predict(path_to_image: str = None, candidates: List[str] = None):
    # Read the image and base64-encode it so it can travel in a JSON payload
    with open(path_to_image, "rb") as i:
        b64 = base64.b64encode(i.read())
    payload = {"inputs": {"image": b64.decode("utf-8"), "candidates": candidates}}
    response = r.post(
        ENDPOINT_URL, headers={"Authorization": f"Bearer {HF_TOKEN}"}, json=payload
    )
    return response.json()


prediction = predict(
    path_to_image="image/brokencurb.jpg",
    candidates=["broken curb", "broken road", "broken road sign", "broken sidewalk"],
)
print(json.dumps(prediction, indent=2))
```
Expected output:
```json
{
  "boxes": [
    [1.919342041015625, 231.1556396484375, 1011.4019775390625, 680.3773193359375],
    [610.9949951171875, 397.6180419921875, 1019.9259033203125, 510.8144226074219],
    [1.919342041015625, 231.1556396484375, 1011.4019775390625, 680.3773193359375],
    [786.1240234375, 68.618896484375, 916.1265869140625, 225.0513458251953]
  ],
  "scores": [0.4329715967178345, 0.4215811491012573, 0.3389397859573364, 0.3133399784564972],
  "candidates": ["broken sidewalk", "broken road sign", "broken road", "broken road sign"]
}
```
Each bounding box is returned as `[x_min, y_min, x_max, y_max]` in absolute pixel coordinates.
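Because the boxes use that coordinate order, they can be passed almost directly to `PIL.Image.crop`, which expects `(left, upper, right, lower)`. A minimal sketch, assuming the `prediction` from the request above (the output filename is hypothetical):

```python
# Crop the highest-scoring detection out of the original image.
from PIL import Image

image = Image.open("image/brokencurb.jpg")  # the same image sent to the endpoint
score, box = max(zip(prediction["scores"], prediction["boxes"]))  # best detection
crop = image.crop(tuple(box))  # (left, upper, right, lower) == (x_min, y_min, x_max, y_max)
crop.save("best_detection.jpg")  # hypothetical output path
```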
Advanced Usage
To visualize the results of a request, you can use the following code:
```python
import matplotlib.pyplot as plt
import matplotlib.patches as patches

prediction = predict(
    path_to_image="image/cat_and_remote.jpg", candidates=["cat", "remote", "pot hole"]
)

# Load the original image so the predicted boxes can be drawn on top of it
with open("image/cat_and_remote.jpg", "rb") as i:
    image = plt.imread(i)

fig, ax = plt.subplots(1)
ax.imshow(image)

# Draw one rectangle and one score/label annotation per detection
for score, class_name, box in zip(
    prediction["scores"], prediction["candidates"], prediction["boxes"]
):
    x_min, y_min = int(box[0]), int(box[1])
    width, height = int(box[2] - box[0]), int(box[3] - box[1])
    rect = patches.Rectangle((x_min, y_min), width, height, linewidth=1, edgecolor="r", facecolor="none")
    ax.add_patch(rect)
    ax.text(x_min, y_min, f"{round(score, 2)} {class_name}", color="white", fontsize=6, bbox=dict(facecolor="red", alpha=0.5))

plt.savefig("image_result/cat_and_remote_with_bboxes_zero_shot.jpeg")
```
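As the expected output above shows, the endpoint can return duplicate or low-confidence boxes (two of the example boxes are identical). If that is undesirable, a simple score cutoff can be applied before plotting; a sketch, with 0.4 as an arbitrary, untuned threshold:

```python
# Keep only detections above an arbitrary confidence threshold.
THRESHOLD = 0.4  # illustrative value, not tuned
detections = [
    (score, label, box)
    for score, label, box in zip(
        prediction["scores"], prediction["candidates"], prediction["boxes"]
    )
    if score >= THRESHOLD
]
```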
Example input image: `image/cat_and_remote.jpg`
Example output image (with predicted bounding boxes): `image_result/cat_and_remote_with_bboxes_zero_shot.jpeg`
📄 License
This project is released under the Apache-2.0 license.
🔗 References
The adaptation to Hugging Face Inference Endpoints was inspired by @philschmid's work on philschmid/clip-zero-shot-image-classification.