🚀 Gorilla OpenFunctions
Gorilla OpenFunctions extends the chat completion capability of large language models (LLMs) to generate executable API calls from natural-language instructions and API context.

🚀 Try it out now on Colab

📣 Read more in our OpenFunctions release blog post

🚀 Quick Start

✨ Key Features

Gorilla OpenFunctions extends LLM chat completion to produce executable API calls from natural-language instructions and API context.

📦 Installation

OpenFunctions is compatible with OpenAI Functions; you can install the client with the following command:
```
!pip install openai==0.28.1
```
💻 Usage Examples

Basic usage (hosted service)

- Install the dependency:

```
!pip install openai==0.28.1
```

- Point the OpenAI client at the Gorilla hosted server:
```python
import openai

def get_gorilla_response(
    prompt="Call me an Uber ride type \"Plus\" in Berkeley at zipcode 94704 in 10 minutes",
    model="gorilla-openfunctions-v0",
    functions=[],
):
    # The hosted endpoint does not validate the API key, so a placeholder works.
    openai.api_key = "EMPTY"
    openai.api_base = "http://luigi.millennium.berkeley.edu:8000/v1"
    try:
        completion = openai.ChatCompletion.create(
            model=model,
            temperature=0.0,
            messages=[{"role": "user", "content": prompt}],
            functions=functions,
        )
        return completion.choices[0].message.content
    except Exception as e:
        print(e, model, prompt)
```
- Pass the user query and a list of functions, and Gorilla OpenFunctions will return well-formatted JSON:
query = "Call me an Uber ride type \"Plus\" in Berkeley at zipcode 94704 in 10 minutes"
functions = [
{
"name": "Uber Carpool",
"api_name": "uber.ride",
"description": "Find suitable ride for customers given the location, type of ride, and the amount of time the customer is willing to wait as parameters",
"parameters": [{"name": "loc", "description": "location of the starting place of the uber ride"}, {"name":"type", "enum": ["plus", "comfort", "black"], "description": "types of uber ride user is ordering"}, {"name": "time", "description": "the amount of time in minutes the customer is willing to wait"}]
}
]
get_gorilla_response(query, functions=functions)
- Expected output:
```
uber.ride(loc="berkeley", type="plus", time=10)
```
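The returned call is a plain string, not an executed function. Below is a minimal sketch, using Python's standard `ast` module, of how you might parse that string into a function name and keyword arguments before dispatching it yourself; the `parse_call` helper is our own illustration, not part of the Gorilla API:

```python
import ast

def parse_call(call_str: str):
    # Hypothetical helper: parse a call string such as
    # 'uber.ride(loc="berkeley", type="plus", time=10)'
    # into (function name, keyword arguments).
    node = ast.parse(call_str, mode="eval").body
    assert isinstance(node, ast.Call)
    func_name = ast.unparse(node.func)  # e.g. "uber.ride"
    kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in node.keywords}
    return func_name, kwargs

name, kwargs = parse_call('uber.ride(loc="berkeley", type="plus", time=10)')
print(name, kwargs)  # uber.ride {'loc': 'berkeley', 'type': 'plus', 'time': 10}
```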
Advanced usage (running locally)
```python
import json
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

def get_prompt(user_query: str, functions: list = []) -> str:
    """
    Generates a conversation prompt based on the user's query and a list of functions.

    Parameters:
    - user_query (str): The user's query.
    - functions (list): A list of functions to include in the prompt.

    Returns:
    - str: The formatted conversation prompt.
    """
    if len(functions) == 0:
        return f"USER: <<question>> {user_query}\nASSISTANT: "
    functions_string = json.dumps(functions)
    return f"USER: <<question>> {user_query} <<function>> {functions_string}\nASSISTANT: "

# Use fp16 on GPU, fp32 on CPU.
device: str = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# Load the OpenFunctions model and tokenizer from the Hugging Face Hub.
model_id: str = "gorilla-llm/gorilla-openfunctions-v0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True
)
model.to(device)

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=128,
    batch_size=16,
    torch_dtype=torch_dtype,
    device=device,
)

query: str = "Call me an Uber ride type \"Plus\" in Berkeley at zipcode 94704 in 10 minutes"
functions = [
    {
        "name": "Uber Carpool",
        "api_name": "uber.ride",
        "description": "Find suitable ride for customers given the location, type of ride, and the amount of time the customer is willing to wait as parameters",
        "parameters": [
            {"name": "loc", "description": "Location of the starting place of the Uber ride"},
            {"name": "type", "enum": ["plus", "comfort", "black"], "description": "Types of Uber ride user is ordering"},
            {"name": "time", "description": "The amount of time in minutes the customer is willing to wait"},
        ],
    }
]

prompt = get_prompt(query, functions=functions)
output = pipe(prompt)
print(output)
```
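The `text-generation` pipeline returns a list with one dict per input, and `generated_text` echoes the prompt followed by the model's completion. A small sketch (assuming the single `prompt` built above) for keeping only the generated call:

```python
# The pipeline echoes the prompt; strip it so only the model's answer remains.
generated = output[0]["generated_text"]
call = generated[len(prompt):].strip()
print(call)  # e.g. uber.ride(loc="berkeley", type="plus", time=10)
```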
📚 Documentation

Available Models
| Attribute | Details |
|-----------|---------|
| Model type | `gorilla-openfunctions-v0`: given a function and user intent, returns well-formatted JSON with the correct arguments. `gorilla-openfunctions-v1`: supports parallel functions and can choose between multiple functions. |
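To illustrate the v1 difference, here is a sketch that reuses the `get_gorilla_response` helper defined above with several candidate functions, letting `gorilla-openfunctions-v1` pick among them; the weather functions are hypothetical examples, not part of the official documentation:

```python
# Hypothetical functions: v1 can select between them (and emit parallel calls).
query = "What is the weather in Berkeley, and the 3-day forecast for Palo Alto?"
functions = [
    {
        "name": "Get Weather",
        "api_name": "weather.get",
        "description": "Get the current weather for a city",
        "parameters": [{"name": "city", "description": "city name"}],
    },
    {
        "name": "Get Forecast",
        "api_name": "weather.forecast",
        "description": "Get the multi-day forecast for a city",
        "parameters": [
            {"name": "city", "description": "city name"},
            {"name": "days", "description": "number of days"},
        ],
    },
]
print(get_gorilla_response(query, model="gorilla-openfunctions-v1", functions=functions))
```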
📄 License

All models, and the data used to train them, are released under the Apache 2.0 license.

🔗 Contributing

Gorilla is an open-source project started at UC Berkeley, and we welcome contributors. If you have any comments, criticism, or questions, please reach out to us by email. For more information on the project, visit https://gorilla.cs.berkeley.edu/.