🚀 Gorilla OpenFunctions
Gorilla OpenFunctions extends large language model (LLM) chat completion to generate executable API calls from natural-language instructions and API context.
🚀 Try it out now in Colab
📣 Read more in our OpenFunctions release blog post
🚀 Quick Start
✨ Key Features
Given a natural-language instruction and the relevant API context, Gorilla OpenFunctions extends LLM chat completion to return an executable, correctly formatted API call.
📦 Installation
OpenFunctions is compatible with OpenAI Functions and can be installed with:
!pip install openai==0.28.1
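The examples below pin the legacy 0.28 SDK. If you are on openai>=1.0 instead, the client interface differs; here is an untested sketch of the equivalent call, assuming the hosted endpoint remains OpenAI-compatible (the endpoint URL and placeholder key are the same ones used in the example below):
from openai import OpenAI

# Untested sketch for openai>=1.0; assumes the hosted endpoint is OpenAI-compatible.
client = OpenAI(api_key="EMPTY", base_url="http://luigi.millennium.berkeley.edu:8000/v1")
completion = client.chat.completions.create(
    model="gorilla-openfunctions-v0",
    temperature=0.0,
    messages=[{"role": "user", "content": "Call me an Uber ride type \"Plus\" in Berkeley at zipcode 94704 in 10 minutes"}],
    functions=[],  # pass the same function schema shown in the usage example below
)
print(completion.choices[0].message.content)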
💻 Usage
Basic usage (hosted service)
- Install the dependency:
!pip install openai==0.28.1
- Point to the Gorilla hosted server:
import openai

def get_gorilla_response(prompt="Call me an Uber ride type \"Plus\" in Berkeley at zipcode 94704 in 10 minutes", model="gorilla-openfunctions-v0", functions=[]):
    openai.api_key = "EMPTY"
    openai.api_base = "http://luigi.millennium.berkeley.edu:8000/v1"
    try:
        completion = openai.ChatCompletion.create(
            model=model,  # use the requested model rather than a hard-coded one
            temperature=0.0,
            messages=[{"role": "user", "content": prompt}],
            functions=functions,
        )
        return completion.choices[0].message.content
    except Exception as e:
        print(e, model, prompt)
- Pass in the user prompt and the list of functions, and Gorilla OpenFunctions returns a correctly formatted API call:
query = "Call me an Uber ride type \"Plus\" in Berkeley at zipcode 94704 in 10 minutes"
functions = [
{
"name": "Uber Carpool",
"api_name": "uber.ride",
"description": "Find suitable ride for customers given the location, type of ride, and the amount of time the customer is willing to wait as parameters",
"parameters": [{"name": "loc", "description": "location of the starting place of the uber ride"}, {"name":"type", "enum": ["plus", "comfort", "black"], "description": "types of uber ride user is ordering"}, {"name": "time", "description": "the amount of time in minutes the customer is willing to wait"}]
}
]
get_gorilla_response(query, functions=functions)
- Expected output:
uber.ride(loc="berkeley", type="plus", time=10)
Advanced usage (run locally)
import json
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

def get_prompt(user_query: str, functions: list = []) -> str:
    """
    Generates a conversation prompt based on the user's query and a list of functions.

    Parameters:
    - user_query (str): The user's query.
    - functions (list): A list of functions to include in the prompt.

    Returns:
    - str: The formatted conversation prompt.
    """
    if len(functions) == 0:
        return f"USER: <<question>> {user_query}\nASSISTANT: "
    functions_string = json.dumps(functions)
    return f"USER: <<question>> {user_query} <<function>> {functions_string}\nASSISTANT: "
device: str = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id: str = "gorilla-llm/gorilla-openfunctions-v0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True)
model.to(device)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=128,
batch_size=16,
torch_dtype=torch_dtype,
device=device,
)
query: str = "Call me an Uber ride type \"Plus\" in Berkeley at zipcode 94704 in 10 minutes"
functions = [
    {
        "name": "Uber Carpool",
        "api_name": "uber.ride",
        "description": "Find suitable ride for customers given the location, type of ride, and the amount of time the customer is willing to wait as parameters",
        "parameters": [
            {"name": "loc", "description": "Location of the starting place of the Uber ride"},
            {"name": "type", "enum": ["plus", "comfort", "black"], "description": "Types of Uber ride user is ordering"},
            {"name": "time", "description": "The amount of time in minutes the customer is willing to wait"}
        ]
    }
]
prompt = get_prompt(query, functions=functions)
output = pipe(prompt)
print(output)
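The raw pipeline output is a list of dicts. Below is a minimal sketch for pulling out just the generated call, assuming the text-generation pipeline's default behavior of echoing the prompt inside generated_text:
# The pipeline returns [{"generated_text": prompt + completion}] by default;
# strip the prompt to keep only the model's call string.
raw = output[0]["generated_text"]
call_string = raw[len(prompt):].strip()
print(call_string)  # e.g. uber.ride(loc="berkeley", type="plus", time=10)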
📚 Documentation
Available models
- gorilla-openfunctions-v0: given a function and the user's intent, returns properly formatted JSON with the correct arguments.
- gorilla-openfunctions-v1: supports parallel functions and can choose among multiple functions.
📄 License
All models, and the data used to train them, are released under the Apache 2.0 license.
🔗 Contributing
Gorilla is an open-source project started at UC Berkeley, and we welcome contributors. Please reach out to us by email with any comments, criticisms, or questions. For more information on the project, visit https://gorilla.cs.berkeley.edu/.