This is the 38th and final article in the Tool Configuration series (38 articles in total).
Gemini Multi-Scenario: Python Code Examples
Prerequisites
- API Key: fill in an API token created on the 一步API official site; see the API token creation page in the console.
- For the steps to create an API token, see the guide "API Key 的获取和使用" (obtaining and using an API Key).
- API Host: set this to https://yibuapi.com/v1.
- To view the supported models, use the online model lookup and copy the model name from there.
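Rather than hard-coding the two settings, you can keep them in environment variables and fall back to the values above. A minimal sketch; the variable names YIBU_API_KEY and YIBU_API_HOST are our own choice, not required by the service:

```python
import os

# Illustrative variable names; any names work as long as you read them back consistently.
api_key = os.environ.get("YIBU_API_KEY", "sk-***")
api_host = os.environ.get("YIBU_API_HOST", "https://yibuapi.com/v1")

print(api_host)
```

This keeps the key out of source control; pass `api_key` and `api_host` straight into the `OpenAI(...)` constructor used in the examples below.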
If you route through the yibuapi.com relay: change base_url to https://yibuapi.com and use the API Key you created in the yibuapi console.
Quick Start: Only Three Lines Change
The three changes:
- api_key="sk-***": the key you generated.
- base_url="https://yibuapi.com": the relay address.
- model="gemini-2.0-flash" (or "gemini-2.5-flash", etc.): choose a compatible Gemini model.
```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-***",
    base_url="https://yibuapi.com"
)

response = client.chat.completions.create(
    model="gemini-2.5-flash",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain to me how AI works"}
    ]
)

print(response.choices[0].message)
```
Using yibuapi.com (optional)

```python
client = OpenAI(api_key="sk-***", base_url="https://yibuapi.com/v1")
```
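To confirm the relay is reachable and see which Gemini models your key exposes, the standard SDK call `client.models.list()` can be combined with a small filter. A sketch; `gemini_models` is our own helper, and the static list stands in for a live call:

```python
def gemini_models(model_ids):
    """Return only the Gemini model ids from an iterable of model-id strings."""
    return sorted(m for m in model_ids if m.startswith("gemini"))

# Against the live relay you would build the list with:
#   ids = [m.id for m in client.models.list().data]
ids = ["gemini-2.5-flash", "gpt-4o", "gemini-2.0-flash"]
print(gemini_models(ids))
```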
Reasoning Control (reasoning_effort / thinking budget)
The Gemini 2.5 series supports thinking/reasoning control:
- OpenAI-compatible field: reasoning_effort ∈ {"low", "medium", "high", "none"}.
- Alternatively, use extra_body.google.thinking_config.thinking_budget (in tokens) for fine-grained control.
- Note: reasoning_effort and thinking_budget cannot be used together.
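Since the two controls are mutually exclusive, it can help to build the request kwargs through a small guard instead of passing both by accident. A sketch; `build_thinking_kwargs` is our own helper (not part of the SDK), mirroring the nested request shape the thinking_budget examples use:

```python
def build_thinking_kwargs(reasoning_effort=None, thinking_budget=None, include_thoughts=False):
    """Build extra kwargs for chat.completions.create(), enforcing that
    reasoning_effort and thinking_budget are never sent together."""
    if reasoning_effort is not None and thinking_budget is not None:
        raise ValueError("reasoning_effort and thinking_budget cannot be used together")
    kwargs = {}
    if reasoning_effort is not None:
        kwargs["reasoning_effort"] = reasoning_effort
    if thinking_budget is not None:
        kwargs["extra_body"] = {
            "extra_body": {
                "google": {
                    "thinking_config": {
                        "thinking_budget": thinking_budget,
                        "include_thoughts": include_thoughts,
                    }
                }
            }
        }
    return kwargs

# Usage: client.chat.completions.create(model=..., messages=..., **build_thinking_kwargs(thinking_budget=800))
```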
Method 1: reasoning_effort
```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-***",
    base_url="https://yibuapi.com/"
)

response = client.chat.completions.create(
    model="gemini-2.5-flash",
    reasoning_effort="low",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain to me how AI works"}
    ]
)

print(response.choices[0].message)
```
Method 2: thinking_budget (can also return a thinking summary)
```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-***",
    base_url="https://yibuapi.com/"
)

response = client.chat.completions.create(
    model="gemini-2.5-flash",
    messages=[{"role": "user", "content": "Explain to me how AI works"}],
    extra_body={
        "extra_body": {
            "google": {
                "thinking_config": {
                    "thinking_budget": 800,
                    "include_thoughts": True
                }
            }
        }
    }
)

print(response.choices[0].message)
```
Streaming Output (SSE)
```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-***",
    base_url="https://yibuapi.com/"
)

response = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ],
    stream=True
)

for chunk in response:
    print(chunk.choices[0].delta)
```
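In practice you usually stitch the streamed deltas back into one string rather than printing each delta object. A sketch; `collect_stream` is our own helper and assumes OpenAI-style chunks carrying `choices[0].delta.content`:

```python
def collect_stream(chunks):
    """Concatenate the text deltas of a chat-completions stream into one string,
    skipping chunks with no choices or no content (e.g. the final chunk)."""
    parts = []
    for chunk in chunks:
        if chunk.choices and chunk.choices[0].delta.content:
            parts.append(chunk.choices[0].delta.content)
    return "".join(parts)

# Usage against the live stream: print(collect_stream(response))
```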
Function Calling (Tools)
```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-***",
    base_url="https://yibuapi.com/"
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "The city and state, e.g. Chicago, IL"},
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
                },
                "required": ["location"]
            }
        }
    }
]

messages = [{"role": "user", "content": "What's the weather like in Chicago today?"}]

response = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=messages,
    tools=tools,
    tool_choice="auto"
)

print(response)
```
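The response above only tells you which tool the model wants to call; your code must execute the tool and send the result back in a second request. A sketch of that second leg, assuming a response whose first choice contains `tool_calls`; the local `get_weather` stand-in and `tool_result_message` helper are our own:

```python
import json

def get_weather(location, unit="celsius"):
    """Stand-in implementation; a real version would query a weather API."""
    return {"location": location, "temperature": 21, "unit": unit}

def tool_result_message(tool_call):
    """Run the requested tool locally and wrap its result as a 'tool' message."""
    args = json.loads(tool_call.function.arguments)
    result = get_weather(**args)
    return {"role": "tool", "tool_call_id": tool_call.id, "content": json.dumps(result)}

# Usage against a live response:
#   call = response.choices[0].message.tool_calls[0]
#   messages.append(response.choices[0].message)
#   messages.append(tool_result_message(call))
#   final = client.chat.completions.create(model="gemini-2.0-flash", messages=messages, tools=tools)
```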
Image Understanding (Multimodal: image_url / base64)
```python
import base64
from openai import OpenAI

client = OpenAI(
    api_key="sk-***",
    base_url="https://yibuapi.com/"
)

def encode_image(image_path: str) -> str:
    with open(image_path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

b64 = encode_image("Path/to/agi/image.jpeg")

response = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}}
            ]
        }
    ]
)

print(response.choices[0])
```
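The `data:` URL built inline above can be factored into a helper (the same `image_url` content part also accepts a plain `https://` URL to a hosted image). A sketch; `to_data_url` is our own helper:

```python
import base64

def to_data_url(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    """Encode raw image bytes as a data: URL usable in an image_url content part."""
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return f"data:{mime};base64,{b64}"

# Usage: {"type": "image_url", "image_url": {"url": to_data_url(open(path, "rb").read())}}
print(to_data_url(b"\xff\xd8\xff")[:23])
```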
Image Generation (paid tier)
```python
import base64
from io import BytesIO

from PIL import Image
from openai import OpenAI

client = OpenAI(
    api_key="sk-***",
    base_url="https://yibuapi.com/",
)

response = client.images.generate(
    model="imagen-3.0-generate-002",
    prompt="a portrait of a sheepadoodle wearing a cape",
    response_format="b64_json",
    n=1,
)

for item in response.data:
    img = Image.open(BytesIO(base64.b64decode(item.b64_json)))
    img.show()
```
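`img.show()` opens a viewer window; on a headless server you will usually want to write the decoded bytes to disk instead. A sketch needing only the standard library; `save_b64_image` is our own helper:

```python
import base64

def save_b64_image(b64_json: str, path: str) -> int:
    """Decode an images.generate b64_json payload, write it to a file,
    and return the number of bytes written."""
    data = base64.b64decode(b64_json)
    with open(path, "wb") as f:
        f.write(data)
    return len(data)

# Usage:
#   for i, item in enumerate(response.data):
#       save_b64_image(item.b64_json, f"imagen_{i}.png")
```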
