
Gemini Multi-Scenario: Python Code Examples

This is article 38 of the 38-part Tool Configuration series. Before reading it, you may want to read the earlier articles in the series.


Preparation

If you route requests through the yibuapi.com relay: change base_url to https://yibuapi.com and use the API Key you created in the yibuapi console.


Quick Start: Only Three Lines Change

Changes (three lines only):

  1. api_key="sk-***": the API key you generated.
  2. base_url="https://yibuapi.com"
  3. model="gemini-2.0-flash" (or "gemini-2.5-flash", etc.): choose a compatible Gemini model.

from openai import OpenAI

client = OpenAI(
    api_key="sk-***",
    base_url="https://yibuapi.com"
)

response = client.chat.completions.create(
    model="gemini-2.5-flash",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain to me how AI works"}
    ]
)

print(response.choices[0].message)

Using yibuapi.com (optional)

client = OpenAI(api_key="sk-***", base_url="https://yibuapi.com/v1")

Reasoning Control (reasoning_effort / thinking budget)

The Gemini 2.5 series supports thinking/reasoning control.

  • OpenAI-compatible field: reasoning_effort, with values "low", "medium", "high", or "none"
  • For fine-grained control, use extra_body.google.thinking_config.thinking_budget (in tokens)
  • Note: reasoning_effort and thinking_budget cannot be used at the same time

Option 1: reasoning_effort

from openai import OpenAI

client = OpenAI(
    api_key="sk-***",
    base_url="https://yibuapi.com/"
)

response = client.chat.completions.create(
    model="gemini-2.5-flash",
    reasoning_effort="low",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain to me how AI works"}
    ]
)

print(response.choices[0].message)

Option 2: thinking_budget (and optionally return thought summaries)

from openai import OpenAI

client = OpenAI(
    api_key="sk-***",
    base_url="https://yibuapi.com/"
)

response = client.chat.completions.create(
    model="gemini-2.5-flash",
    messages=[{"role": "user", "content": "Explain to me how AI works"}],
    extra_body={
      "extra_body": {
        "google": {
          "thinking_config": {
            "thinking_budget": 800,
            "include_thoughts": True
          }
        }
      }
    }
)

print(response.choices[0].message)

Streaming Output (SSE)

from openai import OpenAI

client = OpenAI(
    api_key="sk-***",
    base_url="https://yibuapi.com/"
)

response = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ],
    stream=True
)

for chunk in response:
    print(chunk.choices[0].delta)
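
The loop above prints the raw delta objects. If you only want the generated text, you can replace that loop with the variant below (a minimal sketch): it accumulates delta.content and skips chunks that carry no text.

full_text = ""
for chunk in response:
    delta = chunk.choices[0].delta
    if delta.content:  # some chunks (e.g. the final one) may carry no text
        full_text += delta.content
        print(delta.content, end="", flush=True)
print()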

Function Calling (Tools)

from openai import OpenAI

client = OpenAI(
    api_key="sk-***",
    base_url="https://yibuapi.com/"
)

tools = [
  {
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get the weather in a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {"type": "string", "description": "The city and state, e.g. Chicago, IL"},
          "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
        },
        "required": ["location"]
      }
    }
  }
]

messages = [{"role": "user", "content": "What's the weather like in Chicago today?"}]
response = client.chat.completions.create(
  model="gemini-2.0-flash",
  messages=messages,
  tools=tools,
  tool_choice="auto"
)

print(response)
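
When the model decides to call the function, the reply carries tool_calls rather than a final answer. The sketch below shows the standard OpenAI-style round trip of executing the tool and sending the result back; the weather lookup here is a hypothetical stub you would replace with your own implementation.

import json

message = response.choices[0].message
if message.tool_calls:
    messages.append(message)  # keep the assistant's tool-call turn in the conversation
    for tool_call in message.tool_calls:
        args = json.loads(tool_call.function.arguments)
        # Hypothetical stub; replace with a real weather lookup.
        result = {"location": args["location"], "temperature": "22", "unit": args.get("unit", "celsius")}
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": json.dumps(result)
        })
    final = client.chat.completions.create(
        model="gemini-2.0-flash",
        messages=messages,
        tools=tools
    )
    print(final.choices[0].message.content)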

Image Understanding (multimodal: image_url / base64)

import base64
from openai import OpenAI

client = OpenAI(
    api_key="sk-***",
    base_url="https://yibuapi.com/"
)

def encode_image(image_path: str) -> str:
    with open(image_path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

b64 = encode_image("Path/to/agi/image.jpeg")

response = client.chat.completions.create(
  model="gemini-2.0-flash",
  messages=[
    {
      "role": "user",
      "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}}
      ]
    }
  ]
)

print(response.choices[0])
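
In addition to base64 data URIs, a hosted image can usually be passed by URL directly in the same image_url field (a minimal sketch; the URL below is only a placeholder):

response = client.chat.completions.create(
  model="gemini-2.0-flash",
  messages=[
    {
      "role": "user",
      "content": [
        {"type": "text", "text": "Describe this image."},
        # Placeholder URL; replace with a publicly reachable image.
        {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}}
      ]
    }
  ]
)

print(response.choices[0].message.content)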

Image Generation (paid tier)

import base64
from io import BytesIO
from PIL import Image
from openai import OpenAI

client = OpenAI(
    api_key="sk-***",
    base_url="https://yibuapi.com/",
)

response = client.images.generate(
    model="imagen-3.0-generate-002",
    prompt="a portrait of a sheepadoodle wearing a cape",
    response_format="b64_json",
    n=1,
)

for item in response.data:
    img = Image.open(BytesIO(base64.b64decode(item.b64_json)))
    img.show()
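
If you would rather save the result than open an image viewer, you can write each decoded image to disk instead (a minimal sketch reusing the same response object; the output filename is arbitrary):

for i, item in enumerate(response.data):
    img = Image.open(BytesIO(base64.b64decode(item.b64_json)))
    img.save(f"generated_{i}.png")  # Pillow infers PNG from the extension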