Once the MCP server is connected, just talk to your AI normally. Here are example prompts that hit different tools — use them as inspiration.

Generate an image

“Generate a photo of a golden retriever puppy in a field of sunflowers using Flux Schnell.”
Your AI will:
  1. Call get_tool_detail with tool_slug: "create-image" to see model options.
  2. Call generate with tool: "create-image", input: { model: "flux-schnell", prompt: "..." }.
  3. Poll check_generation until done.
  4. Show you the image URL.
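Under the hood, those steps amount to a generate-then-poll loop. This is a minimal runnable sketch with the tool calls stubbed out — the tool and field names (`generation_id`, `status`, `image_url`) follow the docs above, but the exact payload shapes are assumptions:

```python
import time

# Stubbed MCP tool calls -- a real client would send these over the
# MCP connection; here they return canned data so the sketch runs.
def generate(tool, input):
    return {"generation_id": "gen_123", "status": "pending"}

def check_generation(generation_id):
    # A real server returns "pending" until the job finishes.
    return {"generation_id": generation_id, "status": "done",
            "output": {"image_url": "https://example.com/puppy.png"}}

def run_image_job():
    job = generate(
        tool="create-image",
        input={"model": "flux-schnell",
               "prompt": "golden retriever puppy in a field of sunflowers"},
    )
    result = check_generation(job["generation_id"])
    while result["status"] not in ("done", "failed"):
        time.sleep(2)  # images are fast; a short poll interval is fine
        result = check_generation(job["generation_id"])
    return result["output"]["image_url"]
```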

Let the AI pick the model

“Make me an image of a cyberpunk city. Use whichever model is cheapest.”
The AI will call get_tool_detail, compare cost across models, and pick the cheapest before calling generate.
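The cost comparison the AI performs is simple: read the model list from `get_tool_detail` and take the minimum by price. The response shape below (a `models` array with a `cost_credits` field) is an assumption for illustration — the real field names may differ:

```python
# Hypothetical get_tool_detail response -- field names are assumed.
tool_detail = {
    "slug": "create-image",
    "models": [
        {"id": "flux-schnell", "cost_credits": 1},
        {"id": "flux-pro", "cost_credits": 5},
        {"id": "nano-banana", "cost_credits": 2},
    ],
}

def cheapest_model(detail):
    # Pick the model with the lowest credit cost.
    return min(detail["models"], key=lambda m: m["cost_credits"])["id"]
```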

Generate a video from an image

“Animate this image with a gentle breeze and the subject smiling. Make it 4 seconds.” [attaches image]
The AI will:
  1. Find animate-image via search_tools.
  2. Upload or reference the image (client-dependent).
  3. Call generate with tool: "animate-image", input: { model: "veo3-1-lite-image-to-video", image_url: "...", prompt: "...", duration: 4 }.
  4. Poll every ~10 seconds — videos take 60–180 seconds.
For videos, remind the AI the job is async so it doesn’t poll every second. Claude Code and Cursor handle this automatically via poll_after_seconds.
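A polling loop that honors the server's poll_after_seconds hint might look like this — the field name matches the docs above, everything else (status values, the timeout) is an assumption:

```python
import time

def poll_until_done(check, generation_id, default_interval=10, timeout=300):
    """Poll a long-running job, sleeping for the server-suggested interval."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = check(generation_id)
        if result["status"] in ("done", "failed"):
            return result
        # Use the server's hint when present; fall back to ~10s for video.
        time.sleep(result.get("poll_after_seconds", default_interval))
    raise TimeoutError(f"generation {generation_id} still pending after {timeout}s")
```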

Text to speech

“Read this paragraph aloud in a British female voice. Use ElevenLabs.”
The AI calls the text-to-speech tool with the chosen model. Audio is a short job (~10s) — polling finishes quickly.

Check your credits before generating

“How many credits do I have? Then generate 4 variations of ‘neon lights at night’ with Nano Banana.”
The AI calls get_account first, tells you the balance, then runs generate four times. Bonus: it will warn you if the total cost exceeds your balance.
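The warn-before-running check is a one-line budget comparison. A sketch, assuming a credits balance from `get_account` and a per-run cost from `get_tool_detail` (both field meanings assumed):

```python
def check_budget(balance_credits, cost_per_run, runs):
    """Return (affordable, total_cost) so the caller can warn before generating."""
    total = cost_per_run * runs
    return total <= balance_credits, total

# e.g. 4 variations at 2 credits each against a 10-credit balance
ok, total = check_budget(balance_credits=10, cost_per_run=2, runs=4)
```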

List recent work

“What have I generated in the last hour?”
The AI calls list_generations with limit: 20 and shows you the results, filtering by time in its response.
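The time filter happens client-side. A sketch of what that filtering looks like, assuming each record carries an ISO-8601 `created_at` field (the real response shape may differ):

```python
from datetime import datetime, timedelta, timezone

def within_last_hour(generations, now=None):
    """Keep only generations created in the last hour."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=1)
    return [g for g in generations
            if datetime.fromisoformat(g["created_at"]) >= cutoff]
```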

Chain generations

“Generate an image of a dragon, then animate it into a 4-second video.”
The AI runs two generate + check_generation cycles, feeding the image URL from the first into the second.
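The chaining pattern can be sketched as two cycles wired together — `generate` and `poll` stand in for the MCP tool calls, and the payload field names are assumptions for illustration:

```python
def chain_image_to_video(generate, poll, prompt, duration=4):
    """Two generate + poll cycles: make an image, then animate it."""
    image_job = generate(tool="create-image", input={"prompt": prompt})
    image_url = poll(image_job["generation_id"])["output"]["image_url"]

    # Feed the first result's URL into the second generation.
    video_job = generate(
        tool="animate-image",
        input={"image_url": image_url,
               "prompt": f"animate: {prompt}",
               "duration": duration},
    )
    return poll(video_job["generation_id"])["output"]["video_url"]
```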

Prompts that won’t work well

  • “Just generate it, don’t wait” — The async pattern doesn’t have a “fire and forget” mode for MCP. Your AI will still poll.
  • “Make 100 images in parallel” — MCP clients typically serialize tool calls. Use the REST API for bulk jobs.
  • “Edit my Photoshop file” — MCP tools return URLs; they don’t touch local files unless your AI client has file-write capability.

Tips for better results

  1. Name the model when you care: “Use Nano Banana Pro” avoids a get_tool_detail round-trip and ambiguity.
  2. Be specific about cost limits: “Keep it under 50 credits” lets the AI filter models.
  3. Tell it to wait: “This might take 2 minutes, be patient” — helps with clients that otherwise time out waiting for a tool response. (The server no longer times out, but some clients have their own UX timers.)

Next steps

Tool reference

Full details on every tool.

Browse models

See all models available via MCP.