1. What is Google AI Studio?

It’s a developer-focused prototyping tool that lets you try Gemini models and the Gemini API right in your browser (prompting, real-time interactions, and media generation) and then export the setup you like as a code snippet to plug into your app.

Put simply, it’s a web-based sandbox for using and experimenting with Google’s latest AI model, Gemini. With just a browser—no complex installs or heavy coding—anyone can jump in and explore: a “prompt-engineering playground.”

*Snippet: a small, ready-to-paste piece of code.
(Analogy) Instead of the entire recipe, think “a pinch of salt” — a specific, reusable part.
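For example, an exported snippet might look roughly like this. It’s a hedged sketch, not AI Studio’s exact output: the endpoint path follows the public Gemini REST API, while the API key and model name are placeholders.

```python
import json

# Placeholders -- substitute your own key and a current model name from AI Studio.
API_KEY = "YOUR_API_KEY"
MODEL = "gemini-2.0-flash"  # example model name; check AI Studio for current ones

# Request body in the shape the Gemini API's generateContent endpoint expects.
body = {
    "contents": [
        {"role": "user", "parts": [{"text": "Write a haiku about the sea."}]}
    ]
}

# The REST endpoint the snippet would POST this body to.
url = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    f"{MODEL}:generateContent?key={API_KEY}"
)

print(json.dumps(body, indent=2))
```

Sending the body to that URL (for example with `requests.post(url, json=body)`) returns JSON containing the generated text; in a real app you would keep the key out of source control.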

2. When did it launch?

December 13, 2023 (U.S. time). On that day, Google introduced AI Studio and opened the Gemini API to developers, accessible through either AI Studio or Vertex AI. (KST: early morning of December 14, 2023.)

*Vertex AI: Google Cloud’s production platform for ML and generative AI.
(Analogy) If AI Studio is the “playground” for trying models, Vertex AI is the “full-scale factory” for building, managing, and operating them in production.

3. Key features

  1. Text-based features
    1. Freeform Prompt: Chat freely with the AI to ask questions, brainstorm, and do creative writing.
    2. Structured Prompt: Assign the AI a role (e.g., English teacher, coding expert) to get goal-oriented outputs.
    3. Data Prompt: Provide tabular data and ask for analysis, summaries, or classification.
  2. Multimodal features
    1. Image-based tasks: Upload images to get descriptions, analyze contents, and ask questions about them.
    2. Generate media: In the web UI, try image generation/editing with Imagen and video generation with Veo, then export the same request as API code.
  3. Coding features
    1. Write code in specific languages, find and fix bugs, and ask for explanations.
  Note: AI Studio isn’t a full-blown professional editor for high-fidelity image/video creation. While Generate media exists, its purpose differs from dedicated tools like Midjourney or DALL·E.
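The Structured Prompt idea above maps neatly onto the API: the role you assign becomes a system instruction. A minimal sketch, assuming the Gemini REST API’s generateContent field names (the helper function itself is our own invention):

```python
def build_structured_request(role: str, user_text: str) -> dict:
    """Build a generateContent body that assigns the model a role."""
    return {
        # The assigned role rides along as a system instruction.
        "systemInstruction": {"parts": [{"text": f"You are {role}."}]},
        # The user's actual message.
        "contents": [{"role": "user", "parts": [{"text": user_text}]}],
    }

request = build_structured_request(
    "an English teacher who corrects grammar politely",
    "I goes to school yesterday.",
)
```

The same body, with the `systemInstruction` text swapped out, covers the “English teacher” and “coding expert” examples above.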

4. What kinds of outputs does it produce?

  1. Text outputs
    1. Answers, article summaries, email drafts, story passages, poems, ad copy, code blocks—anything text-related.
  2. Media and code outputs
    1. Simple images and video previews aligned to your prompts.
    2. Executable code snippets (JS, Python, …)
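As a sketch of what such an “executable code snippet” can look like for an image-based task, here is how an image plus a question are packaged into one request (field names follow the Gemini REST API’s shape; the PNG bytes are a stand-in, not a real image):

```python
import base64

# Stand-in bytes; in practice you'd read a real file, e.g. open("photo.png", "rb").read().
fake_png_bytes = b"\x89PNG...not a real image"
encoded = base64.b64encode(fake_png_bytes).decode("ascii")

# One user turn can mix an image part and a text part.
body = {
    "contents": [{
        "role": "user",
        "parts": [
            {"inlineData": {"mimeType": "image/png", "data": encoded}},
            {"text": "What is shown in this image?"},
        ],
    }]
}
```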

5. Who typically uses it?

  • Developers & AI researchers: Quickly test Gemini’s capabilities and prototype ideas without heavy coding.

  • Content creators: Draft blog posts, briefs, and social copy; spark ideas faster.

  • AI beginners: Learn how to interact with AI and practice prompt engineering.

※ For production use, teams usually move to Vertex AI.

6. Is it English-only?

It’s optimized for English, but it supports many languages, including Korean. You can ask questions in Korean and get natural responses. That said, performance can sometimes lag behind English, and the very latest information may be better in English.
Supported languages vary by model, and the documentation often notes that certain features are limited to supported languages, so in real projects it’s safest to check each model’s supported-language list.

7. How is it priced?

  • Free for basic use
    You can use Google AI Studio at no cost for testing and light experiments.

  • API usage may incur costs
    If you connect what you build to a real service at scale, whether via the Gemini API or Google Cloud’s Vertex AI, charges apply based on usage (model, token volume, media type).
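To make “charges apply based on usage” concrete, here is a toy cost estimate. The per-token prices are invented placeholders, not real Gemini rates; always check Google’s current pricing page.

```python
# PLACEHOLDER prices (USD per 1,000 tokens) -- not real Gemini rates.
PRICE_PER_1K_INPUT = 0.0001
PRICE_PER_1K_OUTPUT = 0.0004

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Usage-based billing: cost scales with token volume in and out."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT \
         + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

# A request with 10,000 input tokens and 2,000 output tokens at these rates:
print(f"${estimate_cost(10_000, 2_000):.4f}")
```

The point is the shape of the calculation, not the numbers: larger prompts, longer outputs, and richer media types all move the meter.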