Generate GD levels
with AI

Describe your Geometry Dash level in plain text. Editor AI builds it in the editor — objects, layout, difficulty, and all.

Runs on Windows, macOS, Android, and iOS
Active Platinum Workers
Requests Queued
Object Types in Library
10 AI Providers Supported

Everything you need to build faster

Write a prompt. Pick an AI. Get a level. Tweak from there.

🤖
10 AI Providers
Supports Ollama (local & Platinum), Claude, OpenAI, OpenRouter, Mistral AI, HuggingFace, DeepSeek, Gemini, LM Studio, and llama.cpp. Per-provider model selection and API keys live in mod settings — no popup.
15+ Advanced Features
Enable advanced features for dynamic levels: Color Triggers, Move Triggers, Rotate Triggers, Alpha Triggers, Toggle Triggers, Pulse Triggers, Spawn/Stop Triggers, Speed Portals, Player Visibility, Trail Triggers, plus group IDs, color channels, and multi-activate.
📦
Full Object Library
The entire GD object library is sent to the AI — every block, spike, portal, orb, and decoration. Auto-updates from GitHub on launch.
Progressive Spawning
Objects are placed in batches after generation, so the game stays smooth even for huge levels. Spawn speed is a configurable setting.
🎮
Native Editor Integration
An AI button sits in the editor's top-right. Enter a description, optionally clear the level first, and press Generate. Nothing else required.
🎨
Difficulty & Style Controls
Set difficulty (Easy → Extreme), visual style (Modern, Retro, Minimalist, Decorated), and length (Short → XXL) before generating.
✦ EditorAI Platinum — Free Forever

Platinum is a peer-to-peer GPU/CPU donation network. Community members run the worker client on their machines and contribute compute. Anyone can use it completely free — no API key, no account, no payment.

How it works
Donors run the worker client on their GPU or CPU
The coordinator routes generation requests to available workers
The public proxy delivers results — free, for everyone
Recommended Providers
🏆 Platinum FREE
💻 Local Ollama FREE
🖥️ LM Studio FREE
llama.cpp FREE
🥈 Claude close second
🧠 DeepSeek affordable, high quality
⚠️ Gemini not recommended — tight limits

Manual install in 6 steps

Editor AI is not yet on the Geode index. Install it manually from GitHub Releases.

Prerequisites: You need Geometry Dash 2.2081 and Geode 5.6.1+ already installed. Editor AI requires the geode.node-ids mod (auto-installed).
1
Download the latest release
Go to the GitHub Releases page and download the file named entity12208.edit-ai.geode.
Download .geode →
2
Open Geometry Dash with Geode
Launch the game. You should see the Geode icon on the main menu — this confirms Geode is running.
3
Open the Geode menu
Click the Geode icon to enter the Geode mod browser.
4
Press Manual Installation
Find the Manual Installation button in the bottom-left of the Geode menu and press it.
5
Select the .geode file
A file picker will open. Navigate to wherever you saved entity12208.edit-ai.geode and select it.
6
Restart the game
Geode will prompt you to restart. Do so. Editor AI is now installed — look for the AI button in the top-right of any level editor.

Choose your AI provider

Each provider works differently. Start with Platinum or local Ollama for the best free experience.

No API key needed. EditorAI Platinum is 100% free — your requests run on community-donated GPU and CPU hardware. No account, no payment.
1
Open Editor AI Settings
In the Geode menu, tap the gear icon next to Editor AI.
2
Set AI Provider → ollama
Platinum speaks the Ollama protocol — select ollama as your provider.
3
Enable Use Platinum
Toggle Use Platinum on. This automatically routes your requests to the community proxy — no URL to type.
4
Choose an Ollama model
Select a model from the Ollama Model dropdown: entity12208/editorai:qwen or entity12208/editorai:deepseek.
5
Generate!
Open any level editor, press the AI button, enter a prompt, and press Generate. No key needed.
Note: Platinum only works while at least one worker is online. To check, look at the live stats at the top of this page, or visit https://ollama-coordinator.onrender.com/api/status.
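The availability check can also be scripted. A minimal sketch, assuming the coordinator's /api/status endpoint returns a JSON body with an active-worker count — the field names here are assumptions based on the stats shown on this page, not a documented schema:

```python
import json

def platinum_available(status_json: str) -> bool:
    """Return True if at least one Platinum worker is online.

    Parses the JSON returned by the coordinator's /api/status
    endpoint. The "active_workers" field name is an assumption.
    """
    status = json.loads(status_json)
    return status.get("active_workers", 0) > 0

# Example with a hypothetical response body:
sample = '{"active_workers": 3, "queued": 1, "processing": 2}'
print(platinum_available(sample))  # True
```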
Fully private and free. Runs entirely on your own machine. Requires a GPU with 6+ GB VRAM for good speed, but works on CPU too (slower). No internet required after setup.
1
Install Ollama
Download and install Ollama from ollama.com.
2
Pull the EditorAI model
Run one of these in a terminal (each ~5.2 GB):
ollama pull entity12208/editorai:qwen — more powerful
ollama pull entity12208/editorai:deepseek — more creative
3
Open Editor AI Settings
In the Geode menu, tap the gear icon next to Editor AI.
4
Configure the provider
Set AI Provider → ollama.
Make sure Use Platinum is off (it defaults to local http://localhost:11434).
Select your model from the Ollama Model dropdown.
5
Make sure Ollama is running
If Ollama isn't already running as a background service, start it with: ollama serve
6
Generate!
Open the editor, press the AI button, enter a prompt, and press Generate.
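Under the hood, the mod talks to Ollama's HTTP API. You can make the same kind of request yourself to confirm the server and model are working — a minimal sketch using Ollama's documented /api/generate endpoint (the prompt is just an example):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default port

def build_generate_request(model: str, prompt: str) -> dict:
    # /api/generate takes a model name, a prompt, and stream=False
    # to receive a single JSON response instead of a stream.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send one generation request to a local Ollama server."""
    body = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# generate("entity12208/editorai:qwen", "a short easy cube section")
```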
Paid plan required. Anthropic Claude API access requires a funded account. Claude produces very high-quality results and is our recommended cloud option.
1
Get a Claude API key
Visit console.anthropic.com, create an account, add credits, and generate an API key.
2
Open Editor AI Settings
In the Geode menu, tap the gear icon next to Editor AI to open mod settings.
3
Set AI Provider → claude
Select claude as the AI provider, then choose a model. Available: claude-sonnet-4-6 (balanced, default) or claude-opus-4-6 (highest quality, slower).
4
Enter your API key in settings
Paste your key (starting with sk-ant-) into the Claude API Key field in mod settings. It's stored securely in Geode's save system.
Paid plan required. Mistral AI (Ministral) requires a funded account at La Plateforme. Ministral models are fast, cost-effective, and produce solid results.
1
Get a Mistral API key
Visit console.mistral.ai, create an account, add credits, and generate an API key.
2
Open Editor AI Settings
In the Geode menu, tap the gear icon next to Editor AI.
3
Set AI Provider → ministral
Select ministral as the AI provider. Available models: ministral-3b-latest (fastest), ministral-8b-latest (default, balanced), mistral-small-latest, mistral-medium-latest, or mistral-large-latest (most capable).
4
Enter your API key in settings
Paste your Mistral key into the Mistral API Key field in mod settings.
Free tier available. HuggingFace Inference API has a free tier for many models. Results vary by model — some open-source models work surprisingly well for level generation.
1
Get a HuggingFace token
Visit huggingface.co/settings/tokens, create a free account, and generate a User Access Token.
2
Open Editor AI Settings
In the Geode menu, tap the gear icon next to Editor AI.
3
Set AI Provider → huggingface
Select huggingface and enter a model ID in the model field. The default meta-llama/Llama-3.1-8B-Instruct works well. You can use any model that supports the Inference API chat endpoint.
4
Enter your token in settings
Paste your HuggingFace token (starting with hf_) into the HuggingFace API Key field in mod settings.
Paid plan required. OpenAI API access requires a funded account.
1
Get an OpenAI API key
Visit platform.openai.com/api-keys, add credits, and create an API key.
2
Open Editor AI Settings
In the Geode menu, tap the gear icon next to Editor AI.
3
Set AI Provider → openai
Select openai as the AI provider. Available: gpt-4o (default, recommended) or gpt-4.1-mini (faster, cheaper).
4
Enter your API key in settings
Paste your sk-... key into the OpenAI API Key field in mod settings.
Not recommended. Gemini has very tight rate limits and tends to produce lower-quality level layouts compared to other providers. We strongly recommend using Platinum, local Ollama, or Claude instead.
If you still want to use Gemini: It does have a free tier. Be aware you will likely hit quota limits quickly.
1
Get a Gemini API key
Visit aistudio.google.com/api-keys and create a free key.
2
Set AI Provider to gemini
In Editor AI settings, set AI Provider → gemini. Available: gemini-2.5-flash (default) or gemini-2.5-pro (more capable, may hit limits faster).
3
Enter your API key in settings
Paste your Gemini key into the Gemini API Key field in mod settings.
One key, 300+ models. OpenRouter provides a unified API for many hosted models including GPT-4, Claude, Llama, and more. Great for accessing models without separate accounts.
1
Get an OpenRouter API key
Visit openrouter.ai/keys, create an account, and generate an API key.
2
Open Editor AI Settings
In the Geode menu, tap the gear icon next to Editor AI.
3
Set AI Provider → openrouter
Select openrouter as the AI provider.
4
Enter model and API key
Enter any OpenRouter model ID (e.g., google/gemini-2.5-flash, anthropic/claude-sonnet-4) in the OpenRouter Model field. Paste your key into the OpenRouter API Key field.
Affordable high-quality AI. DeepSeek offers powerful models at competitive prices. Their deepseek-chat model is great for level generation, while deepseek-reasoner excels at complex layouts.
1
Get a DeepSeek API key
Visit platform.deepseek.com, create an account, and generate an API key.
2
Open Editor AI Settings
In the Geode menu, tap the gear icon next to Editor AI.
3
Set AI Provider → deepseek
Select deepseek as the AI provider.
4
Choose model and enter API key
Select a model: deepseek-chat (default, fast), deepseek-reasoner (slower but thorough), or deepseek-coder. Paste your key into the DeepSeek API Key field.
Fully local GUI option. LM Studio provides an easy-to-use desktop app for running local LLMs with a built-in server. No command line required.
1
Install LM Studio
Download and install LM Studio for your platform (Windows, macOS, or Linux).
2
Download and load a model
In LM Studio, search for a model (e.g., Qwen2.5, DeepSeek, or Llama) and download it. Load the model by clicking the chat button.
3
Start the local server
Click the Server tab in LM Studio and click Start Server. The default URL is http://localhost:1234.
4
Open Editor AI Settings
In the Geode menu, tap the gear icon next to Editor AI.
5
Set AI Provider → lm-studio
Select lm-studio as the AI provider. The default URL http://localhost:1234 should work if you didn't change it in LM Studio.
6
Generate!
Open the editor, press the AI button, enter a prompt, and press Generate. Keep LM Studio running in the background.
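LM Studio's server speaks the OpenAI-compatible chat API, so you can sanity-check it outside the game. A minimal sketch against the /v1/chat/completions endpoint — the model name is an example, use whichever model you loaded:

```python
import json
import urllib.request

LM_STUDIO_URL = "http://localhost:1234"  # LM Studio's default server port

def build_chat_request(model: str, prompt: str) -> dict:
    # OpenAI-compatible chat payload: a model name plus a list of
    # role/content messages.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(model: str, prompt: str) -> str:
    """Send one chat request to the LM Studio local server."""
    body = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{LM_STUDIO_URL}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# chat("qwen2.5-7b-instruct", "a short easy cube section")
```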
Lightweight local option. llama.cpp is a fast, lightweight C++ implementation for running GGUF models locally. Great for older hardware or when you want minimal resource usage.
1
Install llama.cpp
Download llama.cpp from GitHub or install via your package manager. Build or download a pre-built binary for your platform.
2
Download a GGUF model
Download a GGUF format model from HuggingFace (e.g., Qwen2.5, Llama-3, or DeepSeek in GGUF format). Smaller models (3B-8B) work best for speed.
3
Start llama-server
Run the llama-server with your model:
./llama-server -m your-model.gguf --port 8080
4
Open Editor AI Settings
In the Geode menu, tap the gear icon next to Editor AI.
5
Set AI Provider → llama-cpp
Select llama-cpp as the AI provider. The default URL is http://localhost:8080 — change it if you used a different port.
6
Generate!
Open the editor, press the AI button, enter a prompt, and press Generate. Keep llama-server running in the background.
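If generation fails, check whether llama-server is actually ready. Recent llama-server builds expose a /health endpoint that answers with {"status": "ok"} once the model is loaded — a small sketch that treats anything else (including a server that isn't running) as not ready:

```python
import json
import urllib.request

def server_ready(base_url: str = "http://localhost:8080") -> bool:
    """Return True once llama-server reports it is ready to serve."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=2) as resp:
            return json.loads(resp.read()).get("status") == "ok"
    except OSError:
        # Connection refused, timeout, or an error status while the
        # model is still loading.
        return False
```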

Free AI, powered by the community

Platinum is a distributed computing network built on Ollama. People with spare GPUs donate compute; everyone uses it for free.

Special thanks to VLTGG for letting me use their servers!
Active Workers
Queued
Processing
🎮 I want to use Platinum
Generate levels for free using the community network. No account or payment needed.
1
In Editor AI settings, set AI Provider → ollama
2
Enable the Use Platinum toggle — this automatically points to the community proxy
3
Select a model from the Ollama model dropdown:
entity12208/editorai:qwen or
entity12208/editorai:deepseek
4
Open the editor, press AI, enter a prompt. No key needed.
💻 I want to donate compute
Share your GPU or CPU with the network. Run the worker in the background to help others generate for free.
1
Install Ollama and Python 3.9+
2
Pull any model(s) you want to serve:
ollama pull <model-name>
3
Clone the Platinum repo:
git clone https://github.com/entity12208/EditorAI-Platinum
4
Install dependencies:
pip install -r requirements.txt
5
Start the worker:
python worker/client.py
It registers automatically and starts accepting jobs.
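Conceptually, the worker is a simple poll loop: fetch a job from the coordinator, run it through local Ollama, post the result back. The sketch below is illustrative only — the job fields and the injected callables are hypothetical stand-ins for the real HTTP calls in worker/client.py, not the actual Platinum protocol:

```python
def run_worker_once(fetch_job, run_ollama, post_result):
    """Process one job, Platinum-worker style.

    The three callables stand in for HTTP calls to the coordinator
    and the local Ollama server; their shapes are hypothetical.
    """
    job = fetch_job()             # poll the coordinator for work
    if job is None:
        return False              # nothing queued right now
    output = run_ollama(job["model"], job["prompt"])  # local inference
    post_result(job["id"], output)                    # return the result
    return True

# Example wiring with in-memory stubs:
jobs = [{"id": 1, "model": "entity12208/editorai:qwen", "prompt": "easy level"}]
results = {}
run_worker_once(
    fetch_job=lambda: jobs.pop() if jobs else None,
    run_ollama=lambda model, prompt: f"objects for: {prompt}",
    post_result=results.__setitem__,
)
print(results)  # {1: 'objects for: easy level'}
```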
View Platinum on GitHub →
⚙️ Network Architecture
💻
Worker
Donor machines running worker/client.py. Poll the coordinator for jobs, process them with local Ollama, and return results.
🎛️
Coordinator
The brain at ollama-coordinator.onrender.com. Manages the worker registry, routes requests, and handles timeouts.
🌐
Proxy
Public Ollama-compatible endpoint at ollama-proxy-sh88.onrender.com. This is the URL you put in Editor AI's Ollama Server URL setting.

Common questions

Is Editor AI free?
Yes, with options. Four options are completely free: EditorAI Platinum (community network), local Ollama, LM Studio, and llama.cpp. Paid cloud providers: DeepSeek, Claude, OpenAI, OpenRouter, and Mistral AI require API keys with funded accounts. HuggingFace has a free tier with varying quality. Gemini also has a free tier, but we do not recommend it due to tight limits and lower-quality results.
Which AI provider should I use?
Free options: Start with Platinum for zero-setup access, local Ollama if you have a GPU with 6+ GB VRAM, LM Studio for a user-friendly GUI, or llama.cpp for lightweight local inference on older hardware.

Cloud options: DeepSeek offers excellent quality at affordable prices. Claude offers the highest quality. OpenRouter gives access to 300+ models with one API key. Mistral AI (Ministral) is cost-effective. HuggingFace has a free tier. OpenAI (GPT-4o) works well but is pricier. We do not recommend Gemini due to tight rate limits.
Why does the first Platinum request take so long?
The coordinator and proxy are hosted on Render's free tier, which spins them down after inactivity. The first request wakes them up — this can take 15–60 seconds. Subsequent requests in the same session are much faster. If this is a problem, use local Ollama instead.
The mod isn't on the Geode index — do I have to install manually?
Yes, for now. Download entity12208.edit-ai.geode from the GitHub Releases page, then use the Manual Installation button in the bottom-left of the Geode menu. The full 6-step guide is in the Install section above.
How do I select the Ollama model?
As of v2.1.9, the Ollama model is a dropdown — no typing required. Choose from entity12208/editorai:qwen, entity12208/editorai:deepseek, or the other available options in the mod settings. Models are auto-detected from Platinum or your local Ollama server.
Where do I enter my API key?
As of v2.1.9, API keys are entered directly in the mod settings — open the Geode menu, tap the gear icon next to Editor AI, and you'll find a dedicated API key field for each provider. The old lock-icon popup has been removed.
My objects are generating underground. How do I fix this?
This has been fixed in v2.1.9. The fix clamps all Y coordinates to a minimum of 0 before placing objects, shifts entire levels up if needed, and the AI system prompt explicitly instructs the AI never to use negative Y values.
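The clamp described above amounts to the following. A minimal sketch — objects are simplified to (x, y) pairs here; the mod operates on real GD objects:

```python
def clamp_to_ground(objects, min_y=0):
    """Keep every object at or above ground level.

    If the lowest object sits below ground, the whole level is
    shifted up first (preserving relative layout); any stragglers
    are then clamped to min_y.
    """
    if not objects:
        return []
    lowest = min(y for _, y in objects)
    shift = max(0, min_y - lowest)          # lift the level if needed
    return [(x, max(min_y, y + shift)) for x, y in objects]

print(clamp_to_ground([(0, -30), (30, 45)]))  # [(0, 0), (30, 75)]
```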
How many objects can be generated at once?
The default cap is 500, configurable from 10 up to 1,000,000 in settings. The spawn speed (objects placed per tick) is also configurable. Very high object counts will take time to spawn and may cause brief lag; use the settings to tune the balance for your device.
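Progressive spawning is essentially chunking: the generated object list is split into fixed-size batches and one batch is placed per tick. A minimal sketch, where the batch size stands in for the spawn-speed setting:

```python
def batches(objects, per_tick):
    """Split the object list into per-tick batches."""
    return [objects[i:i + per_tick] for i in range(0, len(objects), per_tick)]

# 500 objects placed 50 per tick spawn over 10 ticks:
print(len(batches(list(range(500)), 50)))  # 10
```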
What GD version does Editor AI support?
Geometry Dash 2.2081 is required. The mod runs on Windows, macOS, Android, and iOS. You also need Geode 5.6.1+ and the geode.node-ids dependency (auto-installed). Note: Editor AI is incompatible with the alk.editor-collab mod (Editor Collab).
Can I donate my GPU to the Platinum network?
Yes! See the Donate compute card in the Platinum section. You need Ollama, Python 3.9+, and the worker script from the EditorAI-Platinum repository. Run python worker/client.py and your machine will automatically register and start accepting jobs.
What are Advanced Features? How do triggers work?
Enable Advanced Features in mod settings to unlock 15+ dynamic capabilities.
Triggers: Color (smooth color transitions), Move (animated platforms), Rotate (spinning groups), Alpha (fade in/out), Toggle (show/hide), Pulse (color flash), Spawn (chain triggers), Stop (cancel animations).
Portals: Speed portals (0.5x to 4x).
Visibility: Show/hide player icon and ghost trail.
Object features: Group ID assignment (for trigger targeting), color channels (coordinated color changes), multi-activate (orbs/triggers fire on every touch).
These work best with smarter models like Claude, OpenRouter, or the custom Ollama models.
Where can I get support or suggest features?
Join the Editor AI Discord for support, ideas, and community. You can also open issues on GitHub.