Describe your Geometry Dash level in plain text. Editor AI builds it in the editor — objects, layout, difficulty, and all.
Write a prompt. Pick an AI. Get a level. Tweak from there.
Platinum is a peer-to-peer GPU/CPU donation network. Community members run the worker client on their machines and contribute compute. Anyone can use it completely free — no API key, no account, no payment.
Editor AI is not yet on the Geode index. Install it manually from GitHub Releases.
1. Editor AI depends on the `geode.node-ids` mod (auto-installed).
2. Download `entity12208.edit-ai.geode` from the GitHub Releases page.
3. In the Geode menu, press the Manual Installation button, browse to `entity12208.edit-ai.geode`, and select it.

Each provider works differently. Start with Platinum or local Ollama for the best free experience.
- `ollama pull entity12208/editorai:qwen` — more powerful
- `ollama pull entity12208/editorai:deepseek` — more creative
- **Ollama (local):** Make sure the Ollama server is running (`ollama serve`); the mod connects to the default URL (`http://localhost:11434`).
- **Claude:** `claude-sonnet-4-6` (balanced, default) or `claude-opus-4-6` (highest quality, slower). Paste your key (`sk-ant-`) into the Claude API Key field in mod settings. It's stored securely in Geode's save system.
- **Mistral:** `ministral-3b-latest` (fastest), `ministral-8b-latest` (default, balanced), `mistral-small-latest`, `mistral-medium-latest`, or `mistral-large-latest` (most capable).
- **HuggingFace:** `meta-llama/Llama-3.1-8B-Instruct` works well. You can use any model that supports the Inference API chat endpoint. Paste your key (`hf_`) into the HuggingFace API Key field in mod settings.
- **OpenAI:** `gpt-4o` (default, recommended) or `gpt-4.1-mini` (faster, cheaper). Paste your `sk-...` key into the OpenAI API Key field in mod settings.
- **Gemini:** `gemini-2.5-flash` (default) or `gemini-2.5-pro` (more capable, may hit limits faster).
- **OpenRouter:** Enter a model ID (e.g. `google/gemini-2.5-flash`, `anthropic/claude-sonnet-4`) in the OpenRouter Model field. Paste your key into the OpenRouter API Key field.
- **DeepSeek:** `deepseek-chat` (default, fast), `deepseek-reasoner` (slower but thorough), or `deepseek-coder`. Paste your key into the DeepSeek API Key field. The `deepseek-chat` model is great for level generation, while `deepseek-reasoner` excels at complex layouts.
- **LM Studio:** `http://localhost:1234` should work if you didn't change the port in LM Studio.
- **llama.cpp:** Start the server with `./llama-server -m your-model.gguf --port 8080`, then use `http://localhost:8080` — change it if you used a different port.
- **Platinum:** Platinum is a distributed computing network built on Ollama. People with spare GPUs donate compute; everyone uses it for free.
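Several of the hosted providers above, as well as local servers like LM Studio and llama.cpp's `llama-server`, implement the widely used OpenAI-compatible chat-completions endpoint. As an illustration only (this is not the mod's actual networking code, and the system prompt here is hypothetical), a request to such a server could be assembled like this:

```python
import json

def build_chat_request(model: str, prompt: str, server_url: str):
    """Build an OpenAI-compatible chat-completions request.

    The /v1/chat/completions path and payload shape follow the common
    OpenAI chat API convention that LM Studio and llama-server also
    serve; the system prompt below is a placeholder, not Editor AI's.
    """
    url = server_url.rstrip("/") + "/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a Geometry Dash level designer."},
            {"role": "user", "content": prompt},
        ],
    }
    return url, json.dumps(payload)

# Example: target a local LM Studio server on its default port.
url, body = build_chat_request("gpt-4o", "A short cube section", "http://localhost:1234/")
```

The same payload works against any of the compatible backends; only the base URL and API key handling change per provider.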
`entity12208/editorai:qwen` or `entity12208/editorai:deepseek`
1. Install Ollama and pull a model: `ollama pull <model-name>`
2. Clone the repo: `git clone https://github.com/entity12208/EditorAI-Platinum`
3. Install dependencies: `pip install -r requirements.txt`
4. Start the worker: `python worker/client.py`

- **Worker:** `worker/client.py`. Polls the coordinator for jobs, processes them with local Ollama, and returns results.
- **Coordinator:** `ollama-coordinator.onrender.com`. Manages the worker registry, routes requests, and handles timeouts.
- **Proxy:** `ollama-proxy-sh88.onrender.com`. This is the URL you put in Editor AI's Ollama Server URL setting.

Download `entity12208.edit-ai.geode` from the GitHub Releases page, then use the Manual Installation button in the bottom-left of the Geode menu. The full 6-step guide is in the Install section above.
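The worker cycle described above (poll the coordinator for a job, process it with local Ollama, return the result) can be sketched as a single-iteration function. This is a hedged sketch, not the real `worker/client.py`: the `/get_job` endpoint name and the job/result field names are assumptions made for illustration.

```python
import json
import urllib.request

COORDINATOR = "https://ollama-coordinator.onrender.com"  # coordinator from the section above

def fetch_job():
    """Ask the coordinator for a pending job.

    The /get_job path is hypothetical; the real coordinator's
    API may use different routes and payloads.
    """
    with urllib.request.urlopen(COORDINATOR + "/get_job", timeout=30) as resp:
        data = json.load(resp)
    return data or None

def run_once(get_job, run_model, submit_result):
    """One worker iteration: poll, process with local Ollama, report back.

    Dependencies are injected so the loop logic is testable without
    a network; a real loop would call this repeatedly with sleeps.
    """
    job = get_job()
    if job is None:
        return False  # no work available this poll
    output = run_model(job["model"], job["prompt"])
    submit_result({"job_id": job["id"], "output": output})
    return True
```

Injecting `get_job`, `run_model`, and `submit_result` keeps the control flow separate from HTTP details, which is a common pattern for this kind of poll-process-report worker.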
Choose `entity12208/editorai:qwen`, `entity12208/editorai:deepseek`, or one of the other available options in the mod settings. Models are auto-detected from Platinum or your local Ollama server.
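Against a local Ollama server, auto-detection of installed models can be done with Ollama's `GET /api/tags` endpoint, which returns a JSON list of local models. The sketch below only parses such a response (the HTTP call itself is omitted), and the sample body is an abridged illustration of the response shape:

```python
import json

def parse_ollama_tags(body: str) -> list:
    """Extract model names from an Ollama GET /api/tags response,
    which lists the models installed on that server."""
    return [m["name"] for m in json.loads(body).get("models", [])]

# Abridged example of what a local server might return:
sample = ('{"models": [{"name": "entity12208/editorai:qwen"},'
          ' {"name": "entity12208/editorai:deepseek"}]}')
```

A mod or script can show these names directly in a model picker, falling back to an empty list when the server is unreachable.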
Editor AI requires the `geode.node-ids` dependency (auto-installed). Note: Editor AI is incompatible with the `alk.editor-collab` mod (Editor Collab).
Run `python worker/client.py` and your machine will automatically register and start accepting jobs.
Source code, community, and releases.