Vibe coding

Full vibe coding with full control

Vibe coding sounds reckless: you hand an AI the keys to a terminal and watch from above. AITerm makes it the opposite of reckless: every tool call is on screen, a pause button stops the conversation mid-flight, and the history of what your AI did is searchable, replayable, and yours. The black-box version of MCP is not the only version.

TL;DR. Vibe coding doesn't mean blind trust. With AITerm you can let Claude (or any MCP client) pilot your machines while watching every call live, pausing instantly, replaying history, and getting a vibrate-and-notify when your session goes quiet. MCP integration docs →

The problem with MCP as it usually feels

You connect Claude Desktop to your machines via MCP. The model thinks for a moment and runs something. You see the result of that something appear in chat. What ran? What did it touch? What got returned to the model? In most setups the answer is "you trust the chat output and move on". That works until it doesn't.

The shape of the issue isn't MCP itself — it's the lack of an honest UI on top of it. AITerm adds that UI.

Three things you actually see

Live MCP mirror

Every tool call appears live: Pause holds them, Re-send replays them, and history is searchable across machines.
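To make the mirror concrete, here is a minimal sketch of what a mirrored tool-call record and a history search might look like. The `ToolCallEvent` shape and `search_history` helper are assumptions for illustration, not AITerm's actual schema; only the tool names come from this article.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ToolCallEvent:
    """Hypothetical shape of one mirrored MCP tool call."""
    machine: str
    session: str
    tool: str            # e.g. "send_to_session"
    args: dict
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def search_history(events, needle):
    """Case-insensitive match against the tool name and stringified args."""
    needle = needle.lower()
    return [e for e in events
            if needle in e.tool.lower() or needle in str(e.args).lower()]

# Two calls as they might appear in the mirror, newest last.
history = [
    ToolCallEvent("dev-box", "s1", "list_sessions", {}),
    ToolCallEvent("dev-box", "s1", "send_to_session", {"text": "pytest -q"}),
]
matches = search_history(history, "pytest")
```

Searching "pytest" here returns only the `send_to_session` call, which is the point: history is a structured log you can query, not a chat transcript you scroll.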

Voice from your phone

Tap, speak, review the transcript before sending. Edit if the recogniser slipped. Web Speech API in the browser — no audio uploaded.

Idle alerts you wanted

Bell per session. When your AI goes quiet past your threshold the tab vibrates and notifies. Only while the tab is open — by design.
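The idle bell boils down to simple threshold logic: track when the session last produced output, fire once when the quiet period exceeds your threshold, and re-arm on new output. This `IdleWatcher` class is a sketch of that logic under those assumptions, not AITerm's implementation.

```python
import time

class IdleWatcher:
    """Fires `alert(idle_seconds)` once the session has produced no output
    for longer than `threshold_s`. New output re-arms the bell."""

    def __init__(self, threshold_s, alert, now=time.monotonic):
        self.threshold_s = threshold_s
        self.alert = alert
        self.now = now                    # injectable clock, eases testing
        self.last_output = now()
        self.fired = False

    def on_output(self):
        """Call whenever the session emits output."""
        self.last_output = self.now()
        self.fired = False                # re-arm

    def tick(self):
        """Call periodically; fires the alert at most once per quiet period."""
        idle = self.now() - self.last_output
        if idle >= self.threshold_s and not self.fired:
            self.fired = True
            self.alert(idle)
```

The one-shot `fired` flag is why you get a single vibration per quiet period rather than a nagging repeat every tick.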

Why this isn't blind trust

Vibe coding done right means the model does the typing while you keep the oversight. Every tool call surfaces on screen as it happens, Pause holds the conversation mid-flight, and the full history stays searchable and replayable across machines. Nothing the model does on your behalf happens out of view.

A concrete workflow

Walking through what a vibe-coding session actually looks like in AITerm:

  1. You install the connector on your dev box (one curl command) and pair it from the dashboard.
  2. You start a Claude Code session in the dashboard, in the project directory you want to work on.
  3. From Claude Desktop on your laptop you ask Claude to refactor a function. Claude calls list_machines, then list_sessions, then send_to_session with submit_mode="paste". The Mirror Panel in your browser shows each call appearing live.
  4. Claude Code on your dev box starts editing. You watch from your phone via the dashboard.
  5. You step away. The session goes quiet for 90 seconds. Your phone vibrates and shows a notification: Session idle — 90s. You check, see Claude finished and is waiting on your sign-off.
  6. You hit the mic button, say "great, run the tests", edit one word in the transcript, send. Claude runs the tests in the same session.
  7. Tests pass. You commit from the same browser tab.
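The MCP side of steps 3 and 6 is just an ordered sequence of tool calls. The sketch below records that sequence with a stand-in client; `RecordingClient` and its `call` method are hypothetical, while the tool names (`list_machines`, `list_sessions`, `send_to_session`) and the `submit_mode` parameter come from the workflow above.

```python
class RecordingClient:
    """Stand-in for an MCP client; records each tool call in order.
    A real client would dispatch to the server and return its result."""

    def __init__(self):
        self.calls = []

    def call(self, tool, **args):
        self.calls.append((tool, args))
        return {}

client = RecordingClient()

# Step 3 of the workflow: discover, then drive the session.
client.call("list_machines")
client.call("list_sessions", machine="dev-box")
client.call("send_to_session",
            machine="dev-box",
            session="claude-code",
            text="refactor the function as discussed",
            submit_mode="paste")
```

Every one of these calls would appear live in the Mirror Panel, which is what lets you watch step 4 happen from your phone.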

What stays out of scope on purpose

A few things AITerm explicitly doesn't do, because the trade-off isn't worth it:

  1. No server-side speech recognition. Voice runs on the Web Speech API in your browser; no audio is uploaded.
  2. No background idle monitoring. The vibrate-and-notify fires only while the tab is open.

If you're building MCP integrations

The full tool reference, the conversation_id convention, the first-use approval flow, and the submit_mode table live at /docs/mcp. Worth a read before you wire AITerm into an MCP server config — especially the bracketed-paste section, which trips up a lot of integrations the first time around.
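For context on why bracketed paste matters: terminals with bracketed paste mode enabled wrap pasted text in the standard `ESC[200~` / `ESC[201~` delimiters, so the receiving program (a shell, an editor, a Claude Code session) can treat embedded newlines as part of the paste rather than as submit keystrokes. This is generic terminal behaviour, not AITerm-specific; the helper below is an illustrative sketch.

```python
# Standard bracketed-paste delimiters (xterm and compatibles).
PASTE_START = "\x1b[200~"
PASTE_END = "\x1b[201~"

def wrap_for_bracketed_paste(text: str) -> str:
    """Wrap text so a bracketed-paste-aware program treats it as one paste.
    Without this, each newline in a multi-line payload may submit early."""
    return f"{PASTE_START}{text}{PASTE_END}"

payload = wrap_for_bracketed_paste("line one\nline two")
```

Sending multi-line text without these delimiters is the classic first-time integration bug: the target program sees the first newline and executes a half-pasted command.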

Try it

Install the connector on the machine you want to pilot. The dashboard does the rest.

curl -sL https://aiterm.io/install | bash

The connector is Linux-only today; macOS and FreeBSD are on the roadmap.


Create free account →