Vibe coding sounds reckless: you let an AI drive a terminal while you watch from above. AITerm makes it the opposite of reckless — every tool call is on screen, a pause button stops the conversation mid-flight, and the history of what your AI did is searchable, replayable, and yours. The black-box version of MCP is not the only version.
TL;DR. Vibe coding doesn't mean blind trust. With AITerm you can let Claude (or any MCP client) pilot your machines while watching every call live, pausing instantly, replaying history, and getting a vibrate-and-notify when your session goes quiet. MCP integration docs →
The problem with MCP as it usually feels
You connect Claude Desktop to your machines via MCP. The model thinks for a moment and runs something. You see the result of that something appear in chat. What ran? What did it touch? What got returned to the model? In most setups the answer is "you trust the chat output and move on". That works until it doesn't.
The shape of the issue isn't MCP itself — it's the lack of an honest UI on top of it. AITerm adds that UI.
Three things you actually see
Live MCP mirror
[Interactive demo: mirror panel with Live and History tabs, listing tool calls — list_sessions, read_session_output, send_to_session — each with machine:session, timestamp, the command sent, and a re-send (↺) button]
Every tool call live — Pause holds them, Re-send replays, history is searchable across machines.
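To make the "searchable across machines" part concrete, here is a minimal sketch of an audit log you can filter by machine, session, or tool. The field names and classes are illustrative, not AITerm's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MirroredCall:
    # Illustrative fields, mirroring what the panel displays per call.
    machine: str
    session: str
    tool: str
    args: dict
    timestamp: datetime

class CallHistory:
    def __init__(self):
        self.calls: list[MirroredCall] = []

    def record(self, call: MirroredCall) -> None:
        self.calls.append(call)

    def search(self, machine=None, session=None, tool=None):
        # Any combination of filters; None means "don't filter on this".
        return [
            c for c in self.calls
            if (machine is None or c.machine == machine)
            and (session is None or c.session == session)
            and (tool is None or c.tool == tool)
        ]
```

The point of the sketch: once every call is a record like this, "what did the AI send at 14:32 yesterday?" is a query, not an act of memory.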
Voice from your phone
[Interactive demo: voice recording in en-US, 0:08 elapsed, transcript "show me the build logs from yesterday"]
Tap, speak, review the transcript before sending. Edit if the recogniser slipped. Web Speech API in the browser — no audio uploaded.
Idle alerts you wanted
[Interactive demo: per-session bell toggles for a Claude Code and an Ollama session, a 60s idle threshold (Shift / double-click to adjust), and a notification: Claude Code idle for 73s on macbook-demo]
Bell per session. When your AI goes quiet past your threshold the tab vibrates and notifies. Only while the tab is open — by design.
Why this isn't blind trust
Vibe coding done right
No hidden tool use. Every MCP call appears in the mirror, including read-only ones. list_machines, read_session_output, remote_grep — all of them.
Pause is instant. One click on Pause and the entire conversation stops. New calls queue. Resume drains the queue in FIFO order. There's no "but the AI was already running".
Audit log persists. Re-send any historical call from the panel. Conversation ended yesterday and you want to know exactly what was sent at 14:32? It's there. Filter by machine, session, tool.
First-use approval per conversation. A new Claude conversation can't pilot anything until you click Approve once on /profile/tokens. Same conversation, all approved. New conversation, fresh approval. The granularity is the conversation, not the token.
Token scope is per machine. Mint a token in /profile/tokens scoped to one machine if that's the right radius.
Connector is open source. The agent on your machine is MIT-licensed, signed, hash-verified per file. Read it before you install it.
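The pause semantics above can be modeled as a gate in front of tool execution: while paused, new calls go into a holding queue instead of running, and resume drains that queue first-in first-out. A minimal sketch, not AITerm's actual code:

```python
from collections import deque

class PauseGate:
    def __init__(self, execute):
        self.execute = execute       # callable that runs one tool call
        self.paused = False
        self.queue: deque = deque()  # FIFO holding area while paused

    def submit(self, call) -> None:
        if self.paused:
            self.queue.append(call)  # held visibly, never run silently
        else:
            self.execute(call)

    def pause(self) -> None:
        self.paused = True

    def resume(self) -> None:
        self.paused = False
        while self.queue:                        # drain in FIFO order
            self.execute(self.queue.popleft())
```

The design choice worth noting: calls submitted while paused are queued, not rejected, so the conversation picks up where it left off instead of erroring out.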
A concrete workflow
Walking through what a vibe-coding session actually looks like in AITerm:
1. You install the connector on your dev box (one curl) and pair it from the dashboard.
2. You start a Claude Code session in the dashboard, in the project directory you want to work on.
3. From Claude Desktop on your laptop you ask Claude to refactor a function. Claude calls list_machines, then list_sessions, then send_to_session with submit_mode="paste". The Mirror Panel in your browser shows each call appearing live.
4. Claude Code on your dev box starts editing. You watch from your phone via the dashboard.
5. You step away. The session goes quiet for 90 seconds. Your phone vibrates and shows a notification: Session idle — 90s. You check, see Claude finished and is waiting on your sign-off.
6. You hit the mic button, say "great, run the tests", edit one word in the transcript, send. Claude runs the tests in the same session.
7. Tests pass. You commit from the same browser tab.
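In MCP terms, the refactor step of that workflow boils down to a short tool-call sequence. Sketched here with a stand-in client — the `client.call` helper and the return shapes are hypothetical; the tool names and submit_mode value come from the workflow above:

```python
def refactor_via_mcp(client) -> None:
    """Discover a machine, pick a session, send one instruction."""
    machines = client.call("list_machines")
    machine = machines[0]["id"]

    sessions = client.call("list_sessions", machine=machine)
    session = sessions[0]["id"]

    # submit_mode="paste" keeps bracketed-paste TUIs like Claude Code
    # from reinterpreting the text as keystrokes.
    client.call(
        "send_to_session",
        machine=machine,
        session=session,
        text="refactor parse_config() to take a Path",
        submit_mode="paste",
    )
```

Every one of those three calls — the read-only discovery ones included — would appear in the Mirror Panel as it happens.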
What stays out of scope on purpose
A few things AITerm explicitly doesn't do, because the trade-off isn't worth it:
No Web Push / Service Worker. Idle notifications only fire while a tab is open. The reason: Service Workers add a state machine that has to keep working across deploys, and Web Push requires a third-party endpoint. The user-experience win wasn't worth the operational debt.
No audio upload. Voice transcripts are produced in the browser via Web Speech API. The audio bytes never reach our servers. If your browser doesn't support it, you fall back to the keyboard input button next to the mic.
No automatic submit-mode for the Claude Code TUI. The hub knows about submit_mode="paste" for bracketed-paste TUIs, but the default is still "enter". We document the pitfall on /docs/mcp rather than guessing with heuristics that might break other TUIs. (Heuristic-based defaulting is on the roadmap.)
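The submit_mode distinction comes down to how bytes hit the terminal. The escape sequences below are the standard bracketed-paste markers; the function itself is an illustrative sketch of the difference, not AITerm's encoder:

```python
# Standard bracketed-paste markers (xterm and compatible terminals).
PASTE_START = "\x1b[200~"
PASTE_END = "\x1b[201~"

def encode_submit(text: str, submit_mode: str = "enter") -> str:
    if submit_mode == "paste":
        # Wrap the text in bracketed-paste markers so a TUI treats it
        # as one literal paste, then send Enter to submit it.
        return f"{PASTE_START}{text}{PASTE_END}\r"
    # Default: raw text plus Enter. Fine for plain shells, but a
    # bracketed-paste TUI may reinterpret newlines inside the text.
    return f"{text}\r"
```

This is why the default catches people: "enter" works everywhere until the text contains something a TUI wants to interpret.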
If you're building MCP integrations
The full tool reference, conversation_id convention, first-use approval flow, and the submit_mode table live at /docs/mcp. Worth a read before you wire AITerm into an MCP-Server config — especially the bracketed-paste section, which trips up a lot of integrations the first time around.
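The first-use approval granularity described earlier — per conversation, not per token — can be modeled as a simple allow-set keyed on conversation_id. A sketch of the rule as documented, not the hub's actual code:

```python
class ApprovalGate:
    """First-use approval: each conversation is blocked until
    it has been approved exactly once."""

    def __init__(self):
        self.approved: set[str] = set()  # approved conversation_ids

    def approve(self, conversation_id: str) -> None:
        # Corresponds to the one-time Approve click on /profile/tokens.
        self.approved.add(conversation_id)

    def allow(self, conversation_id: str) -> bool:
        # Same conversation: all calls allowed. New conversation:
        # blocked until it earns its own approval.
        return conversation_id in self.approved
```

A valid token alone is never enough to pilot a machine; the conversation itself has to have been waved through once.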
Try it
Install the connector on the machine you want to pilot. The dashboard does the rest.
curl -sL https://aiterm.io/install | bash
The connector is Linux-only today; macOS and FreeBSD are on the roadmap.