There’s a version of marketing work that’s entirely about judgment. What to say. To whom. When. Why this message over that one. These are interesting problems. The kind that make the job worth doing.

And then there’s the rest of it.

The rest is pulling last week’s numbers from the ads dashboard. Checking whether the latest post performed. Writing a brief for an article that should have gone out three weeks ago. Adjusting a bid that didn’t require human thought — it required a decision that any reasonably informed system could make.

Most marketing teams spend most of their time on the rest.

The tool problem

We looked at what the market offered and found tools that made the “rest” faster without making it go away. Better dashboards. Smarter automations. Deeper reporting. All of it still required someone to open the software, read the output, and decide what to do.

The best available outcome was a slightly better version of the same workflow. The person was still in the loop the wrong way — doing the pulling, the checking, the briefing — instead of doing the deciding.

We wanted something different. Not a tool that waited for us. An agent that watched the accounts, found what mattered, prepared a recommendation, and put it in front of us ready to approve. One decision in, one action out.

The design constraint that made everything better

The hardest thing to get right wasn’t the AI. It was the loop.

Most AI-assisted software puts the human at the start: you prompt, it responds. We wanted to invert that. The agent runs continuously. It surfaces what it found. You decide whether it should act.

This required designing every action to exist in three states: the agent can watch and not act, the agent can prepare and wait for approval, or the agent can act within defined limits. Human-in-the-loop as the default. Autonomy as the opt-in.
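The three states above can be sketched as a small state model. This is a minimal illustration, not the product's actual code; the names (`Mode`, `Recommendation`, `dispatch`) are hypothetical and chosen only to show how approval gates the execution path.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    WATCH = auto()    # observe and surface findings, never act
    PROPOSE = auto()  # prepare an action, wait for human approval
    ACT = auto()      # act autonomously within defined limits

@dataclass
class Recommendation:
    action: str
    summary: str
    approved: bool = False  # set by a human in the approval queue

def dispatch(rec: Recommendation, mode: Mode) -> str:
    """Decide what happens to a recommendation under a given mode."""
    if mode is Mode.WATCH:
        return "surfaced"   # visible to the team, nothing executed
    if mode is Mode.PROPOSE:
        # Human-in-the-loop default: unapproved work only queues.
        return "executed" if rec.approved else "queued"
    return "executed"       # ACT: autonomy was explicitly opted in

rec = Recommendation("lower_bid", "CPC above target for three days")
print(dispatch(rec, Mode.PROPOSE))  # queued until someone approves
rec.approved = True
print(dispatch(rec, Mode.PROPOSE))  # executed
```

The point of the sketch is that `PROPOSE` is the default path and `ACT` is a separate, deliberate mode, so nothing executes without either a person's approval or an explicit grant of limited autonomy.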

The constraint made everything better. When you can’t assume the machine will just run, every recommendation has to be good enough that a person would actually approve it. That pressure is useful. It forces specificity. Vague suggestions don’t survive the approval queue.

Why now

The capability gap closed quietly. Models got good enough to reason about marketing performance, not just describe it. Integrations became reliable. The workflows that would have taken a team a year to build took months.

We’re a small team based in Sweden. We built the platform we always wanted to work with. Early access means we onboard teams by hand, watch how they use it, and adjust.

If you’re reading this, you’re early. We take that seriously.