A founder's desk tells you more than a deck ever will. I spent Thursday at theirs — here is what I saw, and what it changed.
The office is a floor-through on Van Brunt with one good window and four whiteboards. I got there at ten; Priya was already on a call with a design partner, pacing. The codebase was open on the monitor behind her, and I could see, over her shoulder, that someone had checked in a test suite that morning with 412 new cases. That's the kind of detail you only get by showing up.
ticket-rs is building an orchestration layer for coding agents — the thing that lives between a developer's editor and a fleet of LLM-driven workers, deciding which tickets get picked up, what context they see, and when to escalate to a human. We led the seed in October. This visit was my first since the round closed.
Three things had changed since I last walked out of that room. The headcount (six, up from three). The product (a CLI had become a CLI and a VS Code extension, and there was a rough web dashboard I hadn't seen before). And the shape of the conversation — Priya was no longer pitching me on what the thing could be; she was walking me through what customers were doing with it, which is always the more interesting conversation.
That last change is the one I came to see.
A run, live.
Priya ran me through a real ticket from one of their design partners — a mid-stage infra company that had given them a sandboxed copy of their monorepo. She pulled up a failing test, ran tr dispatch, and we watched the agent pick it up, draft a patch, run the suite locally, notice a flaky subsystem, and — this is the part I liked — stop and ask instead of guessing. The escalation landed in a Slack thread, was resolved by a human in under a minute, and the agent carried on.
What made this demo different from the six other agent demos I saw last month is that the interesting failure was designed in. The agent knew which tests to trust. It knew which subsystems had a history of flake and which didn't — a signal ticket-rs builds from repo history, not from the model. The patch itself was unremarkable. The decision not to ship it was the product.
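The flake signal Priya described comes from repo history, not from the model. One plausible way to derive such a signal — a minimal sketch, with a hypothetical history format and function names that are mine, not ticket-rs's — is to score each test by how often its outcome flips across consecutive runs:

```python
from collections import defaultdict

def flake_scores(history):
    """history: list of (test_name, passed) tuples, oldest first.

    Scores each test by the fraction of consecutive runs where its
    outcome flipped (pass->fail or fail->pass). A test that always
    passes or always fails scores 0.0; one that alternates every
    run scores 1.0.
    """
    runs = defaultdict(list)
    for name, passed in history:
        runs[name].append(passed)
    scores = {}
    for name, outcomes in runs.items():
        if len(outcomes) < 2:
            scores[name] = 0.0
            continue
        flips = sum(a != b for a, b in zip(outcomes, outcomes[1:]))
        scores[name] = flips / (len(outcomes) - 1)
    return scores

# Hypothetical CI history: one stable test, one alternating test.
history = [
    ("test_auth", True), ("test_auth", True), ("test_auth", True),
    ("test_net_retry", True), ("test_net_retry", False),
    ("test_net_retry", True), ("test_net_retry", False),
]
scores = flake_scores(history)
# test_auth never flips (0.0); test_net_retry flips on every
# consecutive pair (1.0).
```

The point of the demo is that a score like this is a property of the repo, computable offline, so the agent's decision to distrust a failure doesn't depend on the model having an opinion about it.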
The quiet part, out loud.
I asked Priya what she was most worried about. I expected the usual — hiring, runway, the shape of the Series A. Instead she said this:
"The model is the easy part. The adjudicator is the hard part. That's where we'll be spending the next year."
— Priya Shankar, CEO · ticket-rs

The adjudicator, in their architecture, is the piece that decides whether to ship a patch, escalate, or retry. It's not a model; it's a small rule system informed by a larger eval harness that scores past runs. It is, in the language of our last research note, a harness. And it matches almost exactly the thesis we wrote when we underwrote the round — which is a good feeling but also a dangerous one, so I pushed back for a half-hour on why the adjudicator couldn't be a prompt. She was patient and correct.
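To make the ship/escalate/retry distinction concrete: here is a deliberately small rule system in the spirit of what Priya described. This is my sketch, not their code — the function, its parameters, and the 0.5 cutoff are all illustrative; in their telling, thresholds like these would be tuned by the eval harness scoring past runs, not hand-picked.

```python
def adjudicate(suite_passed, failing_tests, flake_score, attempts,
               max_retries=2):
    """Decide the fate of an agent-drafted patch.

    flake_score: callable mapping a test name to a flakiness score
    in [0, 1], derived from repo history. All names and cutoffs are
    hypothetical.
    """
    if suite_passed:
        return "ship"
    # Every failure sits in a subsystem with a history of flake:
    # don't let a retry loop guess its way through; ask a human.
    if failing_tests and all(flake_score(t) > 0.5 for t in failing_tests):
        return "escalate"
    # Real failures get a bounded number of fresh attempts.
    if attempts < max_retries:
        return "retry"
    return "escalate"

# Example: a failing run where the only failure is a known-flaky test.
decision = adjudicate(
    suite_passed=False,
    failing_tests=["test_net_retry"],
    flake_score=lambda t: 0.9,
    attempts=0,
)
# decision == "escalate": stop and ask rather than guess.
```

Even this toy version shows why it can't be a prompt: the interesting branch fires on a signal the model never sees, and the whole thing is auditable after the fact.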
Three small revisions.
None of these are huge revisions on their own. Together, they change how we plan to be useful to them over the next six months. I'm going to spend more time with their platform-team buyers — I owe them three intros by Friday — and less time worrying about a horizontal GTM motion that, it turns out, isn't where they want to be.
We stayed an extra hour past what I'd planned. I took the F home and wrote most of this on the train. The codebase has 412 more tests today than it did on Monday, which is the sort of detail you only get by showing up.