14 comments:
The best code review improvement I have made to my workflow with Claude is using tuicr (https://tuicr.dev).
It runs locally, YOU review all the code locally, and feed that back to Claude.
Agents reviewing AI code always felt dirty to me, especially when working on production (non-disposable) code.
The video actually convinced me that this might be an interesting tool. I'm going to try it myself for a small one-shot project and see how well it performs.
TUI-based reviews on their own are already interesting. I had never considered it, I guess.
Is there a good way of adding your own rules to the review? I’m always in the market for better review tools, but I also need to check against internal coding standards and expectations.
the irony of multi-agent code review is that the people who would use it are already the ones who care about code quality. the real problem is everyone else just hitting accept on whatever claude spits out without even reading the diff. tooling for review keeps getting better while the average review effort keeps going down.
> Runs against your regular Claude Code subscription (Max plan recommended) — unlike /ultrareview, which charges against your Extra Usage pool.
How expensive is it to run in your experience? In $ or tokens?
Great project! I’ve built something similar, not very clean and polished, but focused on deterministic orchestration of multiple agents via TypeScript, because a coordinating agent was notoriously bad at things such as fetching relevant tickets and other context. One thing I still struggle with, though, is the actual instructions for the review themselves. They are either too vague, leading to superficial or overly broad reviews, or too specific and thus not applicable to different kinds of PRs…
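For illustration, a minimal TypeScript sketch of what deterministic orchestration could look like: plain pipeline code always fetches the diff and linked tickets and fans out to fixed reviewer prompts, so none of it is left to a coordinating agent's judgment. All names here (fetchDiff, fetchLinkedTickets, runAgent) are hypothetical stand-ins, not the commenter's or this project's actual API:

    // Hypothetical stubs: a real pipeline would wire these to the
    // GitHub/Jira APIs and an LLM client of choice.
    interface Ticket { key: string; summary: string; }
    interface Finding { file: string; comment: string; severity: "nit" | "warning" | "blocker"; }

    async function fetchDiff(pr: number): Promise<string> {
      return `diff for PR #${pr}`; // placeholder
    }
    async function fetchLinkedTickets(pr: number): Promise<Ticket[]> {
      return [{ key: "PROJ-123", summary: "example ticket" }]; // placeholder
    }
    async function runAgent(input: { prompt: string; context: object }): Promise<Finding[]> {
      return []; // placeholder for an actual model call
    }

    // Deterministic orchestration: the pipeline, not an agent, decides
    // what context to gather and which reviewers to run.
    async function reviewPullRequest(pr: number): Promise<Finding[]> {
      const diff = await fetchDiff(pr);             // always fetched
      const tickets = await fetchLinkedTickets(pr); // never skipped or hallucinated

      // Fan out to fixed reviewer perspectives in parallel;
      // each gets exactly the same context.
      const perspectives = ["correctness", "security", "style"];
      const results = await Promise.all(
        perspectives.map((p) =>
          runAgent({
            prompt: `Review this diff for ${p} issues only.`,
            context: { diff, tickets },
          })
        )
      );
      return results.flat();
    }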
We seem to be fighting complexity with complexity. Does it really help?
"I pay Claude, to use Claude, to write instructions for Claude, to review code from Claude"
Have we all just given up?
You forgot “to use Claude to write an HN post to promote..”
s/Claude/Intern/g
Holy vibe coding, Batman. This looks like a repository of just a bazillion prompts, of which there are already a million.
Seems like it would create a lot of friction and burn a lot of tokens.
That looks like a fair bit of ceremony for what it does. Is this representative of the output? https://github.com/adamjgmiller/adamsreview/pull/3