55 comments:
> In short: the implementation was performed in a very similar way to how a human programmer would do it, and not outputting a complete implementation from scratch “uncompressing” it from the weights.
> Instead, different classes of instructions were implemented incrementally, and there were bugs that were fixed…
Not sure the author fully grasps how and why LLM agents work this way. There’s a leap of logic here: the agent runs in a loop where command outputs get fed back as context for further token generation, which is what produces the incremental, human-like process he’s observing. It’s still that “decompression” from the weights, still the LLM’s unique way of extracting and blending patterns from training data, that’s doing the actual work. The agentic scaffolding just lets it happen in many small steps against real feedback instead of all at once. So the novel output is real, but he’s crediting the wrong thing for it.
There isn't any attempt to falsify the "clean room" claim in the article - a rational approach would be to not provide any documents about the Z80 and the Spectrum, and just ask it to one-shot an emulator and compare the outputs...
If the one-shot output resembles anything working (and I am betting it will), then obviously this isn't clean room at all.
Even without internet access, probably everything there is to say about Z80/Speccy emulators was already in its training set.
Author just trusts the agent not to use the internet b/c he wrote it so in the instructions, which should tell you all you need to know. It's great he managed to prompt it w/ the right specification for writing yet another emulator, but I don't think he understands how LLMs actually work, so most of the commentary on what's going on with the "psychology" of the LLM should be ignored.
You didn't read the full article. The last paragraph talks about this specifically.
In the last paragraph you handwave that all the Z80 and ZX Spectrum documentation is likely already in the model anyway... Choosing not to provide the documents/websites might then require more prompting to finish the emulator, but the knowledge is there. You can't clean-room with a large LLM. That's a delusion!
I grew up with the Spectrum, and wrote a CP/M emulator a while back. I'd be curious to see how complete it would get.
I struggled a lot with some complex software, which worked on some emulators and failed on others (and mine).
For example one bug I had, which is still outstanding, relates to the Hisoft C compiler:
https://github.com/skx/cpmulator/issues/250
But I see that my cpm-dist repository is referenced in the download script so that made me happy!
It's great to see people still using CP/M, writing software for it, and sharing the knowledge. Though I do think the choice to implement the CCP in C, rather than using a genuine one, is an interesting one, and a bit of a cheat. It means that you cannot use "SUBMIT" and other common-place binaries/utilities.
Thank you for your work about CP/M, Steve!
So what you're saying is that it's not just the machine-readable documentation built over decades of the officially undocumented behavior of Z80 opcodes—often provided under restrictive licenses—it's also the "known techniques and patterns" of emulator code—often provided under restrictive licenses.
Great project and write-up. I wonder whether most of those "hints" are really needed, though, as you are already using Claude Code. Aren't things like "simple" and "clean" assumed to be part of its system prompt already (individual documentation style etc. can't be, of course)? While they were useful when using a general LLM for coding, I would think they are now part of the overall setup of any coding agent. These days I run more into problems with language and API version drift, even when specified beforehand.
All the design hints required for this or any other type of agentic "set it and forget it" development are interesting to me, because they enable the result but also lock in less-than-desirable results that exhibit a miss, like simulating a 2 MHz clock.
What if Agents were hip enough to recognize that they have navigated into a specialized area and need additional hinting? "I'm set up for CP/M development, but what I really need now is Z80 memory management technique. Let me swap my tool head for the low-level Z80 unit..."
We can throw RAG on the pile and hope the context window includes the relevant tokens, but what if there were pointers instead?
I asked Gemini to reproduce the poem "The Road Not Taken". I got it in full (as far as I can tell without Gemini fetching anything from the web). I didn't provide any verse of the poem so I guess that counts as a clean room "implementation"?
No Carmack or Stallman. Just the right person at the right time.
The problem is that it will have been trained on multiple open source spectrum emulators. Even "don't access the internet" isn't going to help much if it can parrot someone else's emulator verbatim just from training.
Maybe a more sensible challenge would be to describe a system that hasn't been emulated before (or had an emulator source released publicly, as far as you can tell from the internet) and then try it.
For fun, try using obscure CPUs, giving it the same level of specification as you needed for this; or even try an imagined Z80-like, but swap the order of the bits in the encodings and the orderings of the ALU instructions, and see how it manages.
I think you are onto something here.
I tried creating an emulator for a CPU that is very well known but lacks working open source emulators.
Claude, Codex and Gemini were very good at starting something that looked great, but all failed to reach a working product. They all ended up in a loop where fixing one issue caused something else to break, and could never get out of it.
When they get stuck, I find that adding debug output the model can access helps. Sometimes you also need to add something to the prompt to tell it to avoid a particular approach at a given point.
Interesting. When I had Claude write a language transpiler it always checked that tests passed before declaring a feature ready for PR. There was never a case where it gave up on achieving that goal.
Please tell me what CPU it is. I would give it a try. I strongly doubt that a very well documented CPU can't be emulated by writing the code with modern AIs.
> try using obscure CPUs
Better still invent a CPU instruction set, and get it to write an emulator for that instruction set in C.
Then invent a C-like HLL and get it to write a compiler from your HLL to your instruction set.
> try using obscure CPUs
I tried asking Gemini and ChatGPT, "What opcode has the value 0x3c on the Intel 8048?"
They were both wrong. The datasheet with the correct encodings is easily found online. And there are several correct open source emulators, eg MAME.
Even on a specific STM microcontroller (STM32G031), the LLM tools invent non-existent registers and then apologize when I point them out. And conversely, they write code for an entire algorithm (CRC, for example) when hardware support already exists on the chip.
Think of "What opcode has the value 0x3c on the Intel 8048" as a PNG image, but the LLM as a very compressed JPEG: it will only give a very approximate answer. But you can give it explicit tools to look up things.
If the LLM doesn't have a websearch tool your test doesn't make any sense.
An LLM by itself is like a lossy image of all text in the internet.
Just some more parameters, and it would overfit that specific PDF too.
I thought this part of the write-up was interesting:
"This is, I think, in contradiction with the idea that LLMs are memorizing the whole training set and uncompress what they have seen. LLMs can memorize certain over-represented documents and code, but while they can extract such verbatim parts of the code if prompted to do so, they don’t have a copy of everything they saw during the training set, nor they spontaneously emit copies of already seen code, in their normal operation."
Can't things basically get baked into the weights when trained on enough iterations, and isn't this the basis for a lot of plagiarism issues we saw with regards to code and literature? It seems like this is maybe downplaying the unattributed use of open source code when training these models.
If you did that, comments would be "it's just a bit shuffle of the encodings, of course it can manage that, but how about we do totally random encodings..."
That's true, but I still think it'd be an interesting experiment to see how much it actually follows the specification vs how much it hallucinates by plagiarising from existing code.
Probably bonus points for telling it that you're emulating the well known ZX Spectrum and then describing something entirely different, to see whether it just treats that name as an arbitrary label, or whether it significantly influences its code generation.
But you're right of course: instruction decoding is a small enough portion of a CPU that the differences would be quite limited if all the other details remained the same. That's why a completely hypothetical system is better.
Is it possible to build a full OS emulator on top of MMIX?
> The above tools could theoretically be used to compile, build, and bootstrap an entire FreeBSD, Linux, or other similar operating system kernel onto MMIX hardware, were such hardware to exist.
https://en.wikipedia.org/wiki/MMIX
Who else had AI implement an emulator? Raises hand. A 6502 emulator in JavaScript was my first Gemini experiment.
Anybody: can I test Claude Code without a WHATWG-cartel web engine? A web API using curl with a "public" token? Anything else?
I am itching to test its ability to code assembly.
What's a "clear room"? A clean room, but with plagiarized code, laundered through an LLM?
> I believe automatic programming to be already super-human, not in the sense it is currently capable of producing code that humans can’t produce, but in the concurrent usage of different programming languages, system programming techniques, DSP stuff, operating system tricks, math, and everything needed to reach the result in the most immediate way.
As HN likes to say, only an amateur vibe-coder could believe this.
It is really quite something how many people that have earned credibility designing well-loved tools seem to be true believers in the AI codswallop.
it's fascinating / astonishing
in spectrum.c
> Address bits for pixel (x, y):
> * 010 Y7 Y6 Y2 Y1 Y0 | Y5 Y4 Y3 X7 X6 X5 X4 X3
Which is wrong. It's x4-x0. Comment does not match the code below.
> static inline uint16_t zx_pixel_addr(int y, int col) {
It computes a pixel address with 0x4000 added to it only to always subtract 0x4000 from it later. The ZX apparently has ROM at 0x0000..0x3fff necessitating the shift in general but not in this case in particular.
This and the other inline function next to it for attributes are only ever used once.
> During the
> * 192 display scanlines, the ULA fetches screen data for 128 T-states per
> * line.
Yep.. but..
> Instead of a 69,888-byte lookup table
How does that follow? The description completely forgets to mention that it's (192 scan lines + 64+56 border lines) × 224 T-states.
I'm bored. This is a pretty muddy implementation. It reminds me of the way children play with Duplo blocks.
What happened with the wrong pixel layout is that the specification was wrong (the problem is that the sub-agents Claude Code spawns recently are Haiku sessions, their weakest model -- you can see the broken specification under spectrum-specs). It entered the code and caused a bug that Claude later fixed, without updating the comment. This actually somewhat shows that it can fix the problem even under adversarial documentation.
IMHO zx_pixel_addr() is not bad; it makes sense in this case. I'm a lot more unhappy with the actual implementation of the screen -> RGB conversion that uses this function, which is not as fast as it could be. For instance, my own zx2040 emulator's video RAM to ST77xx display conversion (written by hand, also on GitHub) is more optimized in this case. But returning the absolute address in video memory, instead of the offset, is fine. Just a design choice.
> This and the other inline function next to it for attributes are only ever used once.
I agree with that, but honestly 90% of developers work in this way, and LLMs have this style for that reason. A style I dislike as well...
About the lookup table: the code that it uses in the end, in zx_contend_delay(), was a hint I provided to it. The old code was correct but extremely memory wasteful (there are emulators really taking this path of the huge lookup table, maybe to avoid the division for maximum speed), and there was the full comment about the T-states; after the code was changed, this half-comment is bad and totally useless indeed. In the Spectrum emulator I provided a few hints. In the Z80, no hint at all.
If you check the code in general -- the Z80 implementation, for instance -- it is solid work on average. Normally, after using automatic programming in this way, I would ask the agent (and likely Codex as well) to check that the comments match the documentation. Here, since it is an experiment, I did zero refinements, to show the actual raw output you get. And it is not bad, I believe.
P.S. I see your comment greyed out, I didn't downvote you.
Even though I understand your sentiment, and think it is sincere, I think this is intellectually dishonest. Even though I have been programming since I was 16 (20 years), I still program like a child playing with Duplo blocks, when using a novel or otherwise unfamiliar technology. I bet that you do too. I also think that every programmer should play with their computers once in a while. Explore. Discover. Even if it means allowing yourself to be alienated from your means of production.
> It reminds me of the way children play with Duplo blocks.
WTF? I appreciate your technical expertise but you can't be aggressive like this on HN, and we've had to ask you this before: https://news.ycombinator.com/item?id=45663563.
If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.
> you can't be aggressive
I disagree that this is "aggressive." It's certainly opinionated. I think the AI does a bad job here and I'm attempting to express that in a humorous and qualified way.
> WTF?
You don't consider this to be "aggressive?"
> stick to the rules when posting here
Do you genuinely think I'm trying to be disruptive?
I had Claude make a quad-core 32-bit Z80, just for fun.
<https://pastebin.com/Z2b82LHG>
Fascinating, but I'm not sure how these are consistent?
- Based on classic Z80 architecture by Zilog
- Inspired by modern RISC designs (ARM, RISC-V, MIPS)
The Z80 itself was "inspired" by the 8080, notably having dual 8080 register sets. It might be regarded as a "clear" (sic) room reimplementation/enhancement of the 8080, given that it was the same 8080 designers who left Intel to found Zilog and create the Z80.
Z80 is CISC. This looks like a MIPS.
Funny enough, there is a 32-bit version of Z80 called Z380.
It is "clean room"
What is "clear room"? If he means clean room, no, this doesn't qualify.
I wish people would stop using this phrase altogether for LLM-assisted coding. It has a specific legal and cultural meaning, and the giant amount of proprietary IP that has been (illegally?) fed to the model during training completely disqualifies any LLM output from claiming this status.
You use clean room everywhere in the article and clear room in the title. Is this on purpose?
Literally nothing about it is either, either.
Yes for a moment I thought clear room might mean something else for LLMs.
Essentially they can't do clean room anything!
You might as well hire the entire former mid-level of a business's programming team and claim it's clean room work.
Windows NT is not VMS! Trust me!
Had to Google this but I do love a deep cut reference!
https://www.itprotoday.com/server-virtualization/windows-nt-...
At first I thought it was a brain slip in the HN title; then I saw TFA also said "clear", so I thought it was perhaps a sarcastic jab at the original "clean room" story it is commenting on. But maybe in the end it's just an error?
In any case, an interesting experiment.
It would also be interesting to see how well the best open weights models such as Kimi K2.5 can do on a task like this with the same prompting to first gather specs, etc, etc.
In fact this would make for an interesting benchmark - writing entire non-trivial apps based on the same prompt. Each model might be expected to write and use its own test cases, but then all could be judged based on a common set of test cases provided as part of the benchmark suite.
How on earth does this count as "clean room" in any way, when many open-source Z80 emulators will without doubt have been part of its training data?
Perhaps that's why the title said "clear" room?
Wow