My favorite brain wrap-arounder was the following. I had a CCC DDP-224 computer. It had a FORTRAN compiler. I wrote the OS for it. There was no memory management hardware, but there were interrupts and independent data channels. The interrupts were in very low memory, as was usual at the time. I revised the assembler and FORTRAN compiler to produce relocatable code by default, and revised the loader to load above octal 100 to avoid the transfer vectors.

When I compiled a particular program involving 13 nested DO loops (an FFT program), the whole system crashed. I could see that the transfer vectors for the interrupts had been overwritten by the compiler at compilation time. But where in that 12,000-card program was the error?

I wrote an interpreter in assembly language for a computer exactly like the 224, but with memory management. The interpreter took DDP-224 hardware instructions and simulated them. So if I loaded the compiler into the simulated machine (running on the real machine), it would trap any attempt to write on the transfer vectors. That all worked, and I debugged the compiler in a few minutes.

But when I went to test that simulator-interpreter-emulator, I ran the regular hardware test programs, and they all worked OK except the memory test program, which failed miserably. What happened was that the interpreter kept the address counter in RAM (there was no place else to put it; the real address counter was busy running the interpreter). The memory test program wrote all over the "unused" memory and read it back to see if it compared. It did not: in particular, the memory word holding the simulated program counter kept changing out from under the test.

It took a while to understand that. When you get a failure with a thing like that, it is tough to have the correct mental model of what is going on. Is the real computer having a problem? Or is the simulated computer having a problem? Or is the simulator program having a problem?
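Both failure modes in the story can be sketched in a few lines: the write trap that caught the compiler scribbling on the transfer vectors, and the memory test that is defeated because the simulated program counter lives in the very RAM being tested. This is purely illustrative, assuming nothing about the real DDP-224 interpreter: the addresses, the `Interp` class, and `step_store` are all hypothetical.

```python
MEM_SIZE = 0o10000       # hypothetical memory size (4096 words)
VECTORS_END = 0o100      # programs load above octal 100, as in the story
PC_ADDR = MEM_SIZE - 1   # the simulated program counter lives in RAM


class TrapError(Exception):
    """Raised when a simulated write hits the interrupt transfer vectors."""


class Interp:
    """Toy interpreter for a machine that has no memory management of
    its own, with one 'memory management' feature bolted on in software."""

    def __init__(self):
        self.mem = [0] * MEM_SIZE
        self.mem[PC_ADDR] = VECTORS_END  # simulated PC, stored in RAM

    def step_store(self, addr, value):
        """Execute one simulated STORE instruction."""
        # The added protection: trap any write into low memory,
        # where the interrupt transfer vectors live.
        if addr < VECTORS_END:
            raise TrapError("write to transfer vector %o" % addr)
        self.mem[addr] = value
        # The interpreter has nowhere else to keep the simulated PC,
        # so it advances it in RAM after every instruction.
        self.mem[PC_ADDR] = (self.mem[PC_ADDR] + 1) % MEM_SIZE


def memory_test(m):
    """A memory test like the one in the story: write a pattern to all
    memory above the vectors, read it back, report miscompares."""
    for addr in range(VECTORS_END, MEM_SIZE):
        m.step_store(addr, addr & (MEM_SIZE - 1))
    return [addr for addr in range(VECTORS_END, MEM_SIZE)
            if m.mem[addr] != (addr & (MEM_SIZE - 1))]
```

Running the test against a fresh `Interp` reports exactly one bad word: `PC_ADDR`, because the interpreter overwrote the test pattern there while advancing its own program counter. The memory is fine; the mental model of where the machine's state lives is what was wrong.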