I started working on a gameboy emulator a few months ago. I didn't finish the project, but I made some headway -- a disassembler, basic CPU setup, some instructions. It was slow going. I don't have much of a background in low-level programming, so I was constantly having to stop and chew through documentation. Each new layer of nuance added to my mental model of this 35-year-old computer. I sank maybe twenty-ish hours into the project. I expect that finishing it would take another hundred to a hundred and fifty.
For the most part, I enjoyed the toil. But sometimes I'd think about how the practices I was using -- for the purposes of arriving at a 'functional emulator' -- were entirely arbitrary. There are ways I could make this emulator project take an afternoon, and there are ways I could make it take decades. It just comes down to the resources I'm willing to pour into reaching the finish line. I found this distressing. How much am I obligated to learn, intuit, understand, to call something my own?
It's not possible for me to be the first to build a gameboy emulator, any more than it's possible for me to be the first to invent calculus. Of course, it's not unreasonable to say there are still ways I can 'do it myself'. But there are countless ways to approach re-solving a completed, documented problem, and it's ultimately a personal choice at what point you're willing to call it your own. I could 'build a gameboy emulator' by reading somebody else's code on github, typing it out character by character, and saying I did it. I could also 'build a gameboy emulator' by purchasing a gameboy and all the equipment needed to de-cap the chips on its PCB, dedicating years to understanding the hardware from physical analysis alone, and finally implementing that model in software. These are not remotely the same thing!
In practice, I told myself I wanted to learn while building the emulator, and to feel like it was my emulator, not a re-write of somebody else's. At the same time, I didn't want to re-invent the wheel from first principles. What I arrived at was allowing myself to use AI to answer isolated questions and hunt for bugs, and after finishing a given section of the system, I'd check my naive architecture against other emulator implementations and make improvements. From these checks, I learned about things like storing opcode handlers as an array of functions with prefilled parameters, for O(1) access to every possible unique CPU instruction. This is a super powerful optimization I might've never discovered working from first principles. But I felt a bit guilty. If I had to read somebody else's code to discover it, is it really mine? Or does this part of the emulator belong to somebody else?
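To make the idea concrete, here's a minimal sketch of that technique in JavaScript. The cpu object, register names, and LD_r_r helper are made up for illustration -- this isn't lifted from my emulator or anybody else's:

```javascript
// Build the 256-entry dispatch table once, at startup. Each slot holds a
// closure with its operands already baked in, so the main loop never has to
// decode anything: executing an instruction is one array index plus a call.
function buildOpcodeTable(cpu) {
  const table = new Array(256).fill(() => {
    throw new Error("unimplemented opcode");
  });

  // One generic "LD r, r'" implementation, specialized per register pair.
  const LD_r_r = (dst, src) => () => {
    cpu[dst] = cpu[src];
    cpu.cycles += 4;
  };

  // Prefill concrete instructions: 0x78 is LD A,B; 0x41 is LD B,C; etc.
  table[0x78] = LD_r_r("a", "b");
  table[0x41] = LD_r_r("b", "c");
  // ...and so on for the rest of the instruction set, mostly generated in
  // loops over register pairs rather than written out by hand.

  return table;
}

// The fetch-execute loop collapses to two lines:
//   const opcode = memory.read(cpu.pc++);
//   table[opcode]();
```

That's the space-for-speed trade I mention again below: a few hundred table slots held in memory, in exchange for a fetch-execute loop with no decode logic in it at all.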
This question -- what can I call my own? -- feels particularly important in the age of AI. It's easier than ever to access the work of others (call that the AI's work, call it the work of those who built the material the AI was trained on -- either way, it's not your own). AI will generate a stream of it at the press of a button. I could probably get AI to write my entire emulator for me. But I wouldn't learn anything from doing that. It definitely wouldn't be mine.
With respect to AI, our cultural scripts haven't updated to handle ownership over AI-generated content. We seem to mostly agree that students getting AI to write essays for them is cheating, both in the sense of being analogous to older methods of cheating, like getting somebody else to write the paper for them, and in the sense of cheating themselves out of whatever they were supposed to gain from writing that paper. The paper is not theirs.
But when a programmer gets an AI to write boilerplate they fully understand, can they call it their own on the basis of that understanding? What if the AI instantly catches a bug that would've taken the programmer an hour to resolve? What if it suggests an optimization the programmer might never have considered themselves?
I revisited the emulator project today, opening the repo in cursor. I wanted to use AI to translate the project from javascript to typescript. This went pretty smoothly, but at the end, I felt totally alienated from what I was looking at. Sure, I understood the code -- but whereas before every line came from sweat and tears and I really got it, now there were whole swaths of code lacking context, memories, ownership. Is this how old-school assembly programmers felt the first time they wrote something in C, or how old-school C programmers felt the first time they tried a modern IDE? I don't know. The result wasn't bad. But it wasn't mine.
I'm not going to give up on using cursor. There are plenty of projects where personal, emotional ownership isn't a priority, and deep understanding is more a luxury than a requirement. AI generation is wonderful for this. But just as a mathematician might re-prove known theorems for his own pleasure, or a carpenter might love challenging himself to make a piece of furniture with simple tools, I expect I'll occasionally want to program something that I can feel entirely confident is my own.
With respect to this, I'm more likely to continue my emulator from the handwritten javascript branch, rather than the AI-generated typescript variation. It's a personal project without a deadline, something I have the luxury to take my time on. When I look at all the ideas rendered in that javascript, I enjoy remembering how they came to be. When I think back on opcode tables, I can recall the a-ha moment where it clicked -- "trading off space for faster opcode fetching, that's brilliant" -- and I know, concretely, that I own that memory. Even if I didn't discover the concept from first principles, I know what it looked like to me before I understood it, and I know what it looks like to me now that I do understand it. Regardless of whether the idea itself 'is mine', that remembered process certainly is.
This is where I'm comfortable drawing the distinction, resolving the hanging anxious question of "what's mine?". To possess an idea, there has to be a felt before and after, a meaningful internal transformation. Nothing is intrinsically gained from pressing "enter" after typing out a ChatGPT prompt -- it's purely an external change. But maybe reading that response, asking a few follow-up questions, catalyzes internal change. Nobody else can tell you if that's happening or not, meaning there's a level of tough self-honesty required. You're the only person who can ultimately answer -- what's yours?