John Baker

From Confusion to Clarity: Building an RNN Visualizer

Deep Learning
Data Visualization
Projects
Learning and Education
How building an interactive web tool taught me more about Recurrent Neural Networks
Published February 12, 2026


I’m taking a deep learning course at Penn GSE this semester, and recently we covered Recurrent Neural Networks (RNNs). If you’ve ever tried to learn about neural networks, you may know the feeling: the slides make sense in the moment, the professor’s explanation sounds logical, but when you try to work through an example yourself, everything falls apart.

That was me after class. I understood conceptually that RNNs process sequences by maintaining hidden states that get updated at each time step, but what was actually happening inside those calculations? How did the numbers flow through the network?
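For reference, the update I was trying to unpack is the standard vanilla-RNN recurrence (notation varies by textbook, but this is the common form):

$$
h_t = \tanh\left(W_{xh}\, x_t + W_{hh}\, h_{t-1} + b_h\right), \qquad y_t = W_{hy}\, h_t + b_y
$$

Two small matrix multiplies, an addition, and a tanh. Simple to state, and yet the slides never made me feel the way $h_{t-1}$ drags the past into every new step.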

Finding a Way In

My professor shared this brilliant article that walks through RNN calculations by hand—no code, no libraries, just the raw math. It helped my understanding, but I’m a hands-on learner, so I did what comes naturally when I’m trying to figure something out: I opened a spreadsheet.

I translated the walkthrough into Google Sheets, cell by cell. Input values here, weights there, formulas connecting them. When I changed an input and watched the hidden states ripple through four time steps, something clicked. The RNN wasn’t just an abstract concept anymore; it was a machine I could see working.
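To make that concrete, here is roughly what my spreadsheet was computing, as a short Python sketch. The sizes and every weight below are made up for illustration (the article’s actual numbers differ):

```python
import numpy as np

# Hypothetical tiny RNN: 1 input unit, 2 hidden units.
# These weights are invented for illustration only.
W_xh = np.array([0.5, -0.3])       # input -> hidden
W_hh = np.array([[0.1, 0.4],
                 [-0.2, 0.3]])     # hidden -> hidden
b_h = np.array([0.0, 0.1])         # hidden bias

h = np.zeros(2)                    # initial hidden state
xs = [1.0, 0.5, -1.0, 2.0]         # four time steps of input

for t, x in enumerate(xs, start=1):
    # Each step depends on the current input AND the previous hidden state
    h = np.tanh(W_xh * x + W_hh @ h + b_h)
    print(f"t={t}: h = {np.round(h, 4)}")
```

Changing any entry in `xs` or the weights and rerunning is exactly the spreadsheet experience: every later hidden state shifts, because each step feeds the next.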

From Spreadsheet to Interactive Tool

But spreadsheets have their limitations. You can’t easily see why a particular value is what it is. You can’t trace the dependencies visually.

So I built something better: an interactive web-based visualizer.

The tool mimics a spreadsheet interface, but adds interactivity that makes the learning experience fundamentally different:

  • Click any input or weight to change it, and watch the entire network recalculate in real-time
  • Click any hidden state or output to see exactly which values contribute to it, with colored arrows showing the data flow
  • View step-by-step arithmetic for any calculation, with color-coding that shows which numbers come from inputs (orange), weights (yellow), or previous states (green)

It’s a single HTML file—no build process, no dependencies, just open it in a browser and start exploring.
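If you’re curious what the step-by-step panel computes, here is the idea in Python rather than the tool’s JavaScript. The function and all numbers are hypothetical, just to show how one hidden unit’s value decomposes into labeled terms, the way the tool color-codes inputs, weights, and previous states:

```python
import numpy as np

def explain_hidden_unit(x, w_x, h_prev, w_h, b):
    """Break one hidden unit's pre-activation into labeled contributions."""
    terms = [("input", w_x * x)]                    # input term (orange in the tool)
    terms += [(f"prev h[{j}]", w * hj)              # recurrent terms (green)
              for j, (w, hj) in enumerate(zip(w_h, h_prev))]
    terms.append(("bias", b))
    total = sum(v for _, v in terms)
    return terms, np.tanh(total)

# Invented example values, not the article's numbers
terms, h_new = explain_hidden_unit(
    x=1.0, w_x=0.5, h_prev=[0.46, -0.20], w_h=[0.1, 0.4], b=0.0)
for name, value in terms:
    print(f"{name:10s} contributes {value:+.4f}")
print(f"tanh(sum) = {h_new:.4f}")
```

Seeing the sum split into those three or four named pieces is the whole trick: the mystery value in a cell becomes a short, checkable addition.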

Why Interactive Visualization Matters

Here’s what I learned (Relearned? Reinforced?) building this: understanding doesn’t come from reading formulas or watching animations. It comes from playing.

When you can change an input value and immediately see how it propagates through the network, you develop intuition. When you can click on an output and see exactly which terms combined to produce it, the abstraction dissolves. You’re not learning about RNNs anymore; you’re learning them by doing.

Interactive visualization is especially valuable for RNNs because their key property—maintaining state across time steps—is inherently sequential. You need to see how h₁ and h₂ at time step 1 feed into the calculation at time step 2, which feeds into time step 3, and so on. Static diagrams can show this, but they can’t let you explore it.

The Building Process

I’ve got a confession to make: I didn’t build this alone. I used AI coding assistants to help bring the visualizer to life, but not in a straightforward way.

I started with Claude, but the initial results didn’t match what I had in mind, so I switched to Google Gemini, where I iterated through multiple versions to build the core functionality. The back-and-forth was productive: I’d describe what I wanted (“a spreadsheet-like interface where you can click cells to edit values”), Gemini would implement it, and I’d test and refine.

Once I had a working prototype, I wanted to polish it into something truly intuitive. That’s when I moved to Claude Code. Working with Claude Code transformed the tool from “functional” to “the learning experience I actually wanted.”

With Claude Code, I could focus on the pedagogy rather than the implementation. We refined the dependency tracing, improved the color-coding for clarity, added the step-by-step calculation panel, and cleaned up dozens of small details that make the difference between a demo and a tool people actually want to use.

What made this powerful wasn’t just that AI wrote the code. It’s that I could experiment rapidly. Instead of spending hours debugging why something wasn’t rendering correctly, I could stay focused on the learning experience: How do you make dependencies obvious without overwhelming someone? When should calculations be visible versus hidden? What makes the interaction feel natural?

The result is a single HTML file that works in any browser, with no dependencies or build process. But more importantly, it’s exactly the learning tool I needed when I was struggling to understand RNNs.

What This Taught Me

Building this tool taught me more about RNNs than any lecture could have. Not because lectures are bad, but because the act of building forced me to understand every detail. I couldn’t fake it. Every formula had to work. Every edge case had to be handled.

And now that it’s built, I hope it helps others the same way Professor Yeh’s article helped me. Learning is weird. Sometimes you need to build the tool that would have helped you learn the thing you just learned by building the tool.

If you’re learning about RNNs, try it out and click around. Break things. Change the weights to ridiculous values and see what happens. Tell me what you find. That’s how understanding happens.


Made with Quarto