March 5, 2026 · 7 min read

Why I Built Flow Into Code in 2026

My motivation for building a technical interview practice environment and what I learned along the way.

Next.js
TypeScript
AI
Interviews

I know what you're thinking.

LeetCode in 2026?
Big tech companies are starting to allow AI coding tools in their interviews.
Software engineers are now orchestrating agents instead of wrestling with syntax.

Who exactly is this for?

Well... myself, primarily.

I built it while preparing for interviews, and many of the design decisions came directly from frustrations in my own practice sessions. But hopefully a few more people find this tool helpful.

DSA Today

I don't deny that AI coding assistants are progressing at a rate we simply did not expect a year ago.

This has led many big tech companies to scrap the conventional DSA-style interview in favor of assessments where candidates navigate a codebase with the help of AI tools.

But I don't see that as a reason to stop practicing DSA altogether.

I believe foundations are always valuable in any discipline.

Software engineers are now able to produce code with LLMs at an increasing rate. That means the amount of code they need to read and understand is also growing rapidly. LLMs are trained on all manner of code snippets across the web, so the more exposure you have to different algorithms and patterns, the easier it becomes to interpret and follow your agent's work.

As a junior engineer, it's often hard to ignore the noise of a community constantly touting the next new technology. It's very easy to focus only on what seems cutting edge.

In my experience, the real world moves more slowly than we think.

There are plenty of companies still operating on older systems and frameworks. Similarly, there are plenty of companies that still evaluate candidates with DSA-style online assessments and whiteboarding sessions.

So is the bar shifting?

Definitely.

But not to the point where learning and practicing DSA has become pointless.

The Problem I Couldn't Ignore

When I started preparing seriously for software engineering interviews, I noticed that most coding practice tools focus on getting the correct answer.

You open a problem, immediately write code, run the tests, and see whether it passes. If it fails, you tweak the implementation. If it succeeds, you move on.

But real interviews don’t work like that.

Interviewers aren’t just evaluating whether your code compiles. They are evaluating how you think.

They want to see how you:

  • clarify the problem
  • identify constraints and edge cases
  • reason about possible approaches
  • communicate tradeoffs
  • structure your solution before coding

I was fortunate to learn this early during my time at Rithm School's web development bootcamp.

But knowing this didn't automatically make me good at interviews.

The second problem was practicing the communication itself.

I've done mock interviews with friends where I could talk through my entire thought process, write pseudocode, and eventually produce a working implementation.

But it always took way too long.

My existing practice methods simply didn't replicate the time pressure of a real interview, combined with the need to slow down and clearly explain my reasoning.

So I decided to build something closer to the experience I actually wanted.

What Flow Into Code Is

Flow Into Code is a structured practice environment designed to mirror how technical interviews actually unfold.

Instead of jumping straight into coding, each practice session moves through stages like:

  • understanding the problem
  • clarifying assumptions
  • outlining an approach
  • writing pseudocode
  • implementing the solution
  • analyzing complexity

When you use this structure, you'll often find that the code naturally falls into place. The edge cases are already apparent, the tradeoffs have already been considered, and the structure of the algorithm is clear. Writing the code becomes little more than translation. The goal is to train the problem-solving process so it naturally flows into the code.

Personally, I wanted this structure to condition me into a repeatable way of approaching new problems. It also ensured that I consistently covered the aspects an interviewer typically evaluates.

The AI chat interface on the platform plays the role of an interviewer you can practice explaining your reasoning to.

It does not solve the problem for you. Instead, it provides small nudges or hints that push you in the right direction.

Finding that balance was important when designing the tool. This is a learning tool first and an interview simulator second. Easy access to small hints is far more productive than the all-or-nothing alternative of immediately looking up a full solution.

Reframing Algorithm Problems

Another common frustration with LeetCode is that the problems can feel detached from real work.

For junior engineers with limited industry experience, this disconnect can create an inaccurate mental model of what software engineering actually looks like.

The reality is that many LeetCode problems are simplified versions of algorithms that power real systems: streaming pipelines, caching layers, job schedulers, and log processing.

But making that connection isn't always obvious.

So Flow Into Code includes alternate problem framings that reinterpret algorithm questions through different perspectives.

For example, a sliding window problem might also be presented as:

  • processing events in a streaming analytics pipeline
  • detecting anomalies in time-series metrics
  • analyzing request logs in a monitoring system

The algorithm stays the same. The framing simply connects it to real engineering contexts.
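To make that mapping concrete, here's a minimal sketch of how the classic fixed-size sliding window translates into a monitoring framing: finding the busiest k-minute window in a series of per-minute request counts. (The function name and scenario are my own illustration, not taken from the platform's actual problem set.)

```typescript
// Classic framing: maximum sum of any contiguous subarray of length k.
// Monitoring framing: the busiest k-minute window in per-minute request counts.
function busiestWindow(requestsPerMinute: number[], k: number): number {
  if (k <= 0 || requestsPerMinute.length < k) return 0;

  // Sum the first window directly.
  let windowSum = 0;
  for (let i = 0; i < k; i++) windowSum += requestsPerMinute[i];

  // Slide forward: add the entering minute, drop the leaving one.
  // Each element is added and removed once, so the whole pass is O(n).
  let best = windowSum;
  for (let i = k; i < requestsPerMinute.length; i++) {
    windowSum += requestsPerMinute[i] - requestsPerMinute[i - k];
    best = Math.max(best, windowSum);
  }
  return best;
}
```

Whether the array holds abstract integers or request counts from a log, the technique and its complexity analysis are identical; only the story around it changes.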

That shift makes the problems feel less like isolated puzzles and more like engineering decisions.

Hopefully it makes LeetCode feel a little less like a chore and a little more like meaningful practice.

Why Build This Now

AI coding assistants have become extremely good at writing code.

If you paste a problem into most assistants today, they can produce a working solution almost instantly.

That's incredibly useful for productivity, but it introduces an interesting problem when you're trying to learn.

If the AI always gives you the answer, the feedback loop disappears.

So instead of asking how AI could replace thinking, I became interested in the opposite question:

What if AI could help strengthen the learning loop instead?

Instead of generating solutions, an assistant could:

  • challenge unclear assumptions
  • prompt you to think about edge cases
  • react to your reasoning process
  • help you structure explanations more clearly

In other words, something closer to a mock interviewer or practice coach.

What I Learned Building It

Building Flow Into Code ended up teaching me more than I expected.

The original goal was simply to create a better practice environment for interviews. But along the way, the project became an opportunity to explore parts of software engineering that I hadn't previously worked with in depth.

For example, I used the project as a chance to experiment with production-oriented tooling and workflows. Even though this is a personal project, it felt like a good opportunity to gain experience with things like containerization, CI/CD pipelines, and deployment automation.

I also spent time exploring remote execution environments for running user code safely, along with writing different layers of tests — unit tests, integration tests, and end-to-end tests.

Another interesting lesson was around UX.

Keeping someone in a "flow state" while also prompting them to reflect on their reasoning turned out to be surprisingly delicate. Too many prompts interrupt concentration. Too few and the feedback loop disappears.

Designing that balance ended up being one of the most interesting challenges in the entire project.

Looking back, this project became more than just a tool I wanted for interview practice. It also became a playground for learning how modern production systems are built.

Tech Stack

Flow Into Code is built with Next.js (App Router), TypeScript, TailwindCSS, and Firebase.

The project also experiments with LLM-assisted feedback loops for learning, containerized execution environments for safely running user code, and a modular architecture that keeps AI services, repositories, and application logic separated.

Building it gave me a chance to explore production practices like containerization, CI/CD workflows, and automated testing while still keeping the project lightweight enough to iterate on quickly.

Try It Out

If you're preparing for technical interviews, experimenting with AI-assisted learning tools, or just curious about the project, feel free to check it out:

flowintocode.com

And if you have thoughts or feedback, I'd genuinely love to hear them.