How to vibe code a website in React and Next.js
Building a site mostly by yourself (with a code assistant) is a new adventure, especially if your background is backend-only. Not surprisingly, there are pitfalls and, yes, plenty of wasted tokens along the way. The good news: compared to backend work, LLM coding assistants are excellent at producing front-end boilerplate, data visualization, and quick iterations. I’m documenting what worked (and what didn’t) so others can avoid common traps, and maybe apply the same playbook to backend tasks, too.
LLMs, Wish, and Bayesian priors
In D&D fantasy literature, the Wish spell grants what you ask for, but sometimes in a hilariously unintended way. For example, if you say "I want to become the strongest person in the world", the spell may teleport you to an empty planet, where you are automatically the strongest.
LLMs share that vibe: they follow your request, but take the "path of least resistance". Intuitively, they default to the most probable solution given their training data, the built-in prior from a Bayesian perspective. It is therefore important for the user to understand this prior, and the minimum prompting effort needed to nudge it toward a desirable posterior.
Generic vs specialized code assistant for front-end
Generic code assistant
A generic solution is basically a coupling of an IDE and an LLM. For the IDE, I personally tend to use JetBrains, which can access the major LLMs, including GPT-5 and Sonnet 4. GPT-5 in my own experience is pretty good.
Qoder is an emerging option based on VS Code (I'm not sure about its underlying LLM). It actually works reasonably well.
Context rot is a known problem. In particular, if the assistant wanders onto the wrong track, don't fight it; just start anew.
Specialized code assistant for front-end
Figma AI, Vercel v0, and Cloudflare VibeSDK are specialized for front-end. Figma AI appears to use its own visual boilerplate. These are cloud platforms that host your code and even the deployment process, so the workflow also differs from a local code assistant.
What coding assistants handle well for front-end
- Understanding your code, even if it doesn't fully work, and migrating/re-wiring it.
- Updating multiple files consistently and writing a summary (so later on you can feed it back as context).
- Handling React hooks (state, effects, and visualization) properly. Occasionally buggy, but generally easy to fix.
- Understanding the Next.js conventions for routing, "layout", and "page". Knowing these yourself is still helpful for debugging and for scaffolding a project.
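For reference, the app-router convention can be sketched in two files. The file paths are the actual convention; the contents below are only illustrative scaffolding:

```typescript
// app/layout.tsx -- shared chrome for every route in this segment
import type { ReactNode } from "react";

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <html lang="en">
      <body>{children}</body>
    </html>
  );
}

// app/blog/page.tsx -- rendered at the /blog URL
export default function BlogPage() {
  return <main>Blog index</main>;
}
```

Because `layout.tsx` wraps every `page.tsx` below it, this convention is also the first place to look when some UI unexpectedly shows up on every page.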
What coding assistants tend to struggle with (and how to help)
The lessons here tend to be things the LLM won't directly tell you.
Framework use
LLMs tend to write code from scratch rather than use existing frameworks like Ant Design or Monaco Editor.
Here, your familiarity with existing frameworks is extremely useful: you can directly specify a component so the code-gen is more predictable. Given that familiarity, you can also make minor changes yourself without asking the LLM.
If the framework is not well known (little information on the web, so outside the model's internal knowledge), there is a good chance of hallucination, so providing a link to the docs can help.
You can also ask the LLM to confirm its understanding, or whether it has any questions, before it writes any code.
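As an example of "directly specify a component": if you already know Ant Design, you can hand the LLM a typed skeleton and ask it to fill in the rest. This is a sketch, assuming `antd` is installed; the row shape and component name are made up:

```typescript
// Hand-written anchor for the LLM: "keep this shape, fill in data fetching".
// Assumes the `antd` package is installed; Row is a hypothetical row shape.
import { Table } from "antd";
import type { ColumnsType } from "antd/es/table";

interface Row {
  key: string;
  name: string;
  latencyMs: number;
}

const columns: ColumnsType<Row> = [
  { title: "Name", dataIndex: "name" },
  { title: "Latency (ms)", dataIndex: "latencyMs", sorter: (a, b) => a.latencyMs - b.latencyMs },
];

export function LatencyTable({ rows }: { rows: Row[] }) {
  return <Table<Row> columns={columns} dataSource={rows} pagination={{ pageSize: 20 }} />;
}
```

Pinning the exact component and its typed props narrows the model's search space, so the generated code is far more predictable than "make me a sortable table".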
Reusability & componentization
If an LLM is asked to write code for similar tasks 1, 2, and 3, it won't proactively enhance reusability; it just writes them up linearly, one by one (understandable from a next-token-prediction standpoint). In addition to using existing frameworks as mentioned above, you can explicitly ask the LLM to extract a shared component, or write one yourself.
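The idea can be shown with a framework-agnostic sketch (in React this would be a shared typed component; names and markup here are hypothetical): one parameterized renderer replaces three near-identical copies.

```typescript
// One parameterized renderer instead of three near-identical copy-pastes.
interface Stat {
  title: string;
  value: string | number;
}

function renderStatCard({ title, value }: Stat): string {
  return `<section class="stat-card"><h3>${title}</h3><p>${value}</p></section>`;
}

// Tasks 1/2/3 collapse into data plus one function:
const cards: string[] = [
  { title: "Users", value: 1024 },
  { title: "p95 latency", value: "87 ms" },
].map(renderStatCard);
```

Once the shared piece exists, the LLM tends to keep using it; the hard part is getting it extracted in the first place.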
LLMs tend to accumulate technical debt very fast. In my own experience, I spend a bit of time debugging myself to pinpoint the issue, to avoid invoking a general "wish" or getting an excessive code change. If it is hard to debug, you can ask the LLM to debug and to add more logging.
A simple pattern - micro-frontend
Micro-frontends are roughly the front-end analogue of backend microservices. A code assistant won't proactively tell you about this unless you ask.
For humans, this is only useful for a very large website; for small/mid projects, a monorepo with a good folder structure suffices. For an LLM, however, the context limit is an important factor. Although assistants are getting better at avoiding full-project reads, controlling project size is still useful.
Once you use Next.js, which enforces a certain composability, and once you have reusable components, migrating between a monorepo and micro-frontends is feasible.
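One concrete route in Next.js is "multi-zones": each zone is an independently deployed Next app, stitched together with rewrites in the host app's config. A sketch, where the URLs are placeholders and `next.config.ts` assumes a recent Next.js version:

```typescript
// next.config.ts of the host ("shell") app -- zone URLs are hypothetical.
// Requests under /blog are served by a separately deployed Next.js app.
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  async rewrites() {
    return [
      { source: "/blog", destination: "https://blog.example.com/blog" },
      { source: "/blog/:path*", destination: "https://blog.example.com/blog/:path*" },
    ];
  },
};

export default nextConfig;
```

Each zone stays small enough to fit comfortably in an assistant's context, which is the LLM-specific payoff mentioned above.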
Markdown vs HTML
For a website that mostly posts long text (like this blog) without much visual effect, Markdown is both human- and LLM-friendly, and you can easily convert it to HTML.
Traditionally, Markdown is used to build an entire static site. The difference here is using Markdown more dynamically. This approach is still not very popular, but I believe it is useful for vibe coding and fast iteration.
Frameworks you can try out: Markdown UI, assistant-ui, markdown-it, and MDX.
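To make the Markdown-to-HTML step concrete, here is a dependency-free toy converter for a tiny subset (headings, bold, paragraphs). Real projects should use markdown-it or MDX; this only illustrates the idea:

```typescript
// Toy Markdown-to-HTML converter for a tiny subset of the syntax.
function miniMarkdown(src: string): string {
  return src
    .trim()
    .split(/\n{2,}/) // a blank line separates blocks
    .map((block) => {
      const inline = (s: string) =>
        s.replace(/\*\*(.+?)\*\*/g, "<strong>$1</strong>");
      const h = block.match(/^(#{1,6})\s+(.*)$/);
      if (h) return `<h${h[1].length}>${inline(h[2])}</h${h[1].length}>`;
      return `<p>${inline(block)}</p>`;
    })
    .join("\n");
}

// miniMarkdown("# Title\n\nHello **world**")
//   -> '<h1>Title</h1>\n<p>Hello <strong>world</strong></p>'
```

A dynamic site can run a converter like this (or markdown-it) at request time, so content stays in Markdown while the rendered page is plain HTML.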