Adële's blog

I am a smolweb advocate and, sometimes, I use LLMs.

2026-05-02 19:15

I spend a lot of time thinking about simplicity. Fewer dependencies, lighter pages, tools that do one thing well. So yes, it might look strange that I also spend time talking to large language models. Let me explain where I draw the line, and why I think the contradiction is smaller than it appears.

What I avoid

I do not use LLMs to generate images, music or videos. Not because I am against creativity, but because that kind of generation has a cost I can feel: massive compute, opaque training data, and an output that tends to look like everything and nothing at the same time. I am not interested in that.

More broadly, I avoid using an LLM as a replacement for thinking. The risk is real: ask, copy, ship, forget. That is not a workflow, that is a shortcut to not understanding your own codebase.

What I actually use them for

Debugging a tricky regex, for example. Understanding an obscure error message at 11pm. Getting a quick skeleton for a function I know how to write but do not want to type from scratch. Reformulating something I wrote and cannot read anymore.
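To make "tricky regex" concrete, here is the kind of bug I mean. The snippet is a made-up example in vanilla JS (the string and field names are invented for illustration), showing the classic greedy-quantifier trap that an LLM is genuinely good at spotting:

```javascript
// Hypothetical input: extract the quoted values from a line of text.
const line = 'name="Adële" role="editor"';

// First attempt: greedy .* swallows everything up to the LAST quote.
const greedy = line.match(/"(.*)"/)[1];
// greedy is 'Adële" role="editor' — one big wrong match.

// Fix: a lazy quantifier stops at the first closing quote,
// so each quoted value is captured separately.
const lazy = [...line.matchAll(/"(.*?)"/g)].map(m => m[1]);
// lazy is ['Adële', 'editor']
```

A negated character class like `/"([^"]*)"/g` would work just as well and is often faster; the point is that this is a two-minute question for an LLM and a twenty-minute head-scratcher alone.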

In short: acceleration, not delegation.

I never use an LLM as a starting point in unknown territory. If I do not understand the domain, I will not be able to evaluate the output, and a confident wrong answer is worse than no answer. I learn first, then I use the tool.

The environmental argument, in context

There is a fair criticism: LLMs consume a lot of energy. Fair. But I think the comparison is often done badly. The criticism holds when you generate a useless image to decorate a blog post. It holds much less when one query spares you a long stretch of harder, slower work.

A single LLM query costs more than a single web search, roughly ten times more, according to estimates floating around. But the right comparison is not one query vs. one search. It is five minutes of focused LLM use vs. one hour of tab-juggling, reading half-articles, and reformulating the same search query twelve times. That math is less obvious.
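To show why that math is less obvious, here is a back-of-envelope sketch. Every number in it is an assumption I picked for illustration, not a measurement; the only input taken from above is the "roughly ten times a search" ratio:

```javascript
// PURELY ILLUSTRATIVE numbers — assumptions, not measurements.
const whPerSearch = 0.3;  // assumed energy per web search, in Wh
const whPerQuery  = 3.0;  // assumed ~10x a search, per the estimates above
const laptopWatts = 30;   // assumed draw of my machine while I sit there

// Five minutes of focused LLM use: say 3 queries.
const llmSession = 3 * whPerQuery + laptopWatts * (5 / 60);

// One hour of tab-juggling: say 12 searches, plus an hour of laptop time.
const searchSession = 12 * whPerSearch + laptopWatts * (60 / 60);

// Under these assumptions: ~11.5 Wh vs ~33.6 Wh.
// The per-query cost is higher; the per-task cost can still be lower.
```

Change any of the assumed numbers and the conclusion can flip, which is exactly the point: the honest unit of comparison is the task, not the query.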

I am not using this to justify anything. I still try to keep sessions short and intentional. But I think the nuance is worth stating.

The smolweb angle nobody talks about

Here is something I did not expect: LLMs work better on simple stacks.

I work with plain PHP, vanilla JS, semantic HTML, lighttpd, Dolibarr (at work). No heavy framework, no abstraction layers, no magic. And the LLM handles that very well. The answers are short, precise, and mostly correct. No hallucinated method names from a framework it half-knows, no suggestions to install a package that wraps another package.

The more complex your stack, the more the LLM struggles, and the more back-and-forth you need. Every iteration has a cost. A simpler stack means shorter sessions, fewer tokens, more reliable answers.

So in a way, choosing the smolweb is also choosing a better LLM experience. Not because I optimized for that, but because simplicity compounds.

I always try to understand what it gives me

This is probably the most important thing I want to say.

Skill erosion is real. If you let a tool think for you often enough, you stop thinking. I have seen it happen. The LLM writes the function, you run it, it works, you move on, and three months later you cannot explain what it does.

My rule is simple: I read what it generates. I understand it before I use it. Sometimes I rewrite it my way. Sometimes it teaches me something I genuinely did not know, a cleaner approach, a built-in function I had forgotten, an edge case I had not considered. That is useful. That is the LLM as a fast colleague, not as a replacement for thinking.

And sometimes it is wrong. Not hallucinating exactly, just... slightly off. Missing context, outdated assumption, plausible but incorrect. I catch it because I read it. If I had just shipped it, I would not have.

One more thing: where the query goes

I run Ollama at home on my own machine. For personal projects and exploratory work, that is my first stop. No data sent anywhere, no privacy concern, no dependency on an external service when I can avoid one.

For more complex tasks I use an external model such as Claude, and I am conscious of what I send. No client code, no credentials, nothing sensitive. That is a deliberate choice, not an afterthought.

So

I am a smolweb advocate who uses LLMs. Not to generate fluff, not to avoid thinking, not as an oracle. As a tool, fast, sometimes very useful, always verified, occasionally wrong.

The key is knowing what you are doing with it.

Discuss it on the Fediverse