I recently wrote about moving my development to the terminal. It started as a practical decision, but it led me to a bigger question. If I prefer the terminal because the information is cleaner, more consistent, and easier to read, what does that say about the web? And more broadly, what does that say about user interfaces in general?
Content is what matters
For a large part of the web, the most important thing is content. Not branding. Not visual polish. Content.
But that is not how we treat it. Every company and developer tries to create something that looks better, newer, or more professional. New frameworks, new styles, new ways to present the same information. We spend a lot of time and energy on how content looks instead of what content says.
When I go to a website, I am interested in the content. Not the theme, not the light or dark mode, not the fonts, not the CSS, not the styles. Let me read what is there. Let me use my font. Let me use my theme. Show me the content and let me decide how I want to see it.
Terminal vs web
On the web, every website looks different: colors, fonts, spacing, layout, visual hierarchy, interaction patterns. That flexibility can be powerful, but it also creates friction. You learn how one website works, then you go to another, and it looks completely different. Your brain has to adjust every time.
Some websites are really nice. Great UI, great UX, easy to use. But many are not: hard to read, hard to navigate, hard to find what you need.
In the terminal, everything looks similar. You have one font for everything. Every line has the same height. If you want something like a heading, you do not make the font bigger. You use bold, a different color, or a different background. The colors come from your theme, and you can use the same theme for everything. The ways of displaying content are limited, and that is exactly the advantage. When everything follows the same visual language, your brain does not have to adjust. You read faster. You find information faster. It is less work to process.
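The terminal's whole "styling toolkit" is a handful of ANSI escape codes: bold, a few colors, a background. A minimal sketch of what a "heading" means in that world (the helper name is mine, the escape codes are standard SGR sequences):

```python
# ANSI SGR escape codes: roughly the entire styling vocabulary a terminal has.
BOLD = "\033[1m"
RESET = "\033[0m"

def heading(text: str) -> str:
    """Render a 'heading' the terminal way: same font, same line height, just bold.
    The actual color and font come from the user's terminal theme, not from the content."""
    return f"{BOLD}{text}{RESET}"

print(heading("Terminal vs web"))
print("Body text uses the exact same font and size as the heading above.")
```

That constraint is the point: the content cannot override your theme, so everything you read looks the way you chose.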
If I could browse every website in my terminal with my font and my theme, and everything worked correctly, I would be in. It would be faster, easier, and with fewer distractions. Of course, right now it does not work, because nobody designs websites for text-based browsers. But the idea is not new. People have been talking about it for a long time.
Golden Krishna wrote a book in 2015 called “The Best Interface Is No Interface.” The core idea is that the best experiences do not need a screen full of buttons and forms. Back then it was mostly a design philosophy. Now, with AI, we might actually have the technology to make it real.
We tried this before
The idea of letting users control how they consume content is not new.
RSS readers already tried to solve this. You could follow content from any website, and it all appeared in one place, in your reader, with your layout, your font, your rules. It was great. But RSS mostly died because platforms wanted control. They wanted you on their website, seeing their design, their ads, their upsells.
Screen readers already work this way too. Accessibility tools consume the web content-first, ignoring visual design entirely. They care about structure and meaning, not colors and shadows. In a way, what I am describing is a more mainstream version of that content-first experience: less about visual presentation, more about structure, meaning, and intent, but powered by AI.
Interacting through conversation
Let us think about how we interact with apps today.
If I want to buy a cinema ticket, I go to a website, log in or create an account, find a movie, look at the schedule, choose seats, fill in payment details, and click through multiple pages. It works, but it is a lot of steps for a simple goal.
What if instead I could open an app, tap a button, and say: “I want to watch Star Trek on Monday. Are they playing it?”
The AI responds with simple text: “Yes, you can watch it at this cinema on Monday at 8 PM.”
“Book me a ticket for that.”
Done. I do not need a nice UI for this. Just show me the information, let me confirm, and handle the rest.
Now imagine the same for booking a vacation. I activate voice mode and say: “I want to go there for these dates, with 4 people, one kid, one dog, and I do not want to pay more than this amount. Find me something.” The AI queries the service, gets the data, and shows me the options. I say yes, it books everything. One minute and my vacation is booked.
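Under the hood, that conversation is just the AI translating a sentence into a couple of structured calls against the service's data. A hypothetical sketch of the cinema example (the function names, data shapes, and the fake result are all invented for illustration):

```python
# Invented stand-ins for a cinema service's API. In a real system these would
# be network calls; here they return canned data so the flow is visible.

def find_showings(movie: str, day: str) -> list[dict]:
    # The AI turns "I want to watch Star Trek on Monday" into this query.
    return [{"cinema": "Downtown Cinema", "time": "8 PM", "movie": movie, "day": day}]

def book_ticket(showing: dict) -> str:
    # Called only after the user confirms; the service handles payment and seats.
    return f"Booked: {showing['movie']} at {showing['cinema']}, {showing['day']} {showing['time']}"

showings = find_showings("Star Trek", "Monday")
# The AI's reply is plain text rendered in your theme, not the cinema's web page:
print(f"Yes, you can watch it at {showings[0]['cinema']} on {showings[0]['day']} at {showings[0]['time']}.")
print(book_ticket(showings[0]))
```

The interesting part is how little of this is interface. It is a query, a confirmation, and an action.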
For straightforward tasks, that kind of flow could be much faster than navigating a traditional app. For more complex decisions, AI would still need a clear way to confirm details before acting. But either way, less time spent on things we do not need to spend time on.
Companies as APIs, not as websites
Maybe in the future, companies will not build elaborate user interfaces at all. Instead, they will provide APIs, or something like MCP (Model Context Protocol) servers, that AI can interact with. Anthropic has already introduced MCP for exactly this purpose: a standard way for AI to connect to external services.
You would have an AI on your device. You connect it to a service through a standard interface with a permission system. Then you write or speak to interact with that service. The AI talks to the company’s API, gets the data, and presents it to you the way you prefer. Your theme, your font, your layout.
The company that sells those tickets would have no idea how the information looks on your phone. They just provide the data and the ways to interact. Your AI displays everything according to your preferences. It does not matter if the information comes from one company or another. It all looks the way you want it to.
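What a company would publish, then, is not a page but a capability description. A rough sketch of such a tool definition, loosely shaped after MCP's JSON-schema style (the tool name, fields, and result data here are invented, not from any real service):

```python
# A hypothetical tool description a cinema might expose instead of a website.
# The company declares what can be asked and what data comes back; it has no
# say in how that data is rendered on your device.
ticket_tool = {
    "name": "search_showings",
    "description": "Find movie showings by title and date.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "date": {"type": "string", "description": "ISO date, e.g. 2025-06-02"},
        },
        "required": ["title", "date"],
    },
}

# The service returns plain, structured data...
result = {"showings": [{"cinema": "Downtown Cinema", "time": "20:00"}]}

# ...and your AI presents it in your theme, your font, your layout.
for s in result["showings"]:
    print(f"{s['cinema']} at {s['time']}")
```

The schema is the contract; the presentation layer belongs entirely to the user.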
Imagine opening your phone and every service, every app, every website presents information in the same consistent way. Your way. Not their way.
Of course, graphics are still important somewhere. Games, movies, presentations. But for most of what I do on the web, I am reading text. And most of what AI would need to show me is also text. Maybe for a photo or a movie trailer, it opens full screen with a close button. Simple.
The resistance
This sounds nice, but there is a reason it might not happen easily.
Companies spend millions on branding, on UI design, on guiding users through specific flows. Dark patterns, upselling, keeping you on the page longer. If AI controls the presentation, they lose all of that. They lose the ability to nudge you toward a more expensive option or show you ads disguised as content. That is probably an even bigger obstacle than the technical challenges.
RSS died partly because of this. Platforms did not want users reading their content outside their controlled environment. The same tension will exist with AI-powered interfaces.
Companies could still create optional themes or instructions for AI. Something like: “Here are our suggestions for how to display our data nicely.” But those would be suggestions, not requirements. The user’s preferences would always come first. Whether companies will accept that is another question.
But if AI-mediated access became the default way people interact with services, companies might have to adapt. The competitive advantage would shift away from interface control and toward better data, better pricing, faster fulfillment, and more reliable service.
Privacy and trust
And then there is the other big question. If AI has access to my services, my data, and can make purchases on my behalf, we need to be sure it is used properly.
I do not want to give AI access to my entire bank account. But maybe I can have a dedicated card that I give AI permission to use for payments. We need some kind of permission system, something that gives me control over what AI can and cannot do.
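A minimal sketch of the kind of permission check I mean, assuming a dedicated card with a spending cap (every name and number here is invented for illustration):

```python
# Hypothetical permission table the user configures once, up front.
# The AI can only act within these limits; everything else is denied by default.
PERMISSIONS = {
    "payments": {"allowed": True, "card": "dedicated-ai-card", "max_amount": 50.00},
    "bank_account": {"allowed": False},
}

def can_spend(resource: str, amount: float) -> bool:
    """Deny unless the resource was explicitly granted and the amount is under the cap."""
    rule = PERMISSIONS.get(resource, {"allowed": False})
    return rule.get("allowed", False) and amount <= rule.get("max_amount", 0)

print(can_spend("payments", 20.00))     # within the granted limit
print(can_spend("payments", 200.00))    # over the cap: denied
print(can_spend("bank_account", 1.00))  # never granted: denied
```

Default-deny is the important design choice: the AI gets exactly the access I granted, nothing more.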
Safety, security, and privacy are only part of the problem. The other challenge is what AI can reliably do for us in practice. We may soon have the technical capability to build systems like this, but capability is not the same as trust. That trust will need to be earned through strong permissions, clear confirmations, and predictable behavior.
Not a circle, a spiral
We went from CLI to GUI, and now it feels like we are going back to CLI. But we are not really going back to the same place. We are going to CLI plus AI, which is something entirely new. It is not a circle. It is a spiral.
Maybe all those graphical interfaces we are creating now will become obsolete. Maybe not all of them, but many. Maybe the only development work left will be creating APIs for AI to consume. No frontend at all. The entire UI generated automatically on the user's device by AI. How it looks would be up to the user.
I know this sounds far away. But if you give me a way to safely ask AI to do things instead of clicking through dozens of buttons and filling out forms, I am in. If you give me a way to do everything in the terminal, with my font, my theme, and no distractions, I will probably be happy.
Let us see where this goes.