On Alternative Internet Protocols

2025-08-08

or, why this is a website.

I'm going to write this post assuming you already basically know what I'm talking about. If you don't know what the hell a "gemini" or a "gopher" is, you should look them up first. Make sure to specify that you're looking for "gemini (protocol)", since google has copied the name for their LLM. Some people think they did this on purpose, much like the "disney frozen" conspiracy, to bury search results. This doesn't make any sense, because it's literally google, the company that controls the search results. They could bury whatever they wanted without resorting to tricks. Besides, the gemini protocol even at its peak was extremely obscure and never posed any threat of stealing users away from the web or from google.

There are five categories of alternative protocols:

Gopher and Gemini

Look, I think they're cool. I look around there sometimes. I like the philosophy of trusting clients over servers. I like the minimalism. The development of the web has been captured by large corporations and government entities; one look through the list of w3c members should prove this to you instantly. The web has been privatised and sold off from under our noses. And these people aren't doing nice things with it. They're implementing surveillance and DRM and other anti-features. Bitreich have an excellent spoof page demonstrating how absurd we'd find it if these web technologies required manual action from the user rather than taking place automatically; you can find it at gopher://bitreich.org:70/1/tracking. It also demonstrates how the simplicity of gopher protects its users.
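
To illustrate just how simple: the entire gopher transaction is "open a TCP connection, send a selector, read the reply". Here's a sketch in Python that fetches that very page (assuming bitreich.org is still up and serving that selector):

    import socket

    # The whole protocol: connect to port 70, send the selector
    # followed by CRLF, then read until the server closes the connection.
    with socket.create_connection(("bitreich.org", 70)) as sock:
        sock.sendall(b"/tracking\r\n")
        data = b""
        while chunk := sock.recv(4096):
            data += chunk

    print(data.decode("utf-8", errors="replace"))

No headers, no cookies, no negotiation, no state. There's simply nowhere for tracking to hide.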

As for gemini, I'm going to just quote a post I read here:

"Gemini solved a simple problem, that is only a problem for some folks: gophermaps suck. They do. I get where they come from and the history, but they are just not nice to work with. Add to that the fact that they are really designed just for menus, and that info lines are, themselves, a sort of hack. So for me, the big things that gemini provided are:

A. Regular URL/URI with no "gophertype"

B. A better text markup system (gemtext) that is both menu and document at the same time

I, for the most part, do not care about client certs, TLS, etc. I do not really care much about "apps". I firmly fall into the camp that wants distributed community via long(ish)-form writing."
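
To make that complaint concrete, here's roughly what a menu link plus a plain line of text looks like in a gophermap. The fields are tab-separated: a type character fused onto the display string, then selector, host, and port. The "info line" is type i with dummy fields (hostnames here are just placeholders):

    1Phlog posts	/phlog	example.org	70
    iJust a sentence, no link at all.	fake	(NULL)	0

And the same thing in gemtext, where plain text is just plain text and a link is one readable line:

    => gemini://example.org/phlog Phlog posts
    Just a sentence, no link at all.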

I fall into the same camp as sloum here. Sure, the interactive apps on gemini are fun, Station and Astrobotany or whatever, but the real selling point is that it does what the web was originally supposed to do: serve text documents. And for writing those text documents, gemtext is just infinitely more usable than gophermaps. So, why are we on the web now?

Why the web

First as a matter of strategy, second as a matter of features.

Gemini and Gopher are simply never going to catch on. The majority of people don't even know the difference between "the web" and "the internet". They don't know what that "http(s)://" in their URL bar means. Hell, a very significant chunk of people don't even use browsers; they just think there are "apps", which are TikTok, Instagram, etc. I think people should learn these things, and there should be efforts to undo de-skilling. However, just on a practical level, the cultural knowledge of what a "website" is still exists. Everyone knows that websites exist, and people have an idea that some people "have a website". Getting someone to visit your website via one hyperlink is a much easier sell than explaining what an internet protocol is, which one you've chosen and why, how that protocol works, which browser to install so they can view your page, and so on. Unlike these other protocols, the web is already in wide usage. There is no need to bring anyone anywhere. I don't have to do advertising and PR for an internet protocol.

The web already has all of the good stuff in it; it just also has the possibility for bad stuff as well. This is really not ideal, but it can be curtailed. The design of the web, as much as the w3c would like this not to be the case, pushes in the direction of two principles. Firstly, the basic components work even when the bloat is disabled. You can turn off JS in your web browser and still render HTML and CSS, even if the functionality of the site is broken. You can turn off CSS and the HTML will still work, even if the formatting is broken. Secondly, and crucially, as much as they HATE this simple fact, you are ultimately in control of what your web browser does. Web DRM is their biggest attack on this principle, and they'd get rid of that control entirely if they could. But I can modify any website as it appears on my computer however I see fit. Which means I can block ads and trackers to some extent, for example. Just remember that every time you do this, a google employee cries.
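
Concretely, two lines of uBlock Origin filter syntax (hostnames hypothetical) are enough to do both:

    ! Block every network request to a (hypothetical) tracking host
    ||tracker.example.com^
    ! Cosmetic filter: hide any element with class "ad-banner" on example.com
    example.com##.ad-banner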

Who would win: 100 bajillion dollars hand-delivered to the web consortium by Sundar Pichai to enclose the web VS one "inspect element" button.

Obviously I'm being a bit hyperbolic here, but my point stands: a lot of the bullshit on the web can be mitigated via operations ranging from ad blocking, to a plugin like uMatrix, to disabling javascript, and so on. We shouldn't need to do this, and that's the point of these "smallweb" alternatives like gopher and gemini. It's also the point of creating simple, static, text-based websites like this one. Both approaches solve the same problem, and neither is ideal, but as I explained, getting over the network-effect barrier is simply easier on the web. Even if there were no "small web" or "indie web" movement, the web would still have a vast quantity of high-quality websites outside of corpo-space, because the web is so big and has been in use for so long. There are enough people who never stopped blogging, or never left forums and boards, that the issue of convincing people is minimized.

On the second point, features, my issue is primarily with web design. The very fact that you can do such terrible things with web design makes it that much better when you find someone doing it well. While I am intensely in favour of brutalist web design, that's not to say I think every website ought to just be a text file with no further style. The goal of brutalism is to create visual interest through other means: in architecture this is done with geometry and shadow; in web design the same principles appear as formatting, font, organisation, silhouette, etc. Because after a while, browsing gemini or gopher spaces becomes monotonous. For some people that's the point: they want to get offline, and they don't want something which keeps their interest, because the fact that the internet keeps their interest is what they're trying to avoid. I respect that, but it isn't my use case. I don't like the repetitiveness of gemini capsules all looking the same. I like coming across a new website and exploring its unique design, and through that learning something about its author. Of course I still think the philosophy of "content is king" and "text first" applies. It's just nice to have variety. If you don't like that, you can always use your web browser's reader mode.

There is also a problem with content, one these communities have already identified themselves. A lot of the content on gemini tends to be about gemini. Even beyond that, the whole place skews very heavily towards tech discussion. The only people willing and able to overcome the technical barrier to entry to even be there in the first place are going to be techy people who care specifically about internet protocols. The variety of discussion is limited compared to the vastness of the web, where you can stumble across some guy's blog about lepidopterology that he's been running since 2004, then find an entire lepidopterist community, and explore it, and so on. As a reader, the experience of gemini is reading gemlogs which all look roughly the same and talk about roughly the same things.

The philosophical difference between this part of the web and gemini / gopher is as follows: should we avoid the bad parts of what we already have, or should we jump to a new system in which those bad parts are impossible? It sounds like a classic reform vs revolution debate, but if you look closer, it's actually not. It's more like a question of: "my friends and I have already moved into a commune in the woods, and we currently grow our own tomatoes. Should we start growing potatoes too and feed ourselves entirely off our own land, or are we ok making a trip into town to buy a bag of rice from time to time?" In other words, both of us have already stopped engaging with certain systems; the question is to what extent we should engage with other adjacent systems where they're useful. The advantage on one side is a degree of self-sufficiency, in exchange for more work and more limitations.

Ultimately, neither gemini / gopher nor simple websites are "revolutionary"; they don't target the underlying structures that created the possibility for the web to end up like this in the first place. That is to say, neither poses any threat to capitalist property relations. They are just an exit, at best a prefiguration. Gemini is not a threat to Google, and it's not mitigating Google's surveillance, because Google will never come to gemini. But by the same token, Google will also never come here. We are creating an alternative for those who want it, creating different ways of Being Online. And ultimately you're still dependent on the physical internet infrastructure owned by megacorps. You're still dependent on your ISP.

What about p2p?

I used to be of the opinion that the client-server model was fundamentally flawed because it's hierarchical. I was wrong on two counts. Firstly, hierarchies are sometimes useful. For example, online multiplayer games are much better on a client-server model, where the server has final say, than on a peer-to-peer model; it solves problems with lag and with cheating. Secondly, I was wrong because the client-server model actually isn't hierarchical to begin with, it's just modal. It's perfectly possible, as in the previous case of gemini, to privilege the client over the server.

That being said, it's possible that there are instances where p2p architectures have advantages, mainly due to their resilience. If a law passes that requires companies to scan messages on their service, not using a service in the first place gets around that issue nicely. It's another step on the road towards an exit from reliance on megacorps. But what do these options even do? How do they differ from one another? How useful are they actually, and to what extent do they really create the possibility of exit? I will be investigating this in future posts where I take a look at some of these technologies and their use cases. This will include p2p chat programs like GNU Jami, a look at reticulum and nomadnet, and, if I can, I'd like to see what's going on with mesh networks, which I know very little about. So give me some time to explore what the hell is going on here, and I'll see you then.