Philosophize all you want; if the first instruction in your manual is 'pip install', I don't consider you to be anywhere near as offline / off-grid as you claim to be. All the LoRa mesh projects do this. For all the off-grid advertising they do, there doesn't seem to be a lot of thought put into bootstrapping or maintaining the network once the internet is gone. (Yes, you and I could probably figure it out, but some user who actually needs this might not be able to.) I'm not really complaining about this, but it is a little ironic.
Reticulum is actually ahead of the curve by having a ready to use PDF manual you can download. For my part, I've been trying to put together an all-inclusive Raspberry Pi image or a live USB for Meshtastic, but it's not quite there yet (it's no more than a hobby for me, but I'm not making big off-grid promises either).
"The internet no longer exists" is a particularly extreme subset of off-grid scenarios. For the more plausible off-grid scenarios—the ones that have actually happened—the unavailability of the internet has been varying degrees of localized and temporary. In that context, being able to bootstrap the entire network without any reliance on internet infrastructure is more of a convenience than a hard requirement.
In particular, it seems obvious to me that any preparedness plan that requires a user to acquire in advance specialized hardware (eg. a battery/solar-powered long-range radio of some kind) to be used with an off-grid network can reasonably expect that user to also be prepared with the software to drive that hardware.
The whole project is a convenience. If I were in a situation where I actually had to rely on Meshtastic for comms, I'd be pretty nervous; it doesn't really work that well. Luckily, I've only enjoyed Meshtastic recreationally. This comes from me trying to learn about and set up some nodes on vacation in an area with very limited internet. I followed the tutorials, thought I had what I needed, but I was wrong. Whoops, the documentation is online. Within the community, I've seen "that same thing happened to me" more than once.
As with many hobbies, this is a "just because I can, I will" type of thing.
What, you don't think we can put up a shadow internet running at 250 kbps?
That said, I picked up a couple of prebuilt LoRa solar nodes and a couple of mobile nodes (Seeed solar jobbies and Seeed mobile jobbies) and stuck the solar ones into my upper-story windows just over New Year's. One is set up as a Meshtastic repeater, the other as a MeshCore repeater.
I'm pretty amazed at the distances I hear from; this morning I'm getting stuff over MeshCore all the way from Vancouver, BC into my office in Seattle (pugetnet.org).
To get it all dialed in, having a Discord full of old ham guys who know RF pretty well certainly doesn't hurt.
It's certainly hobbyist grade at best. It seems like it could be very interesting for installs in small communities and larger estates, as backhaul for remote IoT applications. Obviously you aren't going to push video over that bandwidth, but for weather stations and the like it seems cool.
Reticulum becomes more interesting when you are talking about some of the more robust radio technologies. Building a mesh LAN out of old wifi gear is interesting in concept.
Ha! I believe every RNode can be used to bootstrap a Reticulum network, as that tiny ESP32 hosts the RNode firmware, the full network software stack, and the documentation! The RNode can become a WiFi access point; if you connect, you get this at 10.0.0.1:
Just so I understand, what sort of bootstrap process are you looking for? Even a pre-built binary is going to require a download. If you build from source (e.g., C), you're going to need to download the source code and a compiler. I'm not much of a Python guy myself, but installing via pip doesn't seem particularly bad. But I'm probably missing your point.
My goal is to be able to do all the steps of the typical Meshtastic YouTube tutorial without access to the internet.
I like to liken it to my other hobby of retrocomputing. In the old days, your whole OS and all the applications ran from a few floppies, with a couple of books for documentation. If you needed to duplicate the environment, you just made copies of your disks. Of course you needed an original set to start with, but nobody thought of that as "offline"; that was just the normal way it worked, and yet it seems more offline than modern projects that claim to be offline.
If you’re pip installing you can just toss your venv on a floppy/cd/flash whatever if you’re so inclined. I’m not sure I understand the concern. The need for internet is a sliding scale with your own resourcefulness.
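For what it's worth, pip supports exactly this sneakernet workflow. A sketch, assuming Reticulum's PyPI package names (`rns` and `nomadnet`) and that the offline machine has a matching Python version and architecture:

```shell
# On a machine with internet: fetch the wheels (plus all dependencies)
# into a directory you can copy to a floppy/CD/flash drive.
pip download --dest ./offline-pkgs rns nomadnet

# Later, on the offline machine: install purely from that directory,
# with no index and no network access.
pip install --no-index --find-links ./offline-pkgs rns nomadnet
```

Since venvs embed absolute paths, carrying the downloaded wheels around tends to travel better than copying the venv itself.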
Reticulum and NomadNet should have been ported to Go long ago; you could bundle all the dependencies in a zip/tgz file just in case, and provide static binaries for everyone. NNCP, Yggdrasil... every portable project makes sane choices. With Python you need pip, pinned releases, and a beefy machine.
I’ve looked at a few of the LoRa-based mesh network systems over the last couple months. They all seem to have a philosophical document of some sort, like this one, sometimes buried as part of the user docs, but none of them have clear protocol specification docs. When I look at their node maps, the node counts are absurdly small (like 20 nodes in a city of 1 million people). I suspect each of them has major scaling issues. Sure, mesh networks are great because they are more resilient, but if you trust nobody and you have no sense of a route to a destination, you’re left with flooding as your primary next-hop selection method, which means you’re going to be about as scalable as an old Microsoft LAN Manager network was in 1995 (which is to say not very). Short of reading the code, does any sort of protocol documentation (or better yet, analysis) exist for Reticulum?
> but none of them have clear protocol specification docs
This is a big turn off for me. I have seen it for a number of protocols beyond mesh ones. ESP-Hosted does this too. So does ELRS. Maybe I'm too used to reading data sheets etc, but if your protocol requires a specific implementation, I am put off by the friction: I must integrate your software, in the language you used, and will likely hit compatibility problems as a result.
Exactly. And are you going to break it one day just because, and force every node to run the new firmware version to stay active? I like implementers who show some transparency and some self-reflection about the limits of the protocol. And to be clear, looking through the other docs in the Reticulum repo, it looks like there are more protocol details there, so maybe Reticulum has that transparency.
Reticulum has no bandwidth management. If your node or the whole local area is flooded with traffic from the rest of the network there is nothing you can do.
"Reticulum does not include source addresses on any packets" and with that you cannot throttle passing-through traffic based on source. Any hope of scaling is gone.
Meshtastic is the most popular in my area. I can see about 150 nodes from my house in a US county of ~300K (though actually being able to talk to most of those remains questionable).
I am certain the popularity of Meshtastic is down to how easy they have made it to onboard. Buy the module, flash using the web flasher, install the app on your phone, done. There's a YouTube tutorial on every street corner for this, even though I (and seemingly many people) don't find Meshtastic to be all that reliable.
To be clear, I’m not harshing on Meshtastic’s node counts. In some sense these systems are like packet ham radio. They appeal to a specific user base. I just question how their protocols are going to scale, and even 150 nodes is extremely small compared to the number of people around you who are using the Internet, WiFi, etc.
So, for instance, at the URL you referenced, it says at the bottom:
> As meshes grow larger and traffic becomes more contentious, the firmware will increase these intervals. This is in addition to duty cycle, channel, and air-time utilization throttling.
> Starting with version 2.4.0, the firmware will scale back Telemetry, Position, and other ancillary port traffic for meshes larger than 40 nodes (nodes seen in the past 2 hours) using the following algorithm:
> For example an active mesh of 62 nodes would scale back telemetry.device_update_interval to 79.5 minutes instead of the 30 minute default.
It looks like they are already building back-off strategies as the net scales, and that starts to happen at very low node counts (just 40). So, what happens when node counts hit 500 or 1000? Again, not trying to throw stones; just trying to understand how far these protocols can go and how they degrade/fail as they scale. Ideally, they don’t fall over and even possibly get more robust (with more nodes, there are typically more topological connections between nodes, which provides more possible paths and resiliency).
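That 62-node figure actually pins down the curve. A minimal sketch (hedged: the linear 0.075-per-node coefficient below is reverse-engineered from the single quoted example, not taken from Meshtastic's source), extrapolated to the node counts in question:

```python
# Back-off sketch for the >40-node interval scaling quoted above.
# The coefficient is inferred from "62 nodes -> 79.5 min from a
# 30 min default"; it is an assumption, not a documented constant.
DEFAULT_INTERVAL_MIN = 30.0
NODE_THRESHOLD = 40

def scaled_interval(num_nodes: int, default: float = DEFAULT_INTERVAL_MIN) -> float:
    """Minutes between telemetry updates for a mesh of num_nodes."""
    if num_nodes <= NODE_THRESHOLD:
        return default
    return default * (1 + (num_nodes - NODE_THRESHOLD) * 0.075)

print(round(scaled_interval(62), 1))        # 79.5, matching the quoted example
print(round(scaled_interval(500) / 60, 2))  # ~17.75 hours, if the same curve held
```

If that same linear curve held at 500 nodes, device telemetry would arrive roughly every 17-18 hours, which gives some feel for how the mesh degrades (slower, not dead) as it grows.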
Reticulum is absolutely not flood routed and is not "LoRa-based" lmao. Typical hn comment.
Planetary-scale networking is mentioned as a design goal on the first page of the docs (https://reticulum.network/), which are hidden at the very top of the git repo.
Okay, maybe Reticulum isn’t strictly LoRa-based, but others are (e.g. Meshtastic), and while Reticulum works over lots of physical layers, the README specifically states “An open-source LoRa-based interface called RNode has been designed specifically for use with Reticulum.”
Exactly. Okay, that’s a great claim. How? Also, “planetary scale” is meaningless. With the right node count (low), topology, and radios, just about any mesh network can achieve “planetary scale.” But that doesn’t mean it’ll support 10 thousand users, never mind millions. There are underlying technical reasons that the Internet works the way it does.
> The mental maps we carry are dominated by a single, misleading image: The Cloud.
> To break free of the center, you must also let go of the concept of the "Address".
When I was still dealing primarily with on-prem networks in regulated environments (or cloud networks stubbornly architected in a fashion similar to on-prem ones) I worked with a lot of people that could not and would not ever understand this. It's not just a cloud thing. Some people just cling to using IP addresses for everything all the time. They don't understand why trying to access the JIRA server via IP wouldn't work because they didn't understand SNI let alone a Host Header. Dynamic record registration and default suffix settings are nothing more than a section of settings to be cruised over during clicked-in configuration. Zones can and should be split without regard for architecture or usage. Et cetera.
My theory is that because these people didn't understand Layer 7 stuff like HTTP or DNS, they just fall back to what they can look at in a console (Cisco ASA, AWS, or otherwise). IPv6 will simplify a lot of the NAT stuff, but it won't cure these people of using network addresses as a crutch. I'm not really sure what the systemic solution is - I was like this once but was fortunate enough to be tasked with migrating a set of BIND servers to the cloud, and so learned DNS by the seat of my pants. Maybe certification exams should emphasize this aspect of networking more.
> The internet we rely on today is a chain of single points of failure. Cut the undersea cable, and a continent goes dark. Shut down the power grid, and the cloud evaporates. Deprioritize the "wrong" traffic, and the flow of information is strangled.
The deep brokenness of the current internet, specifically what has become the "cloud" is something I've been thinking about a lot over the past few years. (now I'm working on trying to solve some of this - well, at least build alternatives for people).
and this:
> The way you build a system determines how it will be used. If you build a system optimized for mass surveillance, you will get a panopticon. If you build a system optimized for centralized control, you will get a dictatorship. If you build a system optimized for extraction, you will get a parasite.
This (and other passages) seems to imply that it was all coordinated or planned in some way. But I've looked into how it came to be this way, and I grew up with it; I think a lot of it stemmed from good intentions (the ethos that information should be free, etc.).
I made a short video recently on how we got to a centralized and broken internet, so here's a shameless plug if anyone is interested: https://youtu.be/4fYSTvOPHQs
But the part about the undersea cable is simply wrong! Major undersea cables have been disrupted several times and never has a "continent gone dark".
I think this betrays a severe misunderstanding of what the internet is. It is the most resilient computer network by a long shot, far more so than any of these toy meshes. For starters, none of them even manage to make any intercontinental connections except when themselves using the internet as their substrate.
Now of course, if you put all your stuff in a single organization's "cloud", you don't get to benefit from all that resilience. That sort of fragile architecture is rightly criticized but this falls flat as a criticism of the internet itself.
People naturally want to maximize the value they extract from any system.
If you hand individuals or groups the internet, they will naturally use it for spam, advertisement, scams, information harvesting, propaganda, etc - because those are what gain them the most.
The 'enshittification' of the internet was inevitable the moment it came into existence, and is the result of the decisions of its users just as much as any one central authority.
If you let people communicate with each other on a large scale at high speeds, that's what you get.
The only way to avoid the problem is to make a system that has some combination of the following:
* No one uses
* Is slow
* Is cumbersome to use
* Has significant barriers to entry
* Is feature-poor
In such a system, there's little incentive for the same bad behaviors.
We'd probably agree on this: people respond to incentives created by system design. One example that comes to mind is how London's Congestion Charge has changed traffic behavior over the years as the rules change.
There is nothing inherent about fast, large-scale, or user-friendly communication that forces spam, scams, or propaganda. It's just that those outcomes emerge when things like engagement, attention, or "reach" are rewarded without being aligned with quality, truth, or mutual cooperation.
This is a well-studied problem in economics, but also behavioral science and psychology: change the incentive and feedback structure, and behavior reliably changes.
Based on the studies I've read in and around this topic, I think harmful dynamics are not inevitable properties of communication, but really contingent on how each system rewards actions taken by participants. The solution is not slowness or barriers, but better incentive alignment and feedback loops.
Not sure I follow the allegory, could you substantiate?
I'm not sure specifically e.g. why being an engineer would put someone at an outsized disadvantage against the already hopeless notion of "understanding how the world works [in its totality?]".
One would think being smart and educated would put them ahead of the pack, even if they overestimate how smart and educated they are compared to others, or fall victim to the consequences of that - an accusation engineers commonly receive on social media, with similarly high suggestiveness and similarly little substantiation.
If creative people don't think at a systems level or a political, intersectional level when doing design, then they will completely ignore or miss the fact that engineering is a subset of a political or otherwise organizational goal.
The key problem with most engineers is that they don’t believe that they live inside a political system
I think that's an important consideration, especially with telecommunications technologies, but the author seems to have been pretty mindful of that angle from the get go, i.e. they seem to have been frustrated with the state of affairs from the beginning.
Or do you mean that to you it all reads as yet another case of someone thinking their technology is what's going to right the ship that is society's current trajectory, then bailing when that didn't come to be? Because while I can certainly see that being the case, I'd say such a cycle is as much desperation as it is naivety. I think this is even reflected in it being a PHY-agnostic thing, meaning as far as efforts go, it's a fairly enduring one.
Too bad the Zen of Reticulum is against freedom. Specifically freedom 0: the freedom to use the software for any purpose. Its restriction preventing it "from being used in systems designed to harm humans" bars it from being used by, e.g., militia groups in oppressed countries who may wish to use it to harm humans in self-defense.
A) In self-defense, you don't intend to harm humans, but are only doing so when it's down to your life or theirs. So such a system could be argued to not be designed to harm humans, but instead preserve your own life.
B) In any case, I'm OK with it. Having the software explicitly licensed like this may prevent it from being legally considered a terrorism tool or munition if a bad actor were to be found connected with it, and if that happens, that's going to have much more freedom-restricting consequences with respect to the software.
Thank you very much! I also feel that the impact of software licensing on violent groups' behavior might be low.
It is, however, interesting on principle, since it only allows the use by criminals (implicitly), and not by law enforcement. By then making the tool very impractical to use, we can punish bad actors still.
(I think there was a honeypot operation to this effect, something with feds making up a "secure encrypted phone" and then acquiring Cartels as a major customer.)
(On the off chance I just burned this very similar operation: dear feds, I'm so sorry!)
Yep. People wanting to do the right thing and prepare at scale for self-defense won't violate the license but the people it intends to restrict will, because they don't care.
It's such a strange and unfortunate addition to the project. Also, what's the point of assuming every entity is potentially hostile? Can't you just put in the license "you're not allowed to be malicious or hostile on this network"?
GP used an intentionally hostile and weird interpretation of "if you intend to subdue, enslave or kill, don't use it", which is aimed at dictatorships, organisations like Palantir, etc.
What? If I'm in a militia group that resists oppressive governments, I intend to kill people. Therefore I'm not allowed to use Reticulum to coordinate with my mates, even though most people would agree that I'm not killing immorally.
I love that Mark Qvist publishes his strong opinion, view of the world, and goals.
It quite smells like the hacker spirit of the 80s, mixed with a little spiritualism and anarchism. Very refreshing after so many other people are just disillusioned, worn out, angry, or frightened.
Not much "Zen" when it barely works under i386 (Atom n270 CPU), 1GB of RAM and a simple 80x25 terminal. Sadly, even my usual one (100x32/36) isn't good enough to please retrowannabe hipsters.
At some point we will be so tired of distinguishing between AI generated content and human content that we will stop using the Internet and it will be left to bots.