Update 17th October, 2024

Most questions and remarks here come from people who like clear, technical reasoning. However, some of the answers haven’t been able to convince them or remove the uncertainty that is present. That, combined with a changed approach, is testing people over here.

Very often it’s a combination of things and I do think that’s the case over here:

  • New approach within the team due to new people - rebuild trust in a short period of time
  • Time pressure to deliver - less time to go into detailed discussions on the forum
  • Time pressure to deliver - dev team has to work with less certainty as well, make decisions even though they aren’t 100% sure
  • Community doesn’t get answers in the same elaborate way as they are used to, which makes people concerned.

These factors combined add up to an uncomfortable situation. Some people say: “relax and have faith”; others say: “show us the proof, then I can relax”. And with less than 2 weeks to go, one group says “let’s see what happens” while the other says “we need to know for certain, so let’s pause and dive in”.

And in my personal opinion, as long as there is a way to improve the network after the launch on the 29th, which seems to be the case, we can’t really go wrong, provided we clearly communicate that at this moment it’s still a network in public beta.

I would love to read long technical answers that take away our concerns, but I understand if the focus right now is on making the deadline of the 29th.

However, do realize that if you convince the critical people over here, they can and will be your strongest supporters, not just because they have faith but because they know. We were always a group of people that distanced itself from the crypto sphere to focus on the tech, the principles, and on doing good, not on money or quick fame. That is part of the identity of this forum, and moving away from it is causing friction, on both an emotional and a rational level.

13 Likes

Fortunately the initial launch on the 29th is a network that will be completely reset in late Jan '25. So we have plenty™ of time for changes to happen, and the network may even be reset a few times. The blockchain will keep a record of any testing earnings that happened. A block explorer will keep a tally much more efficiently than the auditor application has been.

I think one of the issues is the max chunk size, because after the launch of the permanent/persistent data network in late Jan the max chunk size cannot be reduced. It can be increased with some upgrade(s), but reducing it is really difficult since nodes are already storing the large chunks, and node sizing is max chunk size times max records.

Also, most comments I’ve read are trying to assist the team by offering expert knowledge, and in my case experience gained through continuing education and hard knocks over 5 decades. It’s fundamental knowledge that network engineering courses haven’t taught for a long time; these days it’s basically comms engineers who are taught it and protocol writers who learn it. Instability concerns a lot of people here.

But yes, the team structure and interface has changed, and as you say, in most part due to the urgency of ensuring we have a stable network at the Jan '25 stage of the rolling launch.

10 Likes

Is that true?! I hadn’t picked up on that. I thought the network to be started on 29th October was, barring a catastrophe, supposed to be THE network that would carry on through whatever happens in January 2025.

9 Likes

I have neither a link nor authority, but AFAIK that’s exactly the case: the temporary token and data won’t survive past the end of January… I thought it was on docs.autonomi.com but somehow I can’t see it now…

If it were the final network, I guess the mood here would be different, with a collapse 2 weeks before the point of no return and without a clear diagnosis or proof that the issue which made the network collapse is now history.

8 Likes

Yes, the TGE requires a complete reset to go to the live autonomi token.

Many can think of the recent beta networks as the alpha launch, the 29th as the beta launch, and Jan '25 as the next stage of the rolling launch. It’s being called a rolling launch, and the official terms are that we are coming to the end of beta: the 29th is the 1st stage of the rolling launch, and the 2nd stage is launching the live Autonomi token at the end of Jan '25.

11 Likes

My assumption was not correct. The behaviour I mention would affect only the node operator and not the whole network.

1 Like

I would like to point out that the obvious violations of resource allocation (CPU, bandwidth, storage and RAM) are probably not the most common sources of shunning for the Autonomi-literate crowd. They can probably calculate the limits of those resources on their own setup adequately.

But, it seems to me, the most likely culprit of shunning, and the one hardest to estimate, is the Maximum Concurrent Connections (MCC) limitation of one’s own router. For stock routers from ISPs, I believe a common figure is about 30,000 MCC; for MikroTik routers with 1GB RAM, about 1,000,000. If you allow connection surges to approximate 1,000 connections per node (worst case), that means on a normal home router, allowing some overhead for regular household application usage, you should probably limit the number of nodes you’re running to about 25, no matter how many PCs are being used for node-running. On a MikroTik router you should be able to run about 800-900 max.

Edit: To be safe, you should probably allow about 2,000 connections per node for catastrophic surge situations, making the max # of nodes on a dedicated MikroTik router around 500.
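A rough back-of-envelope sketch of the arithmetic above; the MCC and connections-per-node figures are the assumptions stated in this post, not measured values, and the household-overhead allowance is my own placeholder:

```python
# Node limit implied by a router's Maximum Concurrent Connections (MCC).
# All figures are assumptions from the post above, not measurements.

def max_nodes(router_mcc, conns_per_node, household_overhead=5_000):
    """Rough upper bound on nodes one router can support (overhead is a guessed allowance)."""
    return (router_mcc - household_overhead) // conns_per_node

print(max_nodes(30_000, 1_000))      # stock ISP router, ~1,000 conns/node surge -> 25
print(max_nodes(1_000_000, 1_000))   # MikroTik-class, 1GB RAM -> 995 (800-900 with extra headroom)
print(max_nodes(1_000_000, 2_000))   # catastrophic surge allowance -> 497, i.e. ~500
```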

3 Likes

Another is the maximum size of the chunks being uploaded. At 4MB, cheap ISP routers can get close to running out of memory. You see, the node will send the 4MB chunk at 1Gb/s to the router, and the router buffers it while sending it out. If a few nodes are doing this, like when a large amount of churn is happening and many nodes are sending multiple chunks to new nodes (new to the network, or newly become close), then the buffering of all these chunks will quickly fill a cheap ISP router. A cheap MikroTik with say 64MB will fare a little better, and a MikroTik with 1GB should do fine if you’re not running too many nodes.

Thus if someone is running 25 nodes as you suggest and a large churn event happens, maybe when many nodes are joining the network, or 10% leave suddenly and many other nodes become responsible for chunks your 25 nodes hold, then just 1 chunk from each is asking the router to buffer 100MB, which cheap routers cannot, so a lot of packets are dropped and the requesting nodes will strike the nodes whose packets were dropped.

For a MikroTik with 1GB, even 250 nodes could cause the router to run out of memory in a large churn event.

In a churn event a single node could be asked for 10 or 20 or 50 chunks, and if those requests come in from a few other nodes out there, then in a short time the node is sending those 10 or 20 or 50 or more chunks over the 1Gb/s LAN to the router. Then multiply that by a few nodes and we see another way the limits get hit due to chunk size. 1/2MB chunks need 1/8 of the router buffer memory, and the load is spread across many people’s nodes. @joshuef this is just another tip of wisdom for you, from experience and from falling into this problem before on other projects in the past. Increasing the data block size has many consequences and traps. This is just another one of them.

EDIT-Note: a 4MB chunk is around 2,500 packets, and the router stores additional info to keep track of these packets while buffering them. Home routers have limits, unlike a VPS with full-speed links to the data centre backbone, or commercial routers with more than enough memory to buffer 1Gb/s in and 1Gb/s out.
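A rough sketch of the buffer arithmetic above; the chunk size and node count are the figures from this discussion, and the MTU is an illustrative assumption, so the packet count is approximate:

```python
# Router-buffer arithmetic for a churn event, using the figures discussed above.

CHUNK_SIZE_MB = 4      # max chunk size under discussion
NODES = 25             # nodes behind one home router (from the earlier post)
MTU_BYTES = 1500       # assumed Ethernet MTU; real packet counts vary with overheads

# Every node forwarding just one chunk at roughly the same time:
print(CHUNK_SIZE_MB * NODES)                      # -> 100 MB of buffering asked of the router

# Packets per 4MB chunk (the note above estimates ~2,500):
print(CHUNK_SIZE_MB * 1024 * 1024 // MTU_BYTES)   # -> ~2,796, each with per-packet tracking state

# The same event with 0.5MB chunks needs 1/8 of the buffer:
print(0.5 * NODES)                                # -> 12.5 MB
```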

10 Likes

A personal observation: during the Great Collapse of the last testnet, my 257 nodes, run locally from behind a MikroTik with 1GB, made it through fine: no shunning and thousands of nanos.

7 Likes

That is approximately what I’d expect. Not all the nodes would be supplying chunks to others, and those that are may not be doing it at the same time.

Also, if you have a fast uplink then the buffering is not so intense, since the packets are flying out fast. I.e. with a 1Gb/s LAN and a >= 1Gb/s uplink the buffer is little used. If it were a 40Mb/s uplink then the buffer would be fairly full or running out.
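A small sketch of why the uplink speed matters; the line rates are the example figures from this post, and the timings ignore protocol overheads and any backpressure from the transport:

```python
# Buffer pressure when the LAN side is faster than the uplink (example figures only).

LAN_MBPS = 1000        # 1Gb/s LAN from node to router
UPLINK_MBPS = 40       # slow uplink example from the post
CHUNK_MB = 4

print((LAN_MBPS - UPLINK_MBPS) / 8)   # -> 120 MB/s of net buffer growth while a chunk streams in
print(CHUNK_MB * 8 / LAN_MBPS)        # -> 0.032 s for a 4MB chunk to arrive from the LAN
print(CHUNK_MB * 8 / UPLINK_MBPS)     # -> 0.8 s for the same chunk to drain out at 40Mb/s

# With a >= 1Gb/s uplink the drain keeps up with the arrival and the buffer stays nearly empty.
```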

7 Likes

Yes, forgot to mention: I’m on a symmetric gigabit internet connection.

8 Likes

Great to see continued progress. Maybe I’m getting old but I have trouble remembering the new name sometimes. It doesn’t roll off the tongue or seem to have any significance or meaning. And the logo reminds me of a dismembered eyeball. Just my 2 cents. I don’t really care as long as the progress of the network is in a positive direction.

6 Likes

Love it :slight_smile:

I am now officially a Friend of the Dismembered Eyeball.

7 Likes

Now I finally know what to call it. Definitely making its way into my Christmas greetings this year. Have you figured out the giffy rotating circles/moons/other eyeball on the autonomi home page? Can’t figure out for the life of me wtf that is/is supposed to be…

As for the dev update, I say kudos!! Break this sucker as many different ways as you need to to get that data. (2MB is the magic though, just sayin’.) But my reasoning comes from my perception of the heart of MaidSafe, in particular the “everyone” part, and the spare resources part. I doubt the majority of the world has, as “spare” resources, enough juice to handle what we just threw at our nodes.

I know, especially with the impending launches, that we need some big numbers of nodes, sooner rather than later. From the heart of the project, that would be tens and hundreds of thousands of yous and mes with our little Pis and Josh’s phone and a bunch of machines not being used in robust development environments. I may be overstepping, but I suspect the value of several “people” with capacity for 10K+ nodes each looks rather enticing to the network right now. I would have to guess some of those would indeed be very interesting devs/teams, maybe some LLM wizards and the like. It stands to reason we would a) happily invite them to help beta, and b) cater, at least visibly, to (some of) their preferences/needs.

I suspect there are at least a handful of players NOT in the Discord channel, but the 2330 people that ARE couldn’t possibly be anywhere near enough to build the network. Sooooo, we find ourselves in a bit of a quandary then, regarding a network built on “farms” vs a network built from “spare resources.” The former is likely readily available (think: deadline), and the latter is a longer, slower process of continuing growth from devout Dimitars inviting everybody and their cousins. The former likely prefers the larger chunk; the latter will likely be limited in their ability/capacity to participate.

If the team’s internal testing shows 4MB to be the magic #, and the bigger-picture strategy agrees, then so be it. I think the only dissenting votes come from my router and crappy up speed, those who wish to run bazillions of nodes to (hopefully) bank a few coins during beta, and neo, who I think really, really wants it to run smoothly, for everyone. If bigger chunks = many people running fewer nodes and few “people” running farms… still MAID, still SAFE.

The bigger issue at hand, I think, is the ERC/crypto hurdle. That one changes the game in a major way, and betrays the SAFE part IMO. I understand it, and I tend to agree, but it’s just going to be a bit much for a good number of folks with no familiarity/sense of safety/desire to fool with crypto. It’s already been a helluva task to recruit, even with the beta “rewards.” How do you espouse this project in the form of an elevator pitch? Unless you’re talking to a techie, you can’t. The devout, those with the savvy, and those with the 10K node capacity will carry the project through to the actual launch. After that, though, we’re going to need a real slick pitch and a butter-smooth interface to onboard the masses. A little slot to put my Visa/Maestro in would be nice.

F!!@#!orget about Calm,
and Keep on doing that thing that mighty Ants do!!

12 Likes

Edit to remove because what’s the point? I shouldn’t even be spending time on this forum at this point. Just old habits from an old time.

I’m not going to go into the past to inspect or justify my tone etc., but I want to correct the idea that I got what I wanted, or got anything out of that. What happened was only what Jim had said was planned all along, which was definitely not what I wanted.

Notice that he didn’t respond to my offer (which you thought was grand, although it was something I was already doing, so not from my point of view). He has not said the dev forum will no longer be left to die, or that they are going to review it in the light of the discussions and input from their own team. All he did was arrange to move it to a new IP, with the continued statement that they don’t want it long term.

That discussion has, as he kept pointing out, been going on for a long time, and it always goes the same way, so yes, I was pissed off about that.

I don’t believe I’ve treated anyone with less respect than I have been treated. I haven’t itemised or mentioned the slights which I was subjected to, or the pattern over the year. I’d say I have been more respectful. I also have not criticised the developers at all; in fact Jim is the only person I made specific criticism of, and it was only to highlight the way the community is being treated, using actual examples. Since Jim has been almost the only person interacting with us until recently, I didn’t have any other concrete examples to use at the time. I said at the time it wasn’t meant to be about Jim. However, I continue to feel he does not respond to the points I’ve made, while he says he believes he has.

I’m not so bothered now, only answering your criticism of my comments and ‘tone’.

Self-exile was a joke. I said I’d step back to review how I felt. I’m back but obviously not the way I was previously.

Yep, I sense things - not a robot - the Captchas agree. :rofl: I’m even emotional sometimes. Some call it passionate.

My presence now is for different reasons than before but I don’t care to elaborate as part of that is spending much less time on the forum, and to focus my time differently. Thanks for acknowledging the past, though that was to me for a very different project. Hence the change.

What I have said more than once is that I’m not here to sabotage. My commitment to the original goals remains so I will criticise or highlight issues which are visibly moving us away from them, or in my opinion are sabotaging them.

The reason I do that is because others here have not attempted to show where I’m wrong in what I’ve said, yet continue to ignore that and continue wanting everyone to stay ‘positive’ regardless.

Today’s forum quota: 66%. :wave:

9 Likes

For Pete’s sake dude, stop defending yourself. You’re already in the Hall.

4 Likes

Looking forward to Tuesday. May it bring better things. :smile:

Whichever way any individuals here feel, once we get over the growing pains and have some toys to play with (API’s), I’m guessing we’ll all be playing a lot nicer.

No hard feelings folks. Enjoy the rest of the weekend!

11 Likes

Forgive me for valuing clarity and so wanting to provide it. :man_shrugging:

5 Likes

Then I stand corrected and I’m kind of glad that is the situation! Clearly things are not ready.

Although, where is the incentive for me or anyone else or - arguably more importantly - the partners to upload their stuff only to have to upload it again later?

4 Likes