Update 21st November, 2024

Fair bit of negativity round here, but the good news is that files that go up stay up and are downloadable at good speeds.

Here are some download results with `time`. All checksums are good, so I am feeling positive about how things are going, even if I miss the team coming into the forum to discuss everything.

autonomi client built with git version: stable / 0205c20 / 2024-11-12
🔗 Connected to the Network
Fetching file: "BegBlag.mp3"...
Successfully downloaded data at: e731c67a6be8c6abc052ed17338f540ab650e969d5a86571c1cb9821738a7956
real    0m1.478s
user    0m1.926s
sys     0m1.473s

autonomi client built with git version: stable / 0205c20 / 2024-11-12
🔗 Connected to the Network
Fetching file: "Patosh-RoughNight.mp3"...
Successfully downloaded data at: a1eeaa9f064c7491774ddc7c768e7d8f0b26b1d508d99b521dfe95eb447f0408
real    0m1.455s
user    0m1.585s
sys     0m1.206s

autonomi client built with git version: stable / 0205c20 / 2024-11-12
🔗 Connected to the Network
Fetching file: "An_Introduction_To_The_SafeNetwork.mp4"...
Successfully downloaded data at: 41eaab3ae781d1f394f135a374f97f26ac4deece5f220187560295741a6cb602
real    0m1.574s
user    0m1.822s
sys     0m1.545s

autonomi client built with git version: stable / 0205c20 / 2024-11-12
🔗 Connected to the Network
Fetching file: "autonomi.mp4"...
Successfully downloaded data at: 47429ce548856340a1a96fefb6c0becc24cbed7d57e477a94f425cec3aad00cc
real    0m1.169s
user    0m1.166s
sys     0m0.960s

autonomi client built with git version: stable / 0205c20 / 2024-11-12
🔗 Connected to the Network
Fetching file: "AnarchyInTheSouthside.mp3"...
Successfully downloaded data at: 78d772e35cc36964aa1ac37aac52de8b4678d150f29f90df5fac67843d52930a
real    0m5.130s
user    0m8.261s
sys     0m5.449s

autonomi client built with git version: stable / 0205c20 / 2024-11-12
🔗 Connected to the Network
Fetching file: "Best_CD_Ever.mp3"...
Successfully downloaded data at: 0fd628ef31ac942ba366a4c3fe6ac909f2762a956cede8ad9e041087cf7d8a21
real    0m4.728s
user    0m6.840s
sys     0m5.088s

autonomi client built with git version: stable / 0205c20 / 2024-11-12
🔗 Connected to the Network
Fetching file: "Deep_Feelings_Mix.mp3"...
Successfully downloaded data at: 3de033c03bcd3126e79847e2f5ec9ba5918b963236f6d18ffeebca7073e6d993
real    0m11.350s
user    0m19.257s
sys     0m12.800s

autonomi client built with git version: stable / 0205c20 / 2024-11-12
🔗 Connected to the Network
Fetching file: "linuxmint-22-cinnamon-64bit.iso"...
Successfully downloaded data at: 66262472deb06347604d33610b5a9a5cea6c0fd6b55c9ba98fff884026f1396d
real    3m11.130s
user    5m44.205s
sys     3m13.891s

for anyone who wants to give a download a go :slight_smile:

time autonomi file download e731c67a6be8c6abc052ed17338f540ab650e969d5a86571c1cb9821738a7956 . # BegBlag.mp3 md5sum b5cbbfb4fd311c0913972f885cada4e8
time autonomi file download a1eeaa9f064c7491774ddc7c768e7d8f0b26b1d508d99b521dfe95eb447f0408 . # Patosh-RoughNight.mp3 md5sum 452d1231d72489503ce73bc504b3da6e
time autonomi file download 41eaab3ae781d1f394f135a374f97f26ac4deece5f220187560295741a6cb602 . # An_Introduction_To_The_SafeNetwork.mp4 md5sum 35a9d8e9a28be554e00fb18c4d824913
time autonomi file download 47429ce548856340a1a96fefb6c0becc24cbed7d57e477a94f425cec3aad00cc . # autonomi.mp4 md5sum 9c7af792fa736307402e5ca2c33494c2
time autonomi file download 78d772e35cc36964aa1ac37aac52de8b4678d150f29f90df5fac67843d52930a . # AnarchyInTheSouthside.mp3 md5sum 6d14a6fb2ea801521cc4afccbd20d26a
time autonomi file download 0fd628ef31ac942ba366a4c3fe6ac909f2762a956cede8ad9e041087cf7d8a21 . # Best_CD_Ever.mp3 md5sum 4c643c4961173459cfec8d629edf2ac9
time autonomi file download 3de033c03bcd3126e79847e2f5ec9ba5918b963236f6d18ffeebca7073e6d993 . # Deep_Feelings_Mix.mp3 md5sum b965900758377e1c52198bae42344418
time autonomi file download 66262472deb06347604d33610b5a9a5cea6c0fd6b55c9ba98fff884026f1396d . # linuxmint-22-cinnamon-64bit.iso md5sum fb2701694cccc6035d4965ac1d55d7e0
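
And if you want to script the checksum step after downloading, here is a minimal sketch in Python (the filenames and md5sums are copied from the commands above; it assumes the files have already been fetched into the current directory, and simply skips any that are missing):

```python
import hashlib
import os

# Expected md5sums, copied from the download commands above.
EXPECTED = {
    "BegBlag.mp3": "b5cbbfb4fd311c0913972f885cada4e8",
    "Patosh-RoughNight.mp3": "452d1231d72489503ce73bc504b3da6e",
    "An_Introduction_To_The_SafeNetwork.mp4": "35a9d8e9a28be554e00fb18c4d824913",
    "autonomi.mp4": "9c7af792fa736307402e5ca2c33494c2",
    "AnarchyInTheSouthside.mp3": "6d14a6fb2ea801521cc4afccbd20d26a",
    "Best_CD_Ever.mp3": "4c643c4961173459cfec8d629edf2ac9",
    "Deep_Feelings_Mix.mp3": "b965900758377e1c52198bae42344418",
    "linuxmint-22-cinnamon-64bit.iso": "fb2701694cccc6035d4965ac1d55d7e0",
}

def md5_of(path: str) -> str:
    """Hash the file in 1 MiB blocks so large files (e.g. the ISO) don't fill memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def verify(path: str, expected: str) -> bool:
    """Compare a file's md5 against the expected digest and report the result."""
    ok = md5_of(path) == expected
    print(("OK   " if ok else "FAIL ") + path)
    return ok

if __name__ == "__main__":
    for name, digest in EXPECTED.items():
        if os.path.exists(name):
            verify(name, digest)
```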
21 Likes

Good test; how about uploads?

2 Likes

What about uploads? They cost nanos and could be faster, but that's improving something that's already working.

Large uploads fail a lot of the time, either unable to get quotes or failing the verification check, but I am guessing those are client-side problems, not node or network problems.

So I think we are getting there, one step at a time :slight_smile:

7 Likes

I uploaded an archive with my test website and scripts just fine. The CLI tool is a bit basic, but does the job.

Sometimes you get a timeout and the retry can cost attos too, but it works. Once the files are up, they stay up (from all my impromptu testing during sn_httpd dev).

In fact, sn_httpd now supports archives properly (as of last night) and the file map is no longer needed as a result - it just uses the archive for lookups.

It needs a bit more TLC before I share it, but it's coming along nicely. Caching the archive between calls will speed things up too (next on the list!).

7 Likes

Thanks, very helpful :slight_smile:

3 Likes

Thanks for your kindness :blush:

4 Likes

great update - thanks!

I haven't done any testing myself, but having a stable network that doesn't fall over seems like huge progress to me.

Reading through the posts here I do understand some of the concerns, and it's great to have a critical community that gives honest feedback. But when in doubt I would suggest cutting the team some slack.

Maybe I'm being naive, but when I perceive a lack of communication I think to myself: they are probably busy launching the next internet…
When I read about Autonomi Labs I thought: great, another entity that will develop on the network (rather than their intention being to defraud investors etc.).
When timelines are missed, I remind myself that at least there are timelines now :slight_smile: I mean, we are coming from a place where we avoided giving timelines at all because they would always be missed by years (decades? :wink:).

I understand many of us are heavily invested, told friends and family to invest etc., and are therefore under pressure. That is the case with me as well.

But the fact that we are really launching after all this time should make us feel :star_struck: :boom: :heart:

21 Likes

Agree with everything you say @BambooGarden. I think the main concern for those of us giving constructive criticism is: what are we going to launch? I honestly believe most people giving the criticism have the network's best interests in mind, not their own pockets.

Edit: A simple message in the weekly update would suffice, i.e. "we are still working on the API, white paper and Dave, and will give an update when we have one". Not mentioning any of them for a week or two, when it was said they would be ready a few weeks ago, is something that needs to be acknowledged.

10 Likes

The phrase that springs to mind - you cut the cloth to fit the table.

We have a launch date, and I suspect the team will try to launch with as many features as possible while remaining stable.

For me, it's all about the core. Nodes and payments/rewards.

API can wait, as annoying as it may be for app devs. CLI same. UI same.

I know a code freeze has been mentioned, but I'm going to suggest it will be most chilly in the node software. Outside of that, I would expect continued development up to and beyond launch.

Just imo though.

4 Likes

Just to clarify!

When we are talking about a code freeze, this is just about updates to the main branch for the 7 days preceding a release. So, for example, from the 10th to the 17th of December we're locking in the code changes on main so the release can proceed smoothly.

Development will still continue (apace!), but we just need that clear space for the deployment and release to proceed in an orderly fashion.

And just to re-state…

We have checkpoints for two major updates which could, and likely will, include breaking changes: the 17th of December and the 21st of January.

There can still be updates and changes outside and alongside these, but they would be in the form of network upgrades, which would be backwards compatible… same as for updates after the 21st of January.

Hope that clears things up!

20 Likes

Hey John, thanks for that info! Someday for sure. I think it's a long road but definitely, especially because I'm wanting to use biometric data in a novel way to influence a user's particular playlist.

I'll take note of this

3 Likes

Have a wee snippet

if BP >= 160/110:
    play https://www.youtube.com/watch?v=Z9XnpyymntM
else:
    play https://www.youtube.com/watch?v=9O3pACq13fs

1 Like

I want one of these! Lol

1 Like

It's like a scene out of the dystopian black comedy Brazil. :slight_smile:

Love the cooling innovation. :slight_smile:

2 Likes

To be clear, from my point of view this is not at all about being negative :wink:

It's about ensuring a successful launch that can handle the onslaught of nodes being added, and likely removed and then added again because of misconfigurations, so that the overall network stays performant despite the configuration gaffes of newbie safenode fleet operators on whatever systems they decide to employ. Those gaffes will happen: most people do not read the manual thoroughly, nor do they comb through the mountains of advice scattered throughout the forum and Discord.

We all caught a glimpse of that gaffe at scale last week, when several thousand nodes went offline. OK, the network self-healed, but it's definitely not a good thing, and we should not be naive enough to think competing projects wouldn't want to sabotage us in just such a manner :confused:

The big challenge: how do MaidSafe/Autonomi prevent such a 'big bag of lemmings' event, where performance jumps off a cliff and into the loo, dangles on a very long bungee cord bouncing around a while, almost drowning, with a slow recoil back to normal, when a lot of nodes depart the network abruptly?

For many reasons, Autonomi's launch success will be measured by the number of newbies onboarding their own fleets of safenodes and having a great experience out of the gate. Having the network grind almost to a halt because of a mass of newbie misconfigurations is not a good look, and it would scuttle the launch. IMO, Autonomi is going to get ONE shot at onboarding a lot of nodes at launch with a marketing push, and as such the network must be resistant to such shenanigans, planned or unplanned.

Hence, a community test network is the best place to simulate such non-nefarious, unplanned big-loss-of-nodes events, because given the current state of the documentation for system configuration, one of the primary use cases, "install some new nodes on any old PC", is going to result in a lot of misconfigured newbie fleets. Part of the challenge, IMO, is for MaidSafe and the Autonomi community to tighten up the marketing here by pointing users to clear, concise documentation on how a system operator should run a fleet of nodes, so they set their system up for success and optimal reward earnings. That means giving newbie safenode fleet operators reasonably accurate configuration advice for the system they want to deploy, so their configuration just works, they start earning fairly quickly, and their user experience is joyful, with a huge likely-to-recommend factor as the result. Of course, the flip side is 'Dave' client uploads; ditto for that.

The current SOTU for Autonomi is NOT serving the newbie with good configuration advice, and that is THE biggest hole in the documentation right now. It is a hole which CAN be filled, backed up with proper 'black box' testing across different system types, OSes and the related assigned compute, memory, ISP connection, local network and storage resources, to largely avoid the newbie misconfiguration events that would only result in a bad fleet-operator experience. For sure nobody wants that bad-experience 'word of mouth' floating around, so let's work together, MaidSafe and the community, to cut it off at the pass: improve the node operator system configuration documentation per system type, do the black box testing on that documentation, and ensure newbies build quality safenode fleet configurations based on quality advice, plugging that big gaping hole.

Now for the DARK SIDE of a big loss of nodes. The planned, nefarious version of a 'giant loss of nodes' event requires MaidSafe and the community to come up with a 'resilience' method, possibly built into the network itself: new connections made to new peers to re-expand the close group, and/or close groups merging in certain instances, with chunks replicated and transferred without spending a lot of time shunning nodes, so that each safenode recovers from such a nefarious mass shutdown in a non-panicked, graceful-degradation manner. Not easy stuff to do, for sure, but doable nonetheless, now that the mass-loss-of-nodes behaviour has been revealed to cause serious increases in system resource use: CPU levels go way up, which creates a sharp drop in performance and an on-system contagion effect, badly affecting the performance of other local safenodes running on the same system, which in turn affects the performance of their connected peers in the same close group, which leads to a stop in reward earnings. Not good.

But having followed the MaidSafe team and the community on and off since 2017, I am sure we are collectively smart enough to proactively add some value and innovate here, so that this type of nefarious 'loss of lots of nodes' action is addressed with graceful recovery, logged to identify the culprits, reduced to a minor hiccup so operator and user confidence is restored quickly, and transparently reported ASAP to all node operators via the node launchpad (add a messaging system). That last part needs operator-permissioned logging to be turned on per affected safenode, so that what is actually going on can be reported to MaidSafe in real time; hence the earlier suggestion of Solace and employing event pub/sub brokers to catch all that permissioned logging traffic, make sense of it quickly, and message the operators quickly.

My 2 cents…
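
The pub/sub part of this can be sketched with a minimal in-memory broker in Python. The topic name, node id and event fields below are invented purely for illustration; a real deployment would publish to a proper broker (such as the Solace products mentioned above), not an in-process dictionary:

```python
from collections import defaultdict
from typing import Callable

class Broker:
    """Toy event broker: nodes publish permissioned log events to topics,
    and monitoring subscribers receive them as they arrive."""

    def __init__(self):
        self._subs = defaultdict(list)  # topic -> list of handler callbacks

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Deliver synchronously to every subscriber of this topic.
        for handler in self._subs[topic]:
            handler(event)

broker = Broker()
alerts = []
# A monitoring service watches for nodes dropping off the network.
broker.subscribe("node/offline", alerts.append)
# A node (hypothetical id and fields) reports going offline under CPU load.
broker.publish("node/offline", {"node_id": "node-42", "cpu": 97})
print(alerts)  # [{'node_id': 'node-42', 'cpu': 97}]
```

The point of the broker in the middle is that MaidSafe's monitoring and any operator messaging system can subscribe to the same event stream without the nodes knowing who is listening.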

4 Likes

I hear where you are coming from, but I'm sure MaidSafe would reach out for our help if they needed it.

Without their input, we don't know what they have or haven't done, nor whether they are struggling. We just have speculation that there is a big black hole on the testing front.

The truth is, we don't know the status. Nor do MaidSafe seem particularly panicked by it. For me, this suggests either there isn't a problem or MaidSafe are incompetent. I can't believe it is the latter, considering the complexity of where we have arrived, which makes me feel everything is going mostly to plan.

If MaidSafe want or need our help, I'm sure we will hear the call to arms.

But sure, if folks want to spend a lot of time and effort doing community testing, why not? Just don't feel put out if that effort feels wasted or unappreciated.

7 Likes

This is exactly the time imo to start rapidly transitioning part of the responsibility for network black box testing to the Community.

That requires a MaidSafe commitment to funding an ongoing MaidSafe-and-community collaborative cloned production reference platform, to support application development, testing of new features, applying bugfixes and validating that they work, and replicating problems seen in the production network.

It's a proposed forward-looking partnership aimed at creating a sustainable, quality experience for node operators, users and developers, which also needs to be linked to governance in managing change-request and new-feature priority and scheduling. :wink:

3 Likes

A statement one way or another on this would bring clarity, and hopefully stop a lot of us OGs worrying unnecessarily.

If we are assured, with evidence, that it's all in hand, then there is no need for anyone to feel put out.

1 Like

I have a question for the dev team: have you considered this?

3 Likes

This will be solved simply when they set the QUIC window size to something reasonable, rather than the default data-centre figure of 10MB.

Once it is at 128KB, that will be well under the 512KB or 1MB they say not to exceed. The 4MB chunk will be split into data blocks the size of the QUIC window, each sent in turn, waiting for the ROK from the other end before the next is sent. This also greatly reduces the issues with ISP-supplied routers, whose buffers are generally relatively small. UDP was designed as a protocol that can survive dropped packets, but Autonomi cannot, and TCP has built-in data window limits of 1 packet or 7 packets (7 from memory, or is it 10?).
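
The block-by-block behaviour described above can be sketched as a toy stop-and-wait loop. This is an illustration only: the 4MB chunk size is from the post, the 128KB window is the poster's suggested figure rather than a confirmed setting, and in reality the flow control lives inside the QUIC implementation, not application code:

```python
CHUNK_SIZE = 4 * 1024 * 1024   # Autonomi chunk size mentioned above
WINDOW = 128 * 1024            # suggested (not confirmed) QUIC window

def send_chunk(chunk: bytes, window: int, ack) -> int:
    """Send `chunk` in window-sized blocks, waiting for an ack per block.

    `ack` models the receiver: it takes a block and returns True once that
    block has been received, at which point the next block goes out.
    Returns the number of blocks sent.
    """
    sent = 0
    for off in range(0, len(chunk), window):
        block = chunk[off:off + window]
        while not ack(block):   # stop-and-wait: nothing else is in flight
            pass
        sent += 1
    return sent

# A 4 MB chunk at a 128 KB window takes 32 block round trips,
# versus a single burst at the 10 MB default.
blocks = send_chunk(b"\0" * CHUNK_SIZE, WINDOW, lambda b: True)
print(blocks)  # 32
```

The trade-off the post is pointing at: more round trips per chunk, but each burst fits comfortably in a small home-router buffer instead of arriving as one 4MB flood.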

13 Likes