yeah … i’m using the cli since i hadn’t yet set up a python environment on the cloud machine and just wanted to do a “fast test” … which ironically the official cli doesn’t seem to be the ideal tool for
but why would adding 6 bytes to an archive take longer than a few ms …? (and why is the slow operation the default …? shouldn’t the default be fast and powerful, with the cheaper and slower one hidden behind a flag …?)
but since i still only upload 3 chunks with the cli (which uses the archive), this means it’s a data structure that gets generated client-side, then serialized to a bytestream … which then gets chunked and uploaded …
does the cli/API upload serialized Rust objects when uploading archives? or is it the bytes of an in-memory gzip or so?
No, even a single chunk of public data goes into an archive when using the cli (I think). The team should confirm; I am really just starting to wrap my head around this.
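If that reading is right, the pipeline would look roughly like this. A minimal Python sketch, where the archive fields, the JSON serialization, and the plain fixed-size chunking are all illustrative assumptions; the real client builds a Rust struct, serializes it with a Rust serializer, and self-encrypts the bytestream into chunks:

```python
import json

CHUNK_SIZE = 4 * 1024 * 1024  # the 4MB chunk size discussed in this thread

# Hypothetical stand-in for the client-side archive structure:
# a map from file paths to (data address, metadata).
archive = {
    "index.html": {"addr": "a" * 64, "size": 1234},
    "style.css": {"addr": "b" * 64, "size": 567},
}

# Struct -> bytestream (JSON here purely for illustration).
serialized = json.dumps(archive).encode("utf-8")

# Bytestream -> fixed-size chunks for upload.
chunks = [serialized[i:i + CHUNK_SIZE]
          for i in range(0, len(serialized), CHUNK_SIZE)]

# Even a tiny archive still occupies (at least) one whole chunk upload.
print(len(chunks))
```

Which would also explain why a one-file upload still ends up as multiple chunk uploads: the data chunk(s) plus the serialized archive on top.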
interesting how slowly the price reacts when people don’t buy/sell in one go but slowly trade in 5-10k ANT volumes … and not the 50-100k in-one-go moves xD …
@dirvine how about simply dropping the archive function for now … stripping out functionality that isn’t needed and that adds no capability, just complexity (with maybe some savings … on large data blobs that compress well)
true … we were told “a lot of stuff was changed - it’s silly to focus on the 4MB” … it was never revisited in any public test … the window size @neo suggested was never tested … and the network stopped working for me with the introduction of the 4MB chunks … since then i can participate at best via a vps …
I am not sure we have a QUIC issue, if you look at the go impl, it went with a much higher max window than the rust one.
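For context on why the max window matters: a QUIC receive window caps throughput at roughly window / RTT, since at most one window of data can be in flight per round trip. A quick back-of-the-envelope sketch (the window sizes here are illustrative, not the actual quinn or quic-go defaults):

```python
def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on throughput for a window-limited connection:
    at most one receive window of data per round trip."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

# Illustrative numbers only: a 1 MiB window at 100 ms RTT
# caps out far below a 16 MiB window on the same link.
small = max_throughput_mbps(1 * 1024 * 1024, 100)   # ~84 Mbit/s
large = max_throughput_mbps(16 * 1024 * 1024, 100)  # ~1342 Mbit/s
print(round(small, 1), round(large, 1))
```

So a client with a small default window would look fine on low-latency links and fragile on high-latency ones, which fits the “works for some, not for others” pattern.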
I am not sure here, guys. If some clients are working well in some configs, then that is where the difference between working and not working lies. I mean at the client, not at the node or the chunk size or anything else; it looks like the client somehow, since the client sits outside the network looking in. So if some are working and some are not, then I think we need to consider why.
There may be many more optimisations on the node, I think we all feel that, but this upload issue seems fragile at the client side.
Do you think something else would be capable of dropping my wifi? Honestly, I’ve never seen that before, which makes me think something isn’t right there.
Until I added a MikroTik router, hosting nodes would do similar things.
The router isn’t the worst either - the Fritz!Box routers are relatively premium as ISP-provided jobbies go.
Edit: note that the wifi router isn’t doing NAT - the MikroTik is. The wifi router is literally only doing wifi.
No need. If you don’t want to use it, you don’t have to. I was using my own structure for this but have switched to using Archive for compatibility, which, as you can see from Southside’s upload, means people don’t have to use dweb to upload a website that can be viewed in dweb.
Maybe the Python API doesn’t allow you to ignore the Archive yet, but it will be there. It has to be!