It's now coming to the end of the 32GB average node size test, with approximately 36 hours to go.
I have over 100 nodes running (yes, with many TB of spare space) and so far the number of records being stored is not high; in fact it is very low. Across all my nodes, the maximum records stored by any node is less than 1200, and the maximum records any node is responsible for is less than 500.
The 2GB node tests had similar figures.
My question then is: how are we testing 32GB nodes when the records being stored would not even get 2GB (real max size) nodes to 50% of their max records? Barely 25%, actually.
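(For anyone wanting to check the rough arithmetic behind that 25% figure, here is a minimal back-of-envelope sketch. It assumes a 2GB node caps out at 2048 records, which is the limit the 25% claim implies; the exact constant in the node code may differ.)

```python
# Back-of-envelope check of the utilisation figures quoted above.
# ASSUMPTION (not confirmed from the node source): a 2GB node caps out at
# 2048 records, which is what the "barely 25%" figure implies.
MAX_RECORDS_2GB_NODE = 2048

max_responsible = 500   # "maximum records any node is responsible for is less than 500"
max_stored = 1200       # "maximum stored records are less than 1200 in any node"

print(f"responsible: {max_responsible}/{MAX_RECORDS_2GB_NODE} "
      f"= {max_responsible / MAX_RECORDS_2GB_NODE:.0%}")
print(f"stored:      {max_stored}/{MAX_RECORDS_2GB_NODE} "
      f"= {max_stored / MAX_RECORDS_2GB_NODE:.0%}")

# responsible: 500/2048 = 24%
# stored:      1200/2048 = 59%
# Against whatever larger cap the 32GB test nodes use, these percentages
# only shrink further.
```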
To my understanding this would be a failed test, since it fails to exercise the one parameter it set out to test. It doesn't even test 2GB nodes. Not sure who can answer this and educate me as to where I am wrong. @joshuef?
All tests test for errors. The purpose of going to 32GB nodes was to test metrics with the larger maximum record count.
But since the number of records in use is lower than even in the previous (longer) 2GB node tests, the 32GB feature does not seem to have been exercised, and so in my opinion the test of 32GB nodes has failed.
So I am waiting to see if one or more of the dev team can educate me and others as to why it's not considered a failure at testing the major reason for running 32GB nodes.
I assume that bandwidth usage / RAM usage / CPU side effects of a filling network would be easier to measure (without all the other effects that come with a test network in the wild) on a MaidSafe-internal network consisting of just 100 nodes or so … pretty easy to fill, pretty easy to compare to another network with 8 / 16GB nodes.
Comparing this network of 170k nodes to the previous ones … you never know whether additional effects somehow kicked in due to the size of the network.
…but of course I'm not someone from the team, and I too assume something else was being tested here, because the fact that those nodes won't fill easily would have been clear from the start =D …
Not to derail, but how can we have a decentralized web with chunk sizes that large? I suspect that no website on it is going to be at all snappy when having to pull such large chunks. Or is this going to change again when we get the native token?
Yea, it's all fine on Digital Ocean with snappy 1 or 10 Gbit/sec connections and ultra-fast interconnects within each datacentre and between their datacentres. That doesn't mirror real-life ISPs and undersea cables fighting with 100 other ISPs, and distances further than US to western Europe.
Still, @JimCollinson, your post did not answer the question of why run a public beta test of 32GB nodes if there was no plan to do the uploads to match it. It seems a wasted test to me.
Educate me as to how this beta test wasn't a failure, given uploads were never raised to match the size.
I understand you may not know the answer, but I wish to understand, so please ask others if needed.
So is this 64GB true size (16384 max records) or 32GB true size (8192 max records)?
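(As a quick worked example of what those two record caps imply, assuming a 4MB maximum record size, which is the chunk size mentioned later in the thread; if the real per-record cap differs, the totals scale accordingly.)

```python
# True node capacity implied by a given max-record count.
# ASSUMPTION: every record can be up to 4 MB; this is taken from the 4MB
# chunk size discussed in the thread, not a confirmed node constant.
MAX_RECORD_SIZE_MB = 4

for max_records in (8192, 16384):
    capacity_gb = max_records * MAX_RECORD_SIZE_MB / 1024
    print(f"{max_records} records x {MAX_RECORD_SIZE_MB} MB = {capacity_gb:.0f} GB true size")

# 8192 records x 4 MB = 32 GB true size
# 16384 records x 4 MB = 64 GB true size
```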
Chunk size is fixed - or at least it used to be … everything seems to be changing these days so who knows. It’s becoming a completely different network.
I'd like to know what will differentiate us from Filecoin and Arweave now … it seems we are losing so many marketable features that we will have no value at launch … which begs the question: why do I need to be able to sell a worthless token? Why not just figure out a means of launching the real end product that was promised, so our token will have real value in the end?
Yes, but images are getting larger and larger. I would expect image sizes of 12 MB and above, meaning 3 x 4MB chunks or 4 x 3MB chunks depending on the exact size.
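(Taking that chunking arithmetic literally, here is a minimal sketch assuming roughly equal-sized chunks with a 4MB per-chunk cap and a minimum of three chunks, which is how self-encryption has historically split data; the current implementation may differ.)

```python
import math

# Rough sketch of self-encryption-style chunking: roughly equal chunks,
# at most 4 MB each, minimum of three chunks. These rules are assumptions
# based on past behaviour, not a confirmed spec of the current code.
MAX_CHUNK_MB = 4
MIN_CHUNKS = 3

def split(file_mb: float) -> tuple[int, float]:
    num_chunks = max(MIN_CHUNKS, math.ceil(file_mb / MAX_CHUNK_MB))
    return num_chunks, file_mb / num_chunks

for size in (12, 13, 16):
    n, each = split(size)
    print(f"{size} MB image -> {n} chunks of ~{each:.2f} MB")

# 12 MB image -> 3 chunks of ~4.00 MB
# 13 MB image -> 4 chunks of ~3.25 MB
# 16 MB image -> 4 chunks of ~4.00 MB
```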
We are in a period where a lot of the world is still very limited in upload speeds, with the majority of Australians on 20Mbit/sec or less upload speed and some on 40Mbit/sec.
Getting anything faster means very high-priced connections and a two-year wait.
Unlike the EU, with 500Mbit/sec and higher available to large population areas.
Since chunks are now encrypted client side again … I think … chunks could appear larger on disk again, I guess … but it's probably a fast AES encryption which doesn't change sizes … so there should be chunks with smaller sizes too.
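(For what it's worth, a stream-style AES mode is length-preserving, which is the property assumed above. A tiny illustration using Python's cryptography package, with AES-CTR chosen purely for the example; which mode the node actually uses is not confirmed here.)

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Illustration only: AES in CTR mode (a stream-style mode) produces ciphertext
# of exactly the same length as the plaintext, so encryption alone would not
# inflate chunk sizes on disk.
key = os.urandom(32)      # 256-bit key
nonce = os.urandom(16)    # CTR nonce/counter block

plaintext = os.urandom(3 * 1024 * 1024)  # a ~3 MB "chunk"
encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

assert len(ciphertext) == len(plaintext)
print(f"plaintext {len(plaintext)} bytes -> ciphertext {len(ciphertext)} bytes")
```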
You are talking about the supply side - I'm referring to the demand side. What makes us special for those who want to store/retrieve data?
We won't have native fast transactions until when? So that's out for marketing now.
We won't have a fast network or a decentralized web. We won't have NRS (or so I've heard), nor will we have search, so the decentralized web is really broken at this point on many levels.
What's unique for consumers here? Why do I need it?
If we are going to release a product after 18 years, it should leapfrog competitors or there is literally no point.
I am derailing now, so I will not post further on the matter.