I skimmed through the Heapnet2 thread and tried to collect the main observations for folks not willing to go through all of it. Feel free to add your own; I'm sure I have missed many things:
Problems with the faucet were user errors. Using arrow-up in the terminal to reuse a command may give you the wrong faucet address.
Nodes may stay empty for a long time and then start to get chunks.
Problems with downloading were likely mostly due to bugs in the client, not loss of data. These are mostly fixed now.
Uploads are quite slow and CPU-intensive.
Downloads are extremely fast.
Nodes are very resilient to disturbances in connections.
Clients are very sensitive to disturbances in connections. Uploads and downloads break and don't resume.
Wallet corruption was a typical cause of upload problems.
I didn't see any reports of crashes. But because the net is still running, it's a bit premature to say whether the slow memory increase will lead to that. (If it ever was the reason.)
I'm planning to make this a habit: some kind of summary after every testnet has run for a while. I just hate how all the nuggets get lost in the main thread. (Where would this world be without all the hate?)
Great work guys! @Toivo, @happybeing, @Josh, @storage_guy and everyone else I didn’t mention.
Is the API ready yet to be used for development? The testnets themselves sound utterly promising and are telling me to get started with building apps.
Edit:
I'm now working with Dask for Machine Learning / AI, and one of its benefits is that it can handle huge datasets spread over different files. Now that the download speeds are looking good and consistent, it would be cool to see if I can already pull the distributed dataset chunks and feed them to Dask to perform Machine Learning on them. What I imagine is a public register of open-source large datasets; they could either be downloaded to a client or streamed in some fashion to decentralized clusters. I really think these kinds of use cases are what make Safe appealing: storage costs could be cheaper, and data should be redundant if everything works out.
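To make that concrete, here is a minimal sketch of the Dask half of the idea, assuming the dataset chunks have already been fetched to a local directory with the Safe client. The directory layout, file names, and column names are all made up for illustration; there is no Safe-to-Dask bridge yet:

```python
import dask.dataframe as dd

# Assume the Safe client has already downloaded the dataset chunks into
# downloads/large_dataset/ as part-*.csv files (hypothetical layout).
# Dask treats the glob as one logical dataframe, one partition per file,
# so the full dataset never has to fit in memory at once.
df = dd.read_csv("downloads/large_dataset/part-*.csv")

# Computation is lazy: this only builds a task graph over the partitions...
mean_by_label = df.groupby("label")["value"].mean()

# ...and nothing is actually read or computed until .compute() is called.
print(mean_by_label.compute())
```

Pointing a `dask.distributed.Client` at a cluster scheduler would run the same task graph across many workers, which is roughly the "streaming to decentralized clusters" half of the idea.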
How cool would it be if projects like the Event Horizon Telescope could publish their data here, and big data, AI, and machine learning projects could all benefit from it.
Honestly, I would say no. However, making them so is simple code-wise. We will need to press the community again to help us help them create a simple-to-use API, and it's on us to iterate towards providing that as quickly as possible. We all now know we can be very quick, and we will be.