Largest to smallest for sorting purposes.
Here in the States it is mm/dd/yyyy. I've lived here 11 years and not a day passes without me being annoyed by it.
From 2015: during the Zapchain AMA that is going on right now, the following question came up:
“Hey Vitalik, I heard you say many times that the problem with blockchains
and scalability is that every node has to process every transaction.
MaidSafe has a consensus mechanism based on close groups around nodes,
which allows for scaling. Do you foresee any way something similar could
work for blockchains?”
His answer:
"Maidsafe’s approach is very similar fundamentally to the random-sample
paradigm I describe in my document:
However, I have not yet seen from Maidsafe a good answer for what happens when
one of these “close groups” fails (eg. 5 nodes disappear, 28 of 32 nodes
get bribed, etc). I would consider a system that doesn’t answer that
problem hopelessly fragile, and likely in practice to eventually break
and not be able to recover. Now if Maidsafe does come up with a good
answer to the problem (or if they already have), I will be quite glad to
change my opinion; though I suspect they’ll end up reinventing my
fallback schemes and challenge lottery mechanisms."
Is this solved? I mean, is there an answer to Vitalik's question?
Using @Josh's script, stats from my node - it had a flurry of activity today:
------------------------------------------
Timestamp: Wed Sep 06 19:05:25 Europe 2023
Node: 12D3KooWE91hsqhuinZxZYwvpNj8SZwBqvnnmDGR4CD1GHuHtcB2
PID: 1551
Memory used: 69.3594MB
CPU usage: 2.7%
File descriptors: 895
Records: 169
Disk usage: 70MB
Rewards balance: 0.000000358
------------------------------------------
Jad
What he is asking is: what happens if a network loses consensus? He is considering a group as the whole network in this analogy, so let's stick to that.
The first questions you need to ask/answer are:
- Can a rogue node group create data → No!
- Can a rogue group alter transactions → No! (in our design)
- Can a rogue group re-order transactions → No! (in our design)
So what attack specifically are we talking of? If it's the network creating something or altering an existing thing, then no, there is no such attack. This is fundamental. So what can a rogue group do? If blockchain thinking, they can rewrite history. Not here though, so perhaps the thought experiment is focused on a blockchain world, and in that world a rogue group is a dead network.
This is the key point, understanding the attack.
Now let's dive deeper and look at the data types:
- Chunks - immutable and self validating, quantum proof encrypted.
- Register - mutable (append only) and signed by owner
- SNT transfer - immutable, signed by owner and forced to be unique (or it’s flagged as a doublespend attempt).
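As a rough illustration of why a rogue group can't create or alter chunks, here's a minimal sketch of content addressing. SHA-256 here is purely illustrative; the real network's self-encryption and addressing scheme differ in detail.

```python
import hashlib

def chunk_address(content: bytes) -> str:
    # A chunk's network address is derived from its content,
    # so any recipient can check that data matches the address.
    return hashlib.sha256(content).hexdigest()

def verify_chunk(address: str, content: bytes) -> bool:
    # A rogue group cannot substitute different content:
    # the hash would no longer match the requested address.
    return chunk_address(content) == address

data = b"immutable chunk data"
addr = chunk_address(data)
print(verify_chunk(addr, data))         # True
print(verify_chunk(addr, b"tampered"))  # False
```

This is why "self validating" matters: the group storing a chunk has no say over what the chunk *is*, only over whether it serves it.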
So what's the actual attack here? Well, it's DoS really: the refusal to give data when asked. In the currency, it may be colluding with the owner to try to create a doublespend (giving different transactions to different nodes). This attack breaks down quickly, though. But regardless.
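A toy sketch of why the "give different transactions to different nodes" collusion is detectable rather than accepted; the spend-id keying and storage model here are hypothetical stand-ins, not the actual SNT design.

```python
# Hypothetical unique-spend check: each transfer is forced to be unique
# per spend id, so conflicting versions are flagged, not silently stored.
seen: dict = {}  # spend_id -> transaction bytes first seen

def accept_transfer(spend_id: str, tx: bytes) -> str:
    if spend_id not in seen:
        seen[spend_id] = tx
        return "stored"
    if seen[spend_id] == tx:
        return "duplicate (already stored)"
    return "doublespend attempt flagged"

print(accept_transfer("spend-1", b"pay Alice"))  # stored
print(accept_transfer("spend-1", b"pay Bob"))    # doublespend attempt flagged
```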
So we have a network where the big attack is DoS. How can we prevent DoS? In reality, in the thought experiment we can state we DoS every single node in the network, but that's highly unlikely and also renders every network dead.
So what kind of DoS? Say a group is Sybil attacked (we have defences for that, but regardless).
We have some backup here:
- DAG nodes (audit)
- Archive nodes (data)
Transactions can be written also to DAG nodes for extra security (and likely most will). Data will also be written to archive nodes.
So DoSing a group does not break the network; it's kind of a useless thing to do. At most it's a vandalism attack that any network can fall foul of. For instance, what happens if you DoS every ETH validator? What about only DoSing the nodes you don't control?
So it’s deeper now.
The introduction of DAG and archive nodes means there is a requirement for only 1 of those to be honest, just 1 (same for a close group, just one needs to be honest). The reason is that the network or group cannot create anything and it cannot edit anything; all it can do is not give data when asked, but it has no ability to create valid data.
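The "just one honest node" point can be sketched like this. This is a hypothetical client-side fetch with invented node objects; since the data is self-validating, a forged response is simply ignored, so any single honest responder suffices.

```python
import hashlib

def fetch(address, nodes):
    # Data is self-validating, so the client needs only ONE honest node:
    # forged responses fail the hash check and are skipped.
    for node in nodes:
        content = node.get(address)
        if content is not None and hashlib.sha256(content).hexdigest() == address:
            return content
    return None  # worst case is withholding (DoS), never forgery

class HonestNode:
    def __init__(self, store):
        self.store = store
    def get(self, addr):
        return self.store.get(addr)

class RogueNode:
    def get(self, addr):
        return b"forged data"  # could also return None to withhold

data = b"real chunk"
addr = hashlib.sha256(data).hexdigest()
nodes = [RogueNode(), RogueNode(), HonestNode({addr: data})]
print(fetch(addr, nodes) == data)  # True: one honest node is enough
```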
So back to the question of a rogue close group. If that group has no authority, then what can it actually achieve?
Anyway, I hope this gives a small bit of detail of the security of SAFE. It's not about an honest network; it's a data network powered by people, with people's data, for people. The network resembles a massive, open-to-everyone hard disk. If some sectors go bad there is redundancy, and there is a way to heal the bad bits (likely not for launch, but the solution to bad sectors is very simple).
I should add, the random sample thing was taken by Avalanche and extended (gossip to larger samples) and indeed shown to work. Algorand is slightly similar in that respect.
I’m going to form a rouge node group – working title The Facepowder Fannies
Anyhow - an excellent rebuttal. I hope Vitalik gets to see it soon and responds.
Now that Algorand appears to have been knobbled, hobbled or otherwise destroyed, do you see any mileage in reaching out to Silvio Micali as a fresh and very authoritative set of eyes on the project? Or indeed some of his team, who themselves can also muster an impressive set of braincells and may be at a loose end right now.
I am not sure really. All help welcomed but reaching out is hard and distracting in all honesty. I had not heard of Algorand’s issues? Do you have some info on that?
Ah, I got it (perplexity.ai). It seems it was not the network but some wallet types that were an issue.
Yes, I know, you are too busy to reach out and it all takes away from progress NOW - I fully understand.
I heard a week or so ago that Algorand fell foul of the SEC and IIRC were basically shutting up shop. I may well be wrong in the detail but I’ll check and let you know. Seemed like a sad end to a very promising project that certainly enthused me and many others. Before it became obvious that Scotcoin was a total dead end with the existing management, I had wanted to base it off Algorand - cos I knew it would be some years until SAFE would get to where we are now.
From perplexity.ai:
What is the Algorand Foundation’s response to the SEC lawsuit?
Sources:
1. Algorand down 10% despite Foundation claims that it isn't security - CryptoSlate (https://cryptoslate.com/algorand-down-10-despite-foundation-claims-that-it-isnt-security/)
2. Algorand in new SEC suit: What should ALGO holders expect now - AMBCrypto
3. [PDF] Case 1:23-cv-01599 Document 1 Filed 06/05/23 Page 1 of 136 - SEC.gov (https://www.sec.gov/files/litigation/complaints/2023/comp-pr2023-101.pdf)
4. Algorand is named as security in SEC lawsuit against Binance and Changpeng Zhao. Other projects that are mentioned are Cardano, Solana, Polygon and so on. - Reddit
5. [PDF] Case 2:23-cv-00580 Document 1 Filed 04/17/23 Page 1 of 56 - SEC.gov (https://www.sec.gov/files/litigation/complaints/2023/comp-pr2023-78.pdf)
6. SEC's Gensler called ALGO 'great technology' in 2019: SEC now deems it a security - Cointelegraph
Answer:
According to the search results, the Algorand Foundation has refuted the SEC's allegations that the ALGO token is a security [1]. The Foundation's CEO, Staci Warden, has stated that they would not respond to other SEC allegations because they are not a party to the lawsuit [1]. However, if the SEC is successful in its lawsuit against Bittrex, Algorand Foundation may be required to pay penalties [2]. The situation has been complicated by the resurfacing of a video featuring SEC Chair Gary Gensler praising the Algorand blockchain platform in 2019 [6]. Some in the industry have accused Gensler of being a former shill of Algorand and its founder, Silvio Micali [6]. Overall, the Algorand Foundation has refuted the SEC's allegations, but the outcome of the lawsuit with Bittrex is uncertain, and it may take some time for the case to be resolved [1][2].
I was hoping to do my usual scripted uploading of random data for later verification (using a script I’ve not written yet… but anyway) but I’ve found the uploading of a folder containing 100 1MB files and a checksum file fails.
Running the upload manually gives just ‘killed’.
ls temp | wc -l
101
time safe files upload temp
Built with git version: 8faf662 / main / 8faf662
Instantiating a SAFE client...
🔗 Connected to the Network Preparing (chunking) files at 'temp'...
Making payment for 404 Chunks that belong to 101 file/s.
Killed
real 0m37.272s
user 0m32.123s
sys 0m6.758s
The cutoff point seems to be that 21 1MB files plus the checksum file works but 22 doesn’t.
I've noticed that when it fails I don't get the "Transfers applied locally" message.
If I get some more time later I'll try to determine what the important factor is (number of files, chunks, cost, etc.), but I feel I should report it just now.
But I’ll get the uploading going in batches of just 10 1MB files so we can all get some more payments!
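The batching idea above could be scripted roughly like this. It assumes the `safe files upload` CLI shown elsewhere in the thread accepts a single path; the staging logic is hypothetical scaffolding, not a tested uploader.

```python
import subprocess
from pathlib import Path

def batches(items, size):
    # Split a list of files into fixed-size batches.
    return [items[i:i + size] for i in range(0, len(items), size)]

def upload_in_batches(folder, batch_size=10, run=False):
    files = sorted(Path(folder).iterdir())
    for batch in batches(files, batch_size):
        for f in batch:
            if run:  # guarded so the sketch has no side effects by default
                subprocess.run(["safe", "files", "upload", str(f)], check=True)
    return len(batches(files, batch_size))

print(len(batches(list(range(101)), 10)))  # 11 batches for 101 files
```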
Any estimates on how many nodes are still running to share in these payments? MaidSafe has 2k, another 5-600 from the community?
51 here.
Funny number, but I always like to have one on its own on an AWS instance for testing things, and then however many I can sensibly put on another one of the type I like to use because it seems the most cost efficient - t4g.medium - on as many instances as I can justify at the time. Which is only 1 at the moment.
Now that the RAM issue doesn’t seem to be as severe I think I’ll try 100 on one though. In the past I’d have killed the nodes and started new ones but I don’t want to lose my rewards!
Maybe I’ll look at transferring them. I know it’s toy money at the moment but it’s good to get into good practices.
I just have a t2.micro I am about to restart and try again. I had 50 nodes running last night but only got chunks on one. I didn't feel too bad about stopping it and losing 0.000000006 tokens…
This is exactly what we want to get close to, because soon it will be real money. As we get closer, I hope this becomes a more compelling feeling from all these amazing and unaffordable testers.
This is great in my opinion, 1 whole SNT is worth a hell of a lot in this scenario.
Local Timestamp: Wed Sep 06 19:45:07 EDT 2023
Global (UTC) Timestamp: Wed Sep 06 23:45:07 UTC 2023
Number: 0
Node: 12D3KooWDmwehwNtj3eEKgBEaifVhwcxoLnPpmKriCNmcABc1C7c
PID: 2754
Status: running
Memory used: 89.5938MB
CPU usage: 3.3%
File descriptors: 1447
Records: 106
Disk usage: 40MB
Rewards balance: 0.000000192
------------------------------------------
Local Timestamp: Wed Sep 06 19:45:08 EDT 2023
Global (UTC) Timestamp: Wed Sep 06 23:45:08 UTC 2023
Number: 1
Node: 12D3KooWEsrvjVsZt3rneC4w1gyMMXHTGjCJ8UbZ5puqJzSWEAfq
PID: 2738
Status: running
Memory used: 96.8203MB
CPU usage: 3.6%
File descriptors: 1383
Records: 97
Disk usage: 37MB
Rewards balance: 0.000000150
------------------------------------------
Local Timestamp: Wed Sep 06 19:45:08 EDT 2023
Global (UTC) Timestamp: Wed Sep 06 23:45:08 UTC 2023
Number: 2
Node: 12D3KooWPXGWH5yVR4jmK1QPMGB6YzD1RKoo4t7j712KfkxKnAaP
PID: 2826
Status: running
Memory used: 82.0391MB
CPU usage: 2.9%
File descriptors: 1264
Records: 123
Disk usage: 51MB
Rewards balance: 0.000000226
------------------------------------------
Local Timestamp: Wed Sep 06 19:45:09 EDT 2023
Global (UTC) Timestamp: Wed Sep 06 23:45:09 UTC 2023
Number: 3
Node: 12D3KooWHsaEN3VZDdAmQhnxSAHTxKH7bkpxkYVLbNhgPPjKPHsU
PID: 2706
Status: running
Memory used: 82.5352MB
CPU usage: 3.2%
File descriptors: 1122
Records: 159
Disk usage: 62MB
Rewards balance: 0.000000200
------------------------------------------
Local Timestamp: Wed Sep 06 19:45:09 EDT 2023
Global (UTC) Timestamp: Wed Sep 06 23:45:09 UTC 2023
Number: 4
Node: 12D3KooWQxQN7vz9sRxSfLYLCASeAjozYWkCPEmAy3vcka4wM92J
PID: 2802
Status: running
Memory used: 75.0664MB
CPU usage: 2.6%
File descriptors: 1217
Records: 93
Disk usage: 36MB
Rewards balance: 0.000000160
------------------------------------------
Local Timestamp: Wed Sep 06 19:45:09 EDT 2023
Global (UTC) Timestamp: Wed Sep 06 23:45:09 UTC 2023
Number: 5
Node: 12D3KooWAu6pFForNFfHbc6Mhypjy5dJn9LoJVkXovgeB7d6Szd4
PID: 2810
Status: running
Memory used: 85.3008MB
CPU usage: 3.0%
File descriptors: 1321
Records: 78
Disk usage: 31MB
Rewards balance: 0.000000140
------------------------------------------
Local Timestamp: Wed Sep 06 19:45:10 EDT 2023
Global (UTC) Timestamp: Wed Sep 06 23:45:10 UTC 2023
Number: 6
Node: 12D3KooWShazzctfbEqAqzhbje6R3ev7e8SHYXjAKJcJt5YqS2RT
PID: 2818
Status: running
Memory used: 98.1875MB
CPU usage: 3.7%
File descriptors: 1411
Records: 94
Disk usage: 37MB
Rewards balance: 0.000000164
------------------------------------------
Local Timestamp: Wed Sep 06 19:45:10 EDT 2023
Global (UTC) Timestamp: Wed Sep 06 23:45:10 UTC 2023
Number: 7
Node: 12D3KooWPGiNsvt5GibB9QkYh4DVxoXHkMjTSwkCQnW6VTsNB7m1
PID: 2722
Status: running
Memory used: 84.4141MB
CPU usage: 3.1%
File descriptors: 1135
Records: 141
Disk usage: 55MB
Rewards balance: 0.000000302
------------------------------------------
Local Timestamp: Wed Sep 06 19:45:10 EDT 2023
Global (UTC) Timestamp: Wed Sep 06 23:45:10 UTC 2023
Number: 8
Node: 12D3KooWHU4378ztLGcmHqjUZfLM57J1D2F2MD2BiqgcyYWwTeJs
PID: 2770
Status: running
Memory used: 85.8242MB
CPU usage: 3.4%
File descriptors: 1301
Records: 144
Disk usage: 57MB
Rewards balance: 0.000000254
------------------------------------------
Local Timestamp: Wed Sep 06 19:45:11 EDT 2023
Global (UTC) Timestamp: Wed Sep 06 23:45:11 UTC 2023
Number: 9
Node: 12D3KooWSXSbQyPs9HP8BiW9jcRKkGRFhsMWuexgSsD97BciHRW7
PID: 2714
Status: running
Memory used: 87.3984MB
CPU usage: 3.2%
File descriptors: 1250
Records: 65
Disk usage: 28MB
Rewards balance: 0.000000110
------------------------------------------
Local Timestamp: Wed Sep 06 19:45:11 EDT 2023
Global (UTC) Timestamp: Wed Sep 06 23:45:11 UTC 2023
Number: 10
Node: 12D3KooWNGd8JW8NLyE5fLPXgqEvXnT5vm7WDRBG6W1XSTNpFTdY
PID: 2730
Status: running
Memory used: 76.6719MB
CPU usage: 2.6%
File descriptors: 1034
Records: 107
Disk usage: 39MB
Rewards balance: 0.000000158
------------------------------------------
Local Timestamp: Wed Sep 06 19:45:11 EDT 2023
Global (UTC) Timestamp: Wed Sep 06 23:45:11 UTC 2023
Number: 11
Node: 12D3KooWR5aBEASMyEQkMLufdrzv5TC3fQTRcK4YYFHeoWzxggkV
PID: 2762
Status: running
Memory used: 81.3281MB
CPU usage: 2.7%
File descriptors: 1244
Records: 100
Disk usage: 38MB
Rewards balance: 0.000000178
------------------------------------------
Local Timestamp: Wed Sep 06 19:45:12 EDT 2023
Global (UTC) Timestamp: Wed Sep 06 23:45:12 UTC 2023
Number: 12
Node: 12D3KooWKk8mQQZNM6W8Wahezp6oxdcYpBcNXZ3c5ZXj9v8o6z8T
PID: 2746
Status: running
Memory used: 102.602MB
CPU usage: 4.0%
File descriptors: 1604
Records: 198
Disk usage: 85MB
Rewards balance: 0.000000270
------------------------------------------
Local Timestamp: Wed Sep 06 19:45:12 EDT 2023
Global (UTC) Timestamp: Wed Sep 06 23:45:12 UTC 2023
Number: 13
Node: 12D3KooWQd9tkGMN5qjZ4J4gBJS3NDMJ8Qy9YdvEvNLfYAknK8wp
PID: 2778
Status: running
Memory used: 83.8867MB
CPU usage: 3.1%
File descriptors: 1327
Records: 174
Disk usage: 70MB
Rewards balance: 0.000000298
------------------------------------------
Local Timestamp: Wed Sep 06 19:45:12 EDT 2023
Global (UTC) Timestamp: Wed Sep 06 23:45:12 UTC 2023
Number: 14
Node: 12D3KooWJsrhB8L2bSj6rQ7SCCi23X3AEib7UFb8Vwa91wYwp1BP
PID: 2794
Status: running
Memory used: 76.2773MB
CPU usage: 2.7%
File descriptors: 1145
Records: 110
Disk usage: 43MB
Rewards balance: 0.000000212
------------------------------------------
Local Timestamp: Wed Sep 06 19:45:13 EDT 2023
Global (UTC) Timestamp: Wed Sep 06 23:45:13 UTC 2023
Number: 15
Node: 12D3KooWLRGcEefHnPRKC5PHdQRSfE3F1GAkzQpGmVB6sdeqnJ3f
PID: 2786
Status: running
Memory used: 86.6875MB
CPU usage: 3.1%
File descriptors: 1274
Records: 174
Disk usage: 69MB
Rewards balance: 0.000000310
Some uploads, if anyone wants to try retrieving them for added network exercising:
ubuntu@RewardNetNodesouthside01:~/.local/share/safe/node$ time safe files upload -c 20 ~/uploads/MD-11/Engines/
Built with git version: 8faf662 / main / 8faf662
Instantiating a SAFE client...
🔗 Connected to the Network Preparing (chunking) files at '/home/ubuntu/uploads/MD-11/Engines/'...
Making payment for 26 Chunks that belong to 8 file/s.
Transfers applied locally
After 16.272996392s, All transfers made for total payment of Token(550) nano tokens.
Successfully made payment of 0.000000550 for 8 records. (At a cost per record of Token(550).)
Successfully stored wallet with cached payment proofs, and new balance 199.999997318.
Successfully paid for storage and generated the proofs. They can now be sent to the storage nodes when uploading paid chunks.
Preparing to store file 'pw4460_3.xml' of 3681 bytes (4 chunk/s)..
Preparing to store file 'tscp700-4e.xml' of 2498 bytes (1 chunk/s)..
Preparing to store file 'cf6-80c2d1f_1.xml' of 3687 bytes (4 chunk/s)..
Preparing to store file 'pw4460_1.xml' of 3681 bytes (4 chunk/s)..
Preparing to store file 'cf6-80c2d1f_3.xml' of 3687 bytes (4 chunk/s)..
Preparing to store file 'cf6-80c2d1f_2.xml' of 3687 bytes (4 chunk/s)..
Preparing to store file 'pw4460_2.xml' of 3681 bytes (4 chunk/s)..
Preparing to store file 'direct.xml' of 101 bytes (1 chunk/s)..
Starting to upload chunk #3 from "pw4460_3.xml". (after 0 seconds elapsed)
Starting to upload chunk #0 from "tscp700-4e.xml". (after 0 seconds elapsed)
Starting to upload chunk #0 from "pw4460_3.xml". (after 1 seconds elapsed)
Starting to upload chunk #1 from "pw4460_3.xml". (after 2 seconds elapsed)
Starting to upload chunk #3 from "cf6-80c2d1f_1.xml". (after 0 seconds elapsed)
Starting to upload chunk #2 from "pw4460_3.xml". (after 3 seconds elapsed)
Starting to upload chunk #3 from "pw4460_1.xml". (after 0 seconds elapsed)
Starting to upload chunk #3 from "cf6-80c2d1f_3.xml". (after 0 seconds elapsed)
Starting to upload chunk #3 from "cf6-80c2d1f_2.xml". (after 0 seconds elapsed)
Starting to upload chunk #3 from "pw4460_2.xml". (after 0 seconds elapsed)
Starting to upload chunk #0 from "direct.xml". (after 0 seconds elapsed)
Starting to upload chunk #0 from "cf6-80c2d1f_1.xml". (after 5 seconds elapsed)
Starting to upload chunk #1 from "cf6-80c2d1f_1.xml". (after 6 seconds elapsed)
Starting to upload chunk #2 from "cf6-80c2d1f_1.xml". (after 7 seconds elapsed)
Starting to upload chunk #0 from "pw4460_1.xml". (after 6 seconds elapsed)
Starting to upload chunk #1 from "pw4460_1.xml". (after 7 seconds elapsed)
Starting to upload chunk #2 from "pw4460_1.xml". (after 7 seconds elapsed)
Starting to upload chunk #0 from "cf6-80c2d1f_3.xml". (after 7 seconds elapsed)
Starting to upload chunk #1 from "cf6-80c2d1f_3.xml". (after 8 seconds elapsed)
Starting to upload chunk #2 from "cf6-80c2d1f_3.xml". (after 9 seconds elapsed)
Uploaded chunk #3 from "pw4460_3.xml" in 21 seconds)
Starting to upload chunk #0 from "cf6-80c2d1f_2.xml". (after 15 seconds elapsed)
Uploaded chunk #1 from "pw4460_3.xml" in 20 seconds)
Starting to upload chunk #1 from "cf6-80c2d1f_2.xml". (after 16 seconds elapsed)
Uploaded chunk #3 from "pw4460_1.xml" in 18 seconds)
Starting to upload chunk #2 from "cf6-80c2d1f_2.xml". (after 17 seconds elapsed)
Uploaded chunk #2 from "pw4460_3.xml" in 21 seconds)
Starting to upload chunk #0 from "pw4460_2.xml". (after 18 seconds elapsed)
Uploaded chunk #0 from "cf6-80c2d1f_1.xml" in 17 seconds)
Starting to upload chunk #1 from "pw4460_2.xml". (after 19 seconds elapsed)
Uploaded chunk #3 from "cf6-80c2d1f_2.xml" in 21 seconds)
Starting to upload chunk #2 from "pw4460_2.xml". (after 20 seconds elapsed)
Uploaded chunk #2 from "pw4460_1.xml" in 15 seconds)
Uploaded chunk #1 from "cf6-80c2d1f_3.xml" in 14 seconds)
Uploaded chunk #1 from "cf6-80c2d1f_1.xml" in 18 seconds)
Uploaded chunk #2 from "cf6-80c2d1f_1.xml" in 18 seconds)
Uploaded chunk #0 from "tscp700-4e.xml" in 27 seconds)
Uploaded "tscp700-4e.xml" in 27 seconds
Successfully stored 'tscp700-4e.xml' to 1ded0aab043e4310ff1c085aba8651def37840735ecaf73b48f875924c7db173
Uploaded chunk #2 from "cf6-80c2d1f_3.xml" in 13 seconds)
Uploaded chunk #3 from "pw4460_2.xml" in 21 seconds)
Uploaded chunk #3 from "cf6-80c2d1f_1.xml" in 25 seconds)
Uploaded "cf6-80c2d1f_1.xml" in 25 seconds
Successfully stored 'cf6-80c2d1f_1.xml' to 4d7ef133e97baf37d93f2877773c4fc37e139f43706fd28e5146675ad24e5918
Uploaded chunk #0 from "cf6-80c2d1f_3.xml" in 15 seconds)
Uploaded chunk #0 from "pw4460_3.xml" in 28 seconds)
Uploaded "pw4460_3.xml" in 29 seconds
Successfully stored 'pw4460_3.xml' to 1af7a2a039cd9df1fa7e7a338a51d0ce76361ddb00637b0c6a9d168cf4da415c
Uploaded chunk #0 from "pw4460_1.xml" in 18 seconds)
Uploaded chunk #0 from "direct.xml" in 21 seconds)
Uploaded "direct.xml" in 21 seconds
Successfully stored 'direct.xml' to f85ac7db29ea5afa5ed6db7aff5acb2cd9e6cd5fdd80d4730a4574a15553426a
Uploaded chunk #3 from "cf6-80c2d1f_3.xml" in 25 seconds)
Uploaded "cf6-80c2d1f_3.xml" in 25 seconds
Successfully stored 'cf6-80c2d1f_3.xml' to 8f987b1cd730d78f2b0c14c759c8be3688f1d8f6053982d280cfd24be5a221c0
Uploaded chunk #0 from "cf6-80c2d1f_2.xml" in 12 seconds)
Uploaded chunk #1 from "cf6-80c2d1f_2.xml" in 11 seconds)
Uploaded chunk #2 from "cf6-80c2d1f_2.xml" in 12 seconds)
Uploaded "cf6-80c2d1f_2.xml" in 29 seconds
Successfully stored 'cf6-80c2d1f_2.xml' to 919d5e144cc565ad9a5cf32f074501cf1744d35635f88a18ad32a6a389da065d
Uploaded chunk #0 from "pw4460_2.xml" in 12 seconds)
Uploaded chunk #1 from "pw4460_2.xml" in 11 seconds)
Uploaded chunk #2 from "pw4460_2.xml" in 11 seconds)
Uploaded "pw4460_2.xml" in 31 seconds
Successfully stored 'pw4460_2.xml' to d1f0740dfbd5aad0be81332f9d48b822565da447c0280c761e508ef46e04f485
Uploaded chunk #1 from "pw4460_1.xml" in 35 seconds)
Uploaded "pw4460_1.xml" in 42 seconds
Successfully stored 'pw4460_1.xml' to 51cd6655e68b4c1d69676be9702eed88276cc740ab715d1243e132981b82f2e0
Writing 439 bytes to "/home/ubuntu/.local/share/safe/client/uploaded_files/file_names_2023-09-06_23-54-03"
A larger directory failed:
ubuntu@RewardNetNodesouthside01:~/.local/share/safe/node$ time safe files upload -c 20 ~/uploads/MD-11/Sounds/
Built with git version: 8faf662 / main / 8faf662
Instantiating a SAFE client...
🔗 Connected to the Network Preparing (chunking) files at '/home/ubuntu/uploads/MD-11/Sounds/'...
Making payment for 441 Chunks that belong to 110 file/s.
Error: Failed to send tokens due to Network Error Not enough store cost quotes returned from the network to ensure a valid fee is paid.
26 chunks good, 441 bad.
I think you have the right idea. To elaborate…
I would think that the node should provide three things:
- the price
- an expiration time indicating how long the price is valid for.
- a signature
Together, these represent a quote.
Then, so long as the node receives from the client a correct payment with the corresponding (valid) quote within the time window, it should store any associated data. To clarify, the payment should be at least the quoted fee, but it could also be more, if the client wishes to pay extra for some reason.
I think that accepting fees that are close but not quite what was quoted is a hack, and ultimately not correct behavior.
Edit: I'm aware that Safe Network does not utilize time. Still, I think it would work to specify a deadline time according to the node's clock. Worst case, the fee+data would not be accepted by that node… doesn't seem so bad. And this encourages node operators to keep their clocks correct. Alternatively, the interval could be based on a number of 'ticks' (operations) of the network clock, if such a concept exists.
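A minimal sketch of the quote scheme described above (price + expiry + signature). HMAC over a JSON body stands in for a real node signature, and all field names are made up for illustration.

```python
import hashlib
import hmac
import json
import time

NODE_KEY = b"node-secret"  # stand-in for the node's real signing key

def make_quote(price: int, ttl_secs: int, now=None) -> dict:
    # A quote is the price, an expiration time, and a signature over both.
    quote = {"price": price, "expires_at": (now or time.time()) + ttl_secs}
    payload = json.dumps(quote, sort_keys=True).encode()
    quote["sig"] = hmac.new(NODE_KEY, payload, hashlib.sha256).hexdigest()
    return quote

def accept_payment(quote: dict, paid: int, now=None) -> bool:
    # The node stores data iff the quote is its own, unexpired,
    # and the payment is at least the quoted fee (more is fine).
    body = {k: v for k, v in quote.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(NODE_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(quote["sig"], expected)
            and (now or time.time()) < quote["expires_at"]
            and paid >= quote["price"])

q = make_quote(price=148, ttl_secs=60, now=1000.0)
print(accept_payment(q, paid=148, now=1010.0))  # True
print(accept_payment(q, paid=147, now=1010.0))  # False: underpaid
print(accept_payment(q, paid=148, now=2000.0))  # False: expired
```

This keeps the acceptance rule exact (no "close enough" fees) while still letting the client overpay deliberately.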
More uploads, but with a failure at the end:
ubuntu@RewardNetNodesouthside01:~/.local/share/safe/node$ time safe files upload -c 20 ~/uploads/MD-11/Fonts/
Built with git version: 8faf662 / main / 8faf662
Instantiating a SAFE client...
🔗 Connected to the Network Preparing (chunking) files at '/home/ubuntu/uploads/MD-11/Fonts/'...
Making payment for 8 Chunks that belong to 2 file/s.
Transfers applied locally
After 18.268217213s, All transfers made for total payment of Token(148) nano tokens.
Successfully made payment of 0.000000148 for 2 records. (At a cost per record of Token(148).)
Successfully stored wallet with cached payment proofs, and new balance 199.999997170.
Successfully paid for storage and generated the proofs. They can now be sent to the storage nodes when uploading paid chunks.
Preparing to store file 'MCDULarge.ttf' of 7824 bytes (4 chunk/s)..
Preparing to store file 'Std7SegCustom.ttf' of 23264 bytes (4 chunk/s)..
Starting to upload chunk #3 from "MCDULarge.ttf". (after 0 seconds elapsed)
Starting to upload chunk #3 from "Std7SegCustom.ttf". (after 0 seconds elapsed)
Starting to upload chunk #0 from "MCDULarge.ttf". (after 1 seconds elapsed)
Starting to upload chunk #1 from "MCDULarge.ttf". (after 2 seconds elapsed)
Starting to upload chunk #2 from "MCDULarge.ttf". (after 2 seconds elapsed)
Starting to upload chunk #0 from "Std7SegCustom.ttf". (after 2 seconds elapsed)
Starting to upload chunk #1 from "Std7SegCustom.ttf". (after 3 seconds elapsed)
Starting to upload chunk #2 from "Std7SegCustom.ttf". (after 4 seconds elapsed)
Uploaded chunk #0 from "MCDULarge.ttf" in 13 seconds)
Uploaded chunk #2 from "Std7SegCustom.ttf" in 10 seconds)
Uploaded chunk #3 from "MCDULarge.ttf" in 15 seconds)
Uploaded chunk #0 from "Std7SegCustom.ttf" in 12 seconds)
Uploaded chunk #2 from "MCDULarge.ttf" in 12 seconds)
Uploaded chunk #1 from "MCDULarge.ttf" in 15 seconds)
Uploaded "MCDULarge.ttf" in 17 seconds
Successfully stored 'MCDULarge.ttf' to 8d281ac160fc4a1c814c3b7997b5aa62db0d719b15bc0afd0c1ecfd61e3a59fb
Uploaded chunk #3 from "Std7SegCustom.ttf" in 16 seconds)
Failed to store all chunks of file 'Std7SegCustom.ttf' to all nodes in the close group: Network Error Could not retrieve the record after storing it: 6de0c7cb36d725cb7851c067dd138f54e9e4145f8714bcf83277325b3ed3b386.
Writing 61 bytes to "/home/ubuntu/.local/share/safe/client/uploaded_files/file_names_2023-09-07_00-41-00"
real 3m49.049s
user 0m9.997s
sys 0m1.378s
Nearly 4 mins for 36 KB…