With the latest testnet available, it's a good time to look at the code and see how things are implemented under the hood.
This is intended as an exploration, so there's plenty of opportunity to improve on it and add to it.
Reward Amount
The reward amount is scaled by the network size and the age of each node (source).
```rust
/// Calculates the reward for a node
/// when it has reached a certain age.
pub async fn reward(&self, age: Age) -> Money {
    let prefix = self.network.our_prefix().await;
    let prefix_len = prefix.bit_count();
    RewardCalc::reward_from(age, prefix_len)
}

fn reward_from(age: Age, prefix_len: usize) -> Money {
    let time = 2_u64.pow(age as u32);
    let nanos = 1_000_000_000;
    let network_size = 2_u64.pow(prefix_len as u32);
    let steepness_reductor = prefix_len as u64 + 1;
    Money::from_nano(time * nanos / network_size * steepness_reductor)
}
```
A minor subtlety: at a glance, the visual style of the last line suggests two terms in the numerator and two in the denominator, but it's actually three in the numerator and one in the denominator. It's clearer when expressed as `time * nanos * steepness_reductor / network_size` (though this is not strictly the same expression due to the way integer division interacts with evaluation order, e.g. `3*4/5*6` is 12 while `3*4*6/5` is 14).
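As a quick sanity check of that evaluation-order difference, using Rust's left-to-right integer arithmetic:

```rust
fn main() {
    // Left-to-right integer evaluation: (3 * 4 / 5) * 6 = 2 * 6 = 12
    let divide_midway = 3 * 4 / 5 * 6;
    // Dividing last keeps more precision: (3 * 4 * 6) / 5 = 72 / 5 = 14
    let divide_last = 3 * 4 * 6 / 5;
    println!("{} vs {}", divide_midway, divide_last); // 12 vs 14
}
```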
Since `network_size` and `steepness_reductor` both depend on the same variable `prefix_len`, it might be clearer as:
```diff
+const NANOS: u64 = 1_000_000_000;
 ...
-let time = 2_u64.pow(age as u32);
-let nanos = 1_000_000_000;
-let network_size = 2_u64.pow(prefix_len as u32);
-let steepness_reductor = prefix_len as u64 + 1;
-Money::from_nano(time * nanos / network_size * steepness_reductor)
+let node_up_time = 2_u64.pow(age as u32);
+let network_size = 2_u64.pow(prefix_len as u32);
+let steepness_reductor = prefix_len as u64 + 1;
+let adjusted_network_size = network_size / steepness_reductor;
+Money::from_nano(NANOS * node_up_time / adjusted_network_size)
```
This gives us the following table for `adjusted_network_size`:
prefix length | network size | steepness reductor | adjusted network size |
---|---|---|---|
0 | 1 | 1 | 1 |
1 | 2 | 2 | 1 |
2 | 4 | 3 | 1 |
3 | 8 | 4 | 2 |
4 | 16 | 5 | 3 |
5 | 32 | 6 | 5 |
6 | 64 | 7 | 9 |
7 | 128 | 8 | 16 |
8 | 256 | 9 | 28 |
9 | 512 | 10 | 51 |
10 | 1024 | 11 | 93 |
11 | 2048 | 12 | 170 |
12 | 4096 | 13 | 315 |
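The table above can be reproduced with a small standalone sketch of the proposed refactor (the function name is illustrative, not from the codebase):

```rust
// Standalone sketch of the proposed adjusted_network_size calculation.
fn adjusted_network_size(prefix_len: u32) -> u64 {
    let network_size = 2_u64.pow(prefix_len);
    let steepness_reductor = prefix_len as u64 + 1;
    // Integer division, matching the diff above.
    network_size / steepness_reductor
}

fn main() {
    for prefix_len in 0..=12 {
        println!("prefix {} -> {}", prefix_len, adjusted_network_size(prefix_len));
    }
}
```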
And the reward amount for various node ages and prefix lengths (values in milli-SNT); e.g. age 3 at prefix length 7 gives a reward of 500 milli-SNT.
prefix length | node age 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
---|---|---|---|---|---|---|---|---|
0 | 1000 | 2000 | 4000 | 8000 | 16000 | 32000 | 64000 | 128000 |
1 | 1000 | 2000 | 4000 | 8000 | 16000 | 32000 | 64000 | 128000 |
2 | 750 | 1500 | 3000 | 6000 | 12000 | 24000 | 48000 | 96000 |
3 | 500 | 1000 | 2000 | 4000 | 8000 | 16000 | 32000 | 64000 |
4 | 312 | 625 | 1250 | 2500 | 5000 | 10000 | 20000 | 40000 |
5 | 187 | 375 | 750 | 1500 | 3000 | 6000 | 12000 | 24000 |
6 | 109 | 218 | 437 | 875 | 1750 | 3500 | 7000 | 14000 |
7 | 62 | 125 | 250 | 500 | 1000 | 2000 | 4000 | 8000 |
8 | 35 | 70 | 140 | 281 | 562 | 1125 | 2250 | 4500 |
9 | 19 | 39 | 78 | 156 | 312 | 625 | 1250 | 2500 |
10 | 10 | 21 | 42 | 85 | 171 | 343 | 687 | 1375 |
11 | 5 | 11 | 23 | 46 | 93 | 187 | 375 | 750 |
12 | 3 | 6 | 12 | 25 | 50 | 101 | 203 | 406 |
13 | 1 | 3 | 6 | 13 | 27 | 54 | 109 | 218 |
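To double-check the reward table, here's a standalone copy of the original expression with `Money` stripped out, returning nanos (1 milli-SNT = 10^6 nanos):

```rust
// Standalone copy of the reward_from expression, returning nanos.
fn reward_nanos(age: u32, prefix_len: u32) -> u64 {
    let time = 2_u64.pow(age);
    let nanos = 1_000_000_000;
    let network_size = 2_u64.pow(prefix_len);
    let steepness_reductor = prefix_len as u64 + 1;
    time * nanos / network_size * steepness_reductor
}

fn main() {
    // age 3, prefix length 7: 8e9 / 128 * 8 nanos = 500 milli-SNT
    println!("{} milli-SNT", reward_nanos(3, 7) / 1_000_000);
}
```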
Reward Event
Rewards are accumulated in the `Rewards` data structure and paid to the node wallet upon relocation. Payment is from the new section wallet, which happens in `activate_node_rewards` (source):
```rust
/// 3. The old section will send back the wallet id, which allows us
/// to activate it.
/// At this point, we payout a standard reward based on the node age,
/// which represents the work performed in its previous section.
async fn activate_node_rewards(
    &mut self,
    wallet: PublicKey,
    node_id: XorName,
) -> Result<NodeMessagingDuty> {
    ...
    // Send the reward counter to the new section.
    // Once received over there, the new section
    // will pay out the accumulated rewards to the wallet.
```
Storecost
It happens here:
```rust
/// Get latest StoreCost for the given number of bytes.
/// Also check for Section storage capacity and report accordingly.
async fn get_store_cost(
    &self,
    bytes: u64,
    msg_id: MessageId,
    origin: Address,
) -> Result<NodeOperation> {
    ...
```
But it's implemented as a rate limit here:
```rust
/// Calculates the rate limit of write operations,
/// as a cost to be paid for a certain number of bytes.
pub async fn from(&self, bytes: u64) -> Money {
    let prefix = self.network.our_prefix().await;
    let prefix_len = prefix.bit_count();
    let section_supply_share = MAX_SUPPLY as f64 / 2_f64.powf(prefix_len as f64);
    let full_nodes = self.capacity.full_nodes();
    let all_nodes = self.network.our_adults().await.len() as u8;
    ...
    let available_nodes = (all_nodes - full_nodes) as f64;
    let supply_demand_factor = 0.001
        + (1_f64 / available_nodes).powf(8_f64)
        + (full_nodes as f64 / all_nodes as f64).powf(88_f64);
    let data_size_factor = (bytes as f64 / MAX_CHUNK_SIZE as f64).powf(2_f64)
        + (bytes as f64 / MAX_CHUNK_SIZE as f64);
    let steepness_reductor = prefix_len as f64 + 1_f64;
    let token_source = steepness_reductor * section_supply_share.powf(0.5_f64);
    let rate_limit = (token_source * data_size_factor * supply_demand_factor).round() as u64;
    Money::from_nano(rate_limit)
}
```
A consideration: maybe using `ceil` instead of `round` would be more suitable? Less risk of floating point discrepancies that way. Or maybe `floor`, since it benefits the uploader and is closer to an integer-style truncation? Something just feels very off about using both `float` and `round` in a money operation. Is there a way to calculate storecost reliably using integers? It's critical for this to be foolproof across all implementations, languages, architectures etc.
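On the integer question: the square root (`powf(0.5)`) is the awkward part, but an integer square root is straightforward to write portably. A hypothetical sketch (not the project's code) using binary search:

```rust
/// Integer square root: the largest r such that r * r <= n.
/// Deterministic across platforms, unlike f64::powf.
fn isqrt(n: u64) -> u64 {
    let mut lo: u64 = 0;
    let mut hi: u64 = 1 << 32; // isqrt of any u64 fits below 2^32
    while lo < hi {
        let mid = (lo + hi + 1) / 2;
        // checked_mul avoids overflow for mid near 2^32
        if mid.checked_mul(mid).map_or(false, |sq| sq <= n) {
            lo = mid;
        } else {
            hi = mid - 1;
        }
    }
    lo
}

fn main() {
    // e.g. a section supply share could take an integer square root like this:
    println!("{}", isqrt(1_000_000_000)); // 31622
}
```

The fractional factors (`supply_demand_factor` etc.) would still need a fixed-point representation to go fully integer, but this removes the platform-dependent `powf(0.5)`.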
The payment is handled in node/elder_duties/key_section/payment/mod.rs.
```rust
/// An Elder in a KeySection is responsible for
/// data payment, and will receive write
/// requests from clients.
/// At Payments, a local request to Transfers module
/// will clear the payment, and thereafter the node forwards
/// the actual write request to a DataSection,
/// which would be a section closest to the data
/// (where it is then handled by Elders with Metadata duties).
```
Section Split
The const `RECOMMENDED_SECTION_SIZE` is currently set to 10, and the const `ELDER_SIZE` is currently set to 5.
```rust
/// Recommended section size. sn_routing will keep adding nodes until the
/// section reaches this size.
/// More nodes might be added if requested by the upper layers.
/// This number also determines when a split happens - if both post-split
/// sections would have at least this number of nodes.
pub const RECOMMENDED_SECTION_SIZE: usize = 10;

/// Number of elders per section.
pub const ELDER_SIZE: usize = 5;
```
This is used in `try_split` to see if two valid sections can be created from splitting the current section (source).
```rust
// Tries to split our section.
// If we have enough mature nodes for both subsections, returns the elders
// infos of the two subsections. Otherwise returns `None`.
fn try_split(&self, our_name: &XorName) -> Option<(EldersInfo, EldersInfo)> {
    ...
```
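The core idea can be sketched independently of sn_routing. This is an illustrative simplification (the names and the way mature nodes are partitioned are assumptions, not the actual implementation): partition the mature nodes by the next bit of their name after our prefix, and split only if both halves are big enough.

```rust
const RECOMMENDED_SECTION_SIZE: usize = 10;

/// Illustrative sketch: `next_bits` holds, for each mature node, the value
/// of the first name bit after our current prefix. A split is viable when
/// both resulting subsections would meet the recommended size.
fn can_split(next_bits: &[bool]) -> bool {
    let ours = next_bits.iter().filter(|&&b| b).count();
    let siblings = next_bits.len() - ours;
    ours >= RECOMMENDED_SECTION_SIZE && siblings >= RECOMMENDED_SECTION_SIZE
}

fn main() {
    // 13 nodes on one side, 12 on the other: both >= 10, so a split is viable.
    let bits: Vec<bool> = (0..25).map(|i| i % 2 == 0).collect();
    println!("{}", can_split(&bits)); // true
}
```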
There's a helper structure `SplitBarrier` in split_barrier.rs for ensuring splits go smoothly.
```rust
/// Helper structure to make sure that during splits, our and the sibling
/// sections are updated consistently.
///
/// # Usage
///
/// Each mutation to be applied to our `Section` or `Network` must pass
/// through this barrier first. Call the corresponding handler
/// (`handle_our_section`, `handle_their_key`) and then call `take`.
/// If it returns `Some` for our and/or sibling section, apply it to the
/// corresponding state, otherwise do nothing.
```
Disallow Rule
Checks if more than 50% of nodes are full (source):
```rust
const MAX_NETWORK_STORAGE_RATIO: f64 = 0.5;
...
pub async fn check_network_storage(&self) -> bool {
    info!("Checking network storage");
    let all_nodes = self.network.our_adults().await.len() as f64;
    let full_nodes = self.capacity.full_nodes() as f64;
    let usage_ratio = full_nodes / all_nodes;
    info!("Total number of adult nodes: {:?}", all_nodes);
    info!("Number of Full adult nodes: {:?}", full_nodes);
    info!("Section storage usage ratio: {:?}", usage_ratio);
    usage_ratio > MAX_NETWORK_STORAGE_RATIO
}
```
It feels like this might be safer and (slightly!) more efficient if done with integers, e.g. the float expression `full_nodes / all_nodes > 1 / 2` has an equivalent integer expression `full_nodes * 2 > all_nodes`. It might also be worth renaming the function to describe the result, e.g. `exceeded_max_storage_ratio`.
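A hedged sketch of what that integer version could look like (using the rename suggested above; not the project's code):

```rust
/// Integer version of the disallow check: true when more than half
/// of the adult nodes are full. Avoids floats entirely, since
/// full / all > 1/2 is equivalent to 2 * full > all.
fn exceeded_max_storage_ratio(full_nodes: u64, all_nodes: u64) -> bool {
    2 * full_nodes > all_nodes
}

fn main() {
    println!("{}", exceeded_max_storage_ratio(6, 11)); // true: 6/11 > 0.5
}
```

Note that the integer form also sidesteps the division-by-zero case when `all_nodes` is 0 (the float version would produce `NaN`, which compares false anyway, but it's nice not to rely on that).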
To Do
I'm still finding the code snippets for:

- Relocations
- Token Transactions
- Data Mutations
- Age Increase
- Anything else?
Versions
I’ll put versions and commit hashes here once the testnet is considered stable.
- bls_dkg
- bls_signature_aggregator
- crdts
- qp2p
- resource_proof
- self_encryption
- sn_api
- sn_client
- sn_data_types
- sn_node
- sn_routing
- sn_transfers
- threshold_crypto
- xor_name
crdts and threshold_crypto are not maidsafe repositories but they’re a big part of the network so I included them here.
sn_node is compiled using musl. Install instructions for musl on linux can be found in this post.
```sh
$ cargo build --release --target x86_64-unknown-linux-musl
```
sn_api does not compile with musl yet so use gcc (more info on the dev forum in sn_api and openssl dependency).