[Offline] Fleming Testnet v6.1 Release - Node Support

I see strange duplications in the log.
Maybe it is supposed to be like this, I don't know.

Two log fragments:
[tokio-runtime-worker] INFO 2021-06-25T11:07:37.552831800+03:00 [src\node\event_mapping\mod.rs:43] Handling RoutingEvent: MessageReceived { content: "00a500..", src: Node(d79bd2(11010111)..), dst: Node(c8384f(11001000)..) }
[tokio-runtime-worker] DEBUG 2021-06-25T11:07:37.555831800+03:00 [src\node\event_mapping\node_msg.rs:26] Handling Node message received event with id c8568e0d..: NodeQuery { query: Chunks { query: Get(Public(c2beaf(11000010)..)), origin: EndUser { xorname: c95bbf(11001001).., socket_id: 495bbf(01001001).. } }, id: c8568e0d.. }
[sn_node] DEBUG 2021-06-25T11:07:37.555831800+03:00 [src\node\node_api\handle.rs:51] Handling NodeDuty: ReadChunk
[tokio-runtime-worker] INFO 2021-06-25T11:07:37.555831800+03:00 [src\node\event_mapping\mod.rs:43] Handling RoutingEvent: MessageReceived { content: "00a500..", src: Node(e2394f(11100010)..), dst: Node(c8384f(11001000)..) }
[tokio-runtime-worker] DEBUG 2021-06-25T11:07:37.556831800+03:00 [src\node\event_mapping\node_msg.rs:26] Handling Node message received event with id c8568e0d..: NodeQuery { query: Chunks { query: Get(Public(c2beaf(11000010)..)), origin: EndUser { xorname: c95bbf(11001001).., socket_id: 495bbf(01001001).. } }, id: c8568e0d.. }
[sn_node] DEBUG 2021-06-25T11:07:37.556831800+03:00 [src\node\node_api\handle.rs:51] Handling NodeDuty: ReadChunk
[tokio-runtime-worker] INFO 2021-06-25T11:07:37.556831800+03:00 [src\node\event_mapping\mod.rs:43] Handling RoutingEvent: MessageReceived { content: "00a500..", src: Node(db272e(11011011)..), dst: Node(c8384f(11001000)..) }
[tokio-runtime-worker] DEBUG 2021-06-25T11:07:37.557831800+03:00 [src\node\event_mapping\node_msg.rs:26] Handling Node message received event with id c8568e0d..: NodeQuery { query: Chunks { query: Get(Public(c2beaf(11000010)..)), origin: EndUser { xorname: c95bbf(11001001).., socket_id: 495bbf(01001001).. } }, id: c8568e0d.. }
[sn_node] DEBUG 2021-06-25T11:07:37.557831800+03:00 [src\node\node_api\handle.rs:51] Handling NodeDuty: ReadChunk
[tokio-runtime-worker] INFO 2021-06-25T11:07:37.559831800+03:00 [src\node\chunks\mod.rs:74] Checking used storage
[tokio-runtime-worker] INFO 2021-06-25T11:07:37.559831800+03:00 [src\node\chunks\mod.rs:74] Checking used storage
[tokio-runtime-worker] INFO 2021-06-25T11:07:37.559831800+03:00 [src\node\data_store\mod.rs:153] Used space: 149158642
[tokio-runtime-worker] INFO 2021-06-25T11:07:37.559831800+03:00 [src\node\data_store\mod.rs:154] Total space: 50000000000
[tokio-runtime-worker] INFO 2021-06-25T11:07:37.559831800+03:00 [src\node\data_store\mod.rs:155] Used space ratio: 0.00298317284
[tokio-runtime-worker] INFO 2021-06-25T11:07:37.559831800+03:00 [src\node\chunks\mod.rs:74] Checking used storage
[tokio-runtime-worker] INFO 2021-06-25T11:07:37.559831800+03:00 [src\node\data_store\mod.rs:153] Used space: 149158642
[tokio-runtime-worker] INFO 2021-06-25T11:07:37.559831800+03:00 [src\node\data_store\mod.rs:154] Total space: 50000000000
[tokio-runtime-worker] INFO 2021-06-25T11:07:37.559831800+03:00 [src\node\data_store\mod.rs:155] Used space ratio: 0.00298317284
[tokio-runtime-worker] INFO 2021-06-25T11:07:37.559831800+03:00 [src\node\data_store\mod.rs:153] Used space: 149158642
[tokio-runtime-worker] INFO 2021-06-25T11:07:37.559831800+03:00 [src\node\data_store\mod.rs:154] Total space: 50000000000
[tokio-runtime-worker] INFO 2021-06-25T11:07:37.559831800+03:00 [src\node\data_store\mod.rs:155] Used space ratio: 0.00298317284
[sn_node] DEBUG 2021-06-25T11:07:37.559831800+03:00 [src\node\node_api\handle.rs:51] Handling NodeDuty: Send [ msg: OutgoingMsg { msg: Node(NodeQueryResponse { response: Data(GetChunk(Ok(Public(PublicChunk c2beaf(11000010)..)))), id: e43baed7.., correlation_id: c8568e0d.. }), dst: Section(c2beaf(11000010)..), section_source: false, aggregation: None } ]
[sn_node] DEBUG 2021-06-25T11:07:37.559831800+03:00 [src\node\node_api\handle.rs:51] Handling NodeDuty: Send [ msg: OutgoingMsg { msg: Node(NodeQueryResponse { response: Data(GetChunk(Ok(Public(PublicChunk c2beaf(11000010)..)))), id: e43baed7.., correlation_id: c8568e0d.. }), dst: Section(c2beaf(11000010)..), section_source: false, aggregation: None } ]
[sn_node] DEBUG 2021-06-25T11:07:37.559831800+03:00 [src\node\node_api\handle.rs:51] Handling NodeDuty: Send [ msg: OutgoingMsg { msg: Node(NodeQueryResponse { response: Data(GetChunk(Ok(Public(PublicChunk c2beaf(11000010)..)))), id: e43baed7.., correlation_id: c8568e0d.. }), dst: Section(c2beaf(11000010)..), section_source: false, aggregation: None } ] 
[tokio-runtime-worker] INFO 2021-06-25T11:31:38.958326600+03:00 [src\node\event_mapping\mod.rs:43] Handling RoutingEvent: MessageReceived { content: "00a500..", src: Node(db272e(11011011)..), dst: Node(c8384f(11001000)..) }
[tokio-runtime-worker] DEBUG 2021-06-25T11:31:38.958326600+03:00 [src\node\event_mapping\node_msg.rs:26] Handling Node message received event with id c645531c..: NodeQuery { query: Chunks { query: Get(Public(c0189e(11000000)..)), origin: EndUser { xorname: ea74c0(11101010).., socket_id: 6a74c0(01101010).. } }, id: c645531c.. }
[sn_node] DEBUG 2021-06-25T11:31:38.958326600+03:00 [src\node\node_api\handle.rs:51] Handling NodeDuty: ReadChunk
[tokio-runtime-worker] INFO 2021-06-25T11:31:38.959326800+03:00 [src\node\event_mapping\mod.rs:43] Handling RoutingEvent: MessageReceived { content: "00a500..", src: Node(db272e(11011011)..), dst: Node(c8384f(11001000)..) }
[tokio-runtime-worker] DEBUG 2021-06-25T11:31:38.960327+03:00 [src\node\event_mapping\node_msg.rs:26] Handling Node message received event with id 76dcd5a4..: NodeQuery { query: Chunks { query: Get(Public(dd9e0b(11011101)..)), origin: EndUser { xorname: ea74c0(11101010).., socket_id: 6a74c0(01101010).. } }, id: 76dcd5a4.. }
[sn_node] DEBUG 2021-06-25T11:31:38.960327+03:00 [src\node\node_api\handle.rs:51] Handling NodeDuty: ReadChunk
[tokio-runtime-worker] INFO 2021-06-25T11:31:38.960327+03:00 [src\node\event_mapping\mod.rs:43] Handling RoutingEvent: MessageReceived { content: "00a500..", src: Node(fd2b98(11111101)..), dst: Node(c8384f(11001000)..) }
[tokio-runtime-worker] DEBUG 2021-06-25T11:31:38.961327200+03:00 [src\node\event_mapping\node_msg.rs:26] Handling Node message received event with id 76dcd5a4..: NodeQuery { query: Chunks { query: Get(Public(dd9e0b(11011101)..)), origin: EndUser { xorname: ea74c0(11101010).., socket_id: 6a74c0(01101010).. } }, id: 76dcd5a4.. }
[sn_node] DEBUG 2021-06-25T11:31:38.961327200+03:00 [src\node\node_api\handle.rs:51] Handling NodeDuty: ReadChunk
[tokio-runtime-worker] INFO 2021-06-25T11:31:38.961327200+03:00 [src\node\event_mapping\mod.rs:43] Handling RoutingEvent: MessageReceived { content: "00a500..", src: Node(e2394f(11100010)..), dst: Node(c8384f(11001000)..) }
[tokio-runtime-worker] DEBUG 2021-06-25T11:31:38.961327200+03:00 [src\node\event_mapping\node_msg.rs:26] Handling Node message received event with id c645531c..: NodeQuery { query: Chunks { query: Get(Public(c0189e(11000000)..)), origin: EndUser { xorname: ea74c0(11101010).., socket_id: 6a74c0(01101010).. } }, id: c645531c.. }
[sn_node] DEBUG 2021-06-25T11:31:38.961327200+03:00 [src\node\node_api\handle.rs:51] Handling NodeDuty: ReadChunk
[tokio-runtime-worker] INFO 2021-06-25T11:31:38.966328200+03:00 [src\node\chunks\mod.rs:74] Checking used storage
[tokio-runtime-worker] INFO 2021-06-25T11:31:38.966328200+03:00 [src\node\chunks\mod.rs:74] Checking used storage
[tokio-runtime-worker] INFO 2021-06-25T11:31:38.966328200+03:00 [src\node\data_store\mod.rs:153] Used space: 152304454
[tokio-runtime-worker] INFO 2021-06-25T11:31:38.966328200+03:00 [src\node\data_store\mod.rs:153] Used space: 152304454
[tokio-runtime-worker] INFO 2021-06-25T11:31:38.966328200+03:00 [src\node\data_store\mod.rs:154] Total space: 50000000000
[tokio-runtime-worker] INFO 2021-06-25T11:31:38.966328200+03:00 [src\node\data_store\mod.rs:154] Total space: 50000000000
[tokio-runtime-worker] INFO 2021-06-25T11:31:38.966328200+03:00 [src\node\data_store\mod.rs:155] Used space ratio: 0.00304608908
[tokio-runtime-worker] INFO 2021-06-25T11:31:38.966328200+03:00 [src\node\data_store\mod.rs:155] Used space ratio: 0.00304608908
[sn_node] DEBUG 2021-06-25T11:31:38.966328200+03:00 [src\node\node_api\handle.rs:51] Handling NodeDuty: Send [ msg: OutgoingMsg { msg: Node(NodeQueryResponse { response: Data(GetChunk(Ok(Public(PublicChunk c0189e(11000000)..)))), id: a8df1e2e.., correlation_id: c645531c.. }), dst: Section(c0189e(11000000)..), section_source: false, aggregation: None } ]
[sn_node] DEBUG 2021-06-25T11:31:38.966328200+03:00 [src\node\node_api\handle.rs:51] Handling NodeDuty: Send [ msg: OutgoingMsg { msg: Node(NodeQueryResponse { response: Data(GetChunk(Ok(Public(PublicChunk c0189e(11000000)..)))), id: a8df1e2e.., correlation_id: c645531c.. }), dst: Section(c0189e(11000000)..), section_source: false, aggregation: None } ]
[tokio-runtime-worker] INFO 2021-06-25T11:31:38.966328200+03:00 [src\node\chunks\mod.rs:74] Checking used storage
[tokio-runtime-worker] INFO 2021-06-25T11:31:38.966328200+03:00 [src\node\data_store\mod.rs:153] Used space: 152304454
[tokio-runtime-worker] INFO 2021-06-25T11:31:38.966328200+03:00 [src\node\data_store\mod.rs:154] Total space: 50000000000
[tokio-runtime-worker] INFO 2021-06-25T11:31:38.966328200+03:00 [src\node\data_store\mod.rs:155] Used space ratio: 0.00304608908
[sn_node] DEBUG 2021-06-25T11:31:38.967328400+03:00 [src\node\node_api\handle.rs:51] Handling NodeDuty: Send [ msg: OutgoingMsg { msg: Node(NodeQueryResponse { response: Data(GetChunk(Ok(Public(PublicChunk dd9e0b(11011101)..)))), id: b2428aa1.., correlation_id: 76dcd5a4.. }), dst: Section(dd9e0b(11011101)..), section_source: false, aggregation: None } ]
[tokio-runtime-worker] INFO 2021-06-25T11:31:38.967328400+03:00 [src\node\chunks\mod.rs:74] Checking used storage
[tokio-runtime-worker] INFO 2021-06-25T11:31:38.967328400+03:00 [src\node\data_store\mod.rs:153] Used space: 152304454
[tokio-runtime-worker] INFO 2021-06-25T11:31:38.967328400+03:00 [src\node\data_store\mod.rs:154] Total space: 50000000000
[tokio-runtime-worker] INFO 2021-06-25T11:31:38.967328400+03:00 [src\node\data_store\mod.rs:155] Used space ratio: 0.00304608908
[sn_node] DEBUG 2021-06-25T11:31:38.967328400+03:00 [src\node\node_api\handle.rs:51] Handling NodeDuty: Send [ msg: OutgoingMsg { msg: Node(NodeQueryResponse { response: Data(GetChunk(Ok(Public(PublicChunk dd9e0b(11011101)..)))), id: b2428aa1.., correlation_id: 76dcd5a4.. }), dst: Section(dd9e0b(11011101)..), section_source: false, aggregation: None } ]

So the same request arrives via several nodes.
And then several responses are sent.

I doubt that the requester needs 3 copies of the same chunk from the same node.

1 Like

So what, guys. Another 10 testnets?

I doubt that, but you can never tell. It seems @lionel.faber may have found this bug, but that's being tested now.

3 Likes

OK, the issue seems to be us doing too much: trying to return data errors to help clients. That extra work has a bug, and we are of the opinion we should not be doing it at all. So a possible fix is incoming. A 6.2? I doubt it, but perhaps.

9 Likes

3 of the 7 Elders are queried for the data, so what you are seeing is expected. To prevent a single point of failure we query more than one Elder, and so your Adult node will get 3 queries for the same piece of data.
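For illustration only, here is a minimal sketch of why those duplicates are harmless on the receiving side (the type and names are my assumptions, not the actual sn_node code): whoever collects the query responses can keep the first one per correlation id and drop the redundant copies.

use std::collections::HashSet;

// Hypothetical stand-in for the real message id type.
type MsgId = u64;

/// Keeps only the first response per correlation id; redundant
/// copies relayed via other Elders are ignored.
struct ResponseDedup {
    seen: HashSet<MsgId>,
}

impl ResponseDedup {
    fn new() -> Self {
        ResponseDedup { seen: HashSet::new() }
    }

    fn accept(&mut self, correlation_id: MsgId) -> bool {
        // HashSet::insert returns false when the id was already present.
        self.seen.insert(correlation_id)
    }
}

fn main() {
    let mut dedup = ResponseDedup::new();
    // The same query relayed by 3 Elders yields 3 identical responses.
    assert!(dedup.accept(0xc8568e0d));  // first copy: use it
    assert!(!dedup.accept(0xc8568e0d)); // duplicate: drop
    assert!(!dedup.accept(0xc8568e0d)); // duplicate: drop
}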

10 Likes

There's nothing wrong with another 10 testnets. The devs appear to be learning and improving each time, and the community is providing good feedback. Keep at it!

10 Likes

At 13:25 my PC shut down, probably because of a power failure.
I decided to try to join as a node again.

There was a chance of rejoining with the same ID, so I did not remove any files.
The join with the same ID failed, but then strange things happened.

At 15:20 the node joined as 910cf0.
And it looks like it started to upload all of its stored chunks:

...
[tokio-runtime-worker] INFO 2021-06-25T15:20:50.275519800+03:00 [src\node\node_api\role\adult_role.rs:82] Republishing chunk at Private(deae08(11011110)..)
[tokio-runtime-worker] INFO 2021-06-25T15:20:50.310519800+03:00 [src\node\node_api\role\adult_role.rs:82] Republishing chunk at Private(debfd7(11011110)..)
[tokio-runtime-worker] INFO 2021-06-25T15:20:50.342523600+03:00 [src\node\node_api\role\adult_role.rs:82] Republishing chunk at Public(8662f8(10000110)..)
[tokio-runtime-worker] INFO 2021-06-25T15:20:50.351525400+03:00 [src\node\node_api\role\adult_role.rs:82] Republishing chunk at Public(86e107(10000110)..)
[tokio-runtime-worker] INFO 2021-06-25T15:20:50.351525400+03:00 [src\node\node_api\role\adult_role.rs:82] Republishing chunk at Public(87c840(10000111)..)
[tokio-runtime-worker] INFO 2021-06-25T15:20:50.361527400+03:00 [src\node\node_api\role\adult_role.rs:82] Republishing chunk at Public(9bec28(10011011)..)
[tokio-runtime-worker] INFO 2021-06-25T15:20:50.369529+03:00 [src\node\node_api\role\adult_role.rs:82] Republishing chunk at Public(c0189e(11000000)..)
[tokio-runtime-worker] INFO 2021-06-25T15:20:50.433541800+03:00 [src\node\node_api\role\adult_role.rs:82] Republishing chunk at Public(c022dc(11000000)..)
[tokio-runtime-worker] INFO 2021-06-25T15:20:50.486552400+03:00 [src\node\node_api\role\adult_role.rs:82] Republishing chunk at Public(c02512(11000000)..)
[tokio-runtime-worker] INFO 2021-06-25T15:20:50.533561800+03:00 [src\node\node_api\role\adult_role.rs:82] Republishing chunk at Public(c02a35(11000000)..)
[tokio-runtime-worker] INFO 2021-06-25T15:20:50.575570200+03:00 [src\node\node_api\role\adult_role.rs:82] Republishing chunk at Public(c04e0d(11000000)..)
...

It even decided to send a chunk to itself!
[sn_node] DEBUG 2021-06-25T15:21:08.501519800+03:00 [src\node\node_api\handle.rs:51] Handling NodeDuty: SendToNodes [ msg: NodeCmd { cmd: System(ReplicateChunk(Public(PublicChunk 8662f8(10000110)..))), id: 5c623bff.. }, targets: {8cac07(10001100).., 910cf0(10010001).., 92d626(10010010).., 954a58(10010101)..}, aggregation: None ]
I thought it would delete the chunk and recreate it, but no:
[tokio-runtime-worker] INFO 2021-06-25T15:21:17.885032+03:00 [src\node\chunks\chunk_storage.rs:90] ChunkStorage: Immutable chunk already exists, not storing: Public(8662f8(10000110)..)
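If messaging yourself is really unintended, one obvious guard would be to drop the node's own name from the target set before fanning out. A purely hypothetical sketch (made-up names and a stand-in XorName type, not the actual sn_node code):

use std::collections::BTreeSet;

// Hypothetical stand-in for the real XorName type.
type XorName = [u8; 32];

/// Returns the replication targets minus ourselves, so a node never
/// sends a ReplicateChunk message to itself.
fn replication_targets(holders: &BTreeSet<XorName>, our_name: &XorName) -> BTreeSet<XorName> {
    holders.iter().filter(|name| *name != our_name).cloned().collect()
}

fn main() {
    let us: XorName = [0x91; 32];
    let other: XorName = [0x8c; 32];
    let holders: BTreeSet<XorName> = [us, other].into_iter().collect();
    let targets = replication_targets(&holders, &us);
    assert!(!targets.contains(&us)); // we are no longer one of the targets
}

That said, the "already exists, not storing" line above shows the store is idempotent, so the self-send looks wasteful rather than harmful.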
Shortly after this, at 15:22, connectivity problems started to appear:

[tokio-runtime-worker] WARN 2021-06-25T15:22:08.552719600+03:00 [C:\Users\runneradmin\.cargo\registry\src\github.com-1ecc6299db9ec823\qp2p-0.12.4\src\connections.rs:221] Failed to read incoming message on uni-stream for peer 178.128.172.1:54636 with error: TimedOut
[tokio-runtime-worker] WARN 2021-06-25T15:22:08.552719600+03:00 [C:\Users\runneradmin\.cargo\registry\src\github.com-1ecc6299db9ec823\qp2p-0.12.4\src\connections.rs:261] Failed to read incoming message on bi-stream for peer 178.128.172.1:54636 with error: TimedOut
[tokio-runtime-worker] DEBUG 2021-06-25T15:22:08.552719600+03:00 [src\routing\core\connectivity.rs:23] Possible connection loss detected with known peer Peer { name: db272e(11011011).., addr: 178.128.172.1:54636, reachable: true } 
[tokio-runtime-worker] DEBUG 2021-06-25T15:23:08.646919400+03:00 [src\routing\core\connectivity.rs:40] Lost known peer db272e.. at 178.128.172.1:54636 
[tokio-runtime-worker] DEBUG 2021-06-25T15:23:08.647919400+03:00 [src\routing\core\connectivity.rs:40] Lost known peer db272e.. at 178.128.172.1:54636 
[tokio-runtime-worker] DEBUG 2021-06-25T15:23:08.648919400+03:00 [src\routing\core\connectivity.rs:40] Lost known peer db272e.. at 178.128.172.1:54636 
[tokio-runtime-worker] DEBUG 2021-06-25T15:23:08.648919400+03:00 [src\routing\core\connectivity.rs:40] Lost known peer db272e.. at 178.128.172.1:54636 
[tokio-runtime-worker] DEBUG 2021-06-25T15:23:08.648919400+03:00 [src\routing\core\connectivity.rs:40] Lost known peer db272e.. at 178.128.172.1:54636 

Despite such problems, the node was still functioning (15:23):
[sn_node] DEBUG 2021-06-25T15:23:11.686119400+03:00 [src\node\node_api\handle.rs:51] Handling NodeDuty: ReplicateChunk
But then more "Lost known peer" errors started to appear:

[tokio-runtime-worker] DEBUG 2021-06-25T15:23:23.220719400+03:00 [src\routing\core\connectivity.rs:40] Lost known peer d79bd2.. at 178.128.169.33:46106 
[tokio-runtime-worker] DEBUG 2021-06-25T15:23:23.221719400+03:00 [src\routing\core\connectivity.rs:40] Lost known peer e2394f.. at 178.128.169.227:54173 
[tokio-runtime-worker] DEBUG 2021-06-25T15:23:23.221719400+03:00 [src\routing\core\connectivity.rs:40] Lost known peer e2394f.. at 178.128.169.227:54173 
[tokio-runtime-worker] DEBUG 2021-06-25T15:23:23.221719400+03:00 [src\routing\core\connectivity.rs:40] Lost known peer db272e.. at 178.128.172.1:54636 

And after ~1 megabyte of such messages, nothing more appears now.

I suspect that these errors may not be related to real connectivity problems; rather, the network may have reacted wrongly to my node's activity. Other software was working without any problems at that time.

2 Likes

I was able to store a 20 MB file in 37 s yesterday (although I can't retrieve it today). That's not too bad, at around 5 Mbps. Assuming cutting out the piecemeal payments helps cut down on the messages and speeds up uploads by some amount, it is getting close to sufficiently usable, speed-wise.

x@sto:~$ time safe files put TinyCore-12.0.iso

FilesContainer created at: "safe://hyryyrbjgg9os8gwe8jtwxrcspiwt437geb3toxsxnbxrkmx7b9574sfsbynra"
+  TinyCore-12.0.iso  safe://hyfeynyeh7bp7ghew4xysnkxh9pya9ndixqpa4njbqopsyamuuru6sxi8ka

real    0m37.922s
user    0m9.851s
sys     0m1.658s
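As a sanity check on that figure: 20 MB × 8 bits/byte = 160 Mb, and 160 Mb / 37.9 s ≈ 4.2 Mbps, so "around 5 Mbps" is indeed the right ballpark.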
8 Likes

@davidpbrown thanks, I created a GitHub issue for this.

https://github.com/maidsafe/sn_node/issues/1598

8 Likes

Somewhat related: on two occasions when starting a --first node in DO, it would start logging (in the terminal) at such a pace that I was not able to read the messages, and they were not saved anywhere.
It only stopped once I killed the node.

2 Likes

The next time you try, append

| tee DO_sn_node.log

to your launch command. Then you can inspect DO_sn_node.log at your leisure. You can call your log file anything you like, of course. The '|' symbol can be found (on a UK keyboard) above the backslash, to the left of 'z'.

The | is known as the pipe symbol because it 'pipes' the output of one command into the input of another.
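For example, assuming the node is launched roughly as in the posts above (substitute whatever flags you actually use):

./sn_node --first 2>&1 | tee DO_sn_node.log

The 2>&1 redirects stderr into stdout first, so warnings and errors end up in the log file as well as on screen.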

2 Likes

Developers, Developers, Developers, Developers, Developers, Developers, Developers, Developers, Developers, Developers, Developers, Developers, Developers, Developers, Developers, Developers, Developers, Developers, Developers…

What are the minimum computer specifications required to install a node?

2 Likes

We keep that low, so a raspi is enough, but any win/osx/linux box should be fine.

3 Likes

Is there any update? :eyes:

1 Like

That’s 6.1 taken offline just now.

4 Likes

The latest testnet is offline now but you can run your own local network. Shout if you need help with this. It will let you gain familiarity with the concepts and commands.

safe node run-baby-fleming

will get you started. Use the help flags for each command, and read https://github.com/maidsafe/sn_cli#readme for starters.
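For example, a minimal local session might look something like this (exact steps can vary by version, and the file name is just a placeholder):

safe node run-baby-fleming
safe files put ./hello.txt
safe node run-baby-fleming --help

The last line shows the built-in help; the same --help flag works on every subcommand.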

7 Likes

So the public test is not currently running! Is there already an understanding of, or a plan for, the roadmap timeline, i.e. when will it be? And is MaidSafe an open-source project, something like Gitcoin?

2 Likes

It’s more like there are regular public testnets at the moment. It’s version 6.1 there in the title, and version 1 was roughly two and a half months ago. So they’re coming thick and fast, basically.

I had a very quick look at Gitcoin - one big difference, to get you started, would be that the Safe Network doesn't have a blockchain at all.

5 Likes

It is 100% open source. This project has learned never to give target dates but the roadmap is pretty much agreed. I will let others point you to the latest version of the roadmap – which may well be out of date.
This is a very “different” project - the entire team - except for one admin heroine - are engineers/coders, so quite often the documentation is not 100% up to date. Staying on top of this forum is the best way to get the current state of play.
AIUI we can expect a testnet 7 in approximately 4-6 weeks, as the lessons learned from the recently completed testnet 6.x runs are absorbed, bugs identified and crushed, and planned enhancements put in place. It may be that the community will attempt to launch its own testnet during this time, but AFAIAA there have been no discussions on that as yet. It's early days... Certainly there have been attempts to do this in the past. This is not out of any frustration with the company devs; it is simply the community wanting to validate previous work and learn as much as possible themselves.

10 Likes