Yup and mine’s a cloud instance - no NAT
ChatGPT now accepts uploads and can perform data analysis.
I uploaded a single log file with the most basic of prompts asking it to analyse the log and return a graph to visualize its findings.
Less than a minute later it spat this out.
Obviously a better prompt and a larger data set will deliver better results, but the potential to get more out of logs for novices like me who don't have mad skills like Shu is great.
"The above analysis has provided the following insights:
Distribution of Log Levels: The majority of log entries are of the TRACE and DEBUG level, indicating that the system is mostly logging detailed information for debugging purposes. There are no ERROR or WARN log entries in the data provided, which is a good sign as it indicates the absence of critical issues or warnings.
Frequency of Different Actions: The most frequent action in the logs is identify, which appears to be related to network identification processes. Other actions, such as client_req, HandleTimeout, and CmdError, occur less frequently.
Distribution of Events Over Time: The logs are evenly distributed across different hours of the day, suggesting that the system is active 24/7."
I saw such message when my client had 320 TCP connections.
I wonder if it means that most of the connected nodes just did not respond.
Is that via a plugin? I’m not seeing any option to upload anything? (I’m a subscriber).
Yes you need to go to settings, beta features and activate it. Then select it as you would the browsing feature when starting a new chat.
Beware that you can’t assume the output is accurate, only that it looks plausible.
Always. I wonder, though: when feeding it specific data and detailed prompts, is accuracy likely to be better?
Yeah, I already have plugins activated.
I was actually using the ChatOCR plugin last night to extract text from images and the results were amazing. 100% accuracy so far. I can’t see any option for file upload though.
Could you DM me a screenshot please? I’m conscious of derailing this thread any further.
I don’t know enough to know when it is more likely to be accurate and when not, and I’m not sure anyone does.
It seems least reliable when it is faced with things that are not simple matches, which makes sense. So maths and logic are prone to real stupidity. Also, it seems unable to say “I don’t know” unless that is explicitly configured, so it makes stuff up instead.
At least with that graph you produced it is trivial to check by counting the occurrences of those keywords.
So I’d try to take a cautious approach and not ask it to do everything. I’d ask it to do things which are time consuming for me, such as write a script to reformat the data, or to do some analysis and output a CSV which I can then plot.
But with the CSV I could also check that what it produced is accurate before acting on the output.
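That "check it yourself" step is easy to script. Here is a minimal sketch that counts log-level keywords per line so you can compare the totals against whatever chart ChatGPT draws; the level names and the idea that each line carries one level keyword are assumptions, so adjust for your actual log format.

```python
# Rough sketch: count log-level keywords per line to sanity-check an
# AI-generated chart. LEVELS is an assumption -- adjust for your logs.
from collections import Counter

LEVELS = ("TRACE", "DEBUG", "INFO", "WARN", "ERROR")

def count_levels(path):
    counts = Counter()
    with open(path) as f:
        for line in f:
            for level in LEVELS:
                if level in line:
                    counts[level] += 1
                    break  # count each line once, at its first matching level
    return counts
```

Run it over the same file you uploaded; the totals should match the bars in the generated graph.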
To be fair, I don’t think we really need to worry too much about accuracy with this. It’s just a nice little tool to give you an idea of what’s happening. We wouldn’t base any decisions on this analysis.
Super late to the party… haven’t had time for the past 2 weeks to check back on here.
Nice to see another surprise testnet, will try to kick start a safenode now, and see what happens
oom-kill has been killing nodes again. I’m down to 28 of the 50 I started with 1 day and 7 hours ago on this 4 GB instance.
The RAM used by the nodes is going up. Is this a memory leak, or just a result of normal running now? Does RAM go up during churn? If so, could we get a cascading effect: machines run out of memory, nodes get OOM-killed, the resulting churn pushes RAM up on other machines, and more nodes get killed in turn?
The RAM usage for nodes is not uniform. It’s definitely up from near the start when it was about 24MB but is now spread from 53MB to 153MB.
This is the output of top sorted by RES:-
```
top - 04:42:51 up 1 day, 7:43, 2 users, load average: 0.08, 0.19, 0.44
Tasks: 162 total, 1 running, 161 sleeping, 0 stopped, 0 zombie
%Cpu(s): 2.6 us, 0.4 sy, 0.0 ni, 96.1 id, 0.0 wa, 0.0 hi, 0.9 si, 0.0 st
MiB Mem : 3837.0 total, 78.5 free, 3405.4 used, 353.0 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 236.1 avail Mem

  PID USER   PR NI   VIRT    RES  SHR S %CPU %MEM    TIME+ COMMAND
 2729 ubuntu 20  0 168616 153448 4436 S  1.7  3.9 14:43.51 safenode
 2800 ubuntu 20  0 163440 148664 4660 S  0.0  3.8 25:44.77 safenode
 2460 ubuntu 20  0 176568 148036 4956 S  1.7  3.8 13:55.30 safenode
 2636 ubuntu 20  0 166320 146720 5496 S  0.0  3.7 25:29.29 safenode
 2667 ubuntu 20  0 182168 142828 4824 S  1.7  3.6 30:07.80 safenode
 2644 ubuntu 20  0 157040 142200 4488 S  0.0  3.6 10:08.98 safenode
 2713 ubuntu 20  0 156672 141916 5256 S  0.0  3.6  8:53.34 safenode
 2476 ubuntu 20  0 157440 141264 4520 S  0.9  3.6 20:40.68 safenode
 2784 ubuntu 20  0 151524 138220 5956 S  0.0  3.5  8:52.44 safenode
 2532 ubuntu 20  0 172864 137984 5060 S  0.9  3.5 28:29.10 safenode
 2792 ubuntu 20  0 151584 137076 4880 S  0.0  3.5 24:33.66 safenode
 2768 ubuntu 20  0 150652 135616 4964 S  2.6  3.5 22:02.76 safenode
 2706 ubuntu 20  0 165864 133676 5864 S  0.0  3.4 25:43.16 safenode
 2629 ubuntu 20  0 139820 126012 5464 S  0.0  3.2 26:00.79 safenode
 2485 ubuntu 20  0 139264 123560 4512 S  0.0  3.1 15:18.79 safenode
 2760 ubuntu 20  0 138296 122580 4912 S  0.0  3.1 30:48.04 safenode
 2591 ubuntu 20  0 164424 119928 4072 S  0.9  3.1 28:08.97 safenode
 2468 ubuntu 20  0 131132 117168 5700 S  0.9  3.0 28:24.27 safenode
 2599 ubuntu 20  0 133828 117044 4592 S  0.0  3.0 12:26.01 safenode
 2525 ubuntu 20  0 130444 111600 5664 S  0.0  2.8 36:25.38 safenode
 2807 ubuntu 20  0 126144 108916 4436 S  0.9  2.8 31:38.29 safenode
 2829 ubuntu 20  0 116248 102256 5252 S  0.0  2.6 30:51.76 safenode
 2607 ubuntu 20  0 116244 100796 4460 S  0.0  2.6 26:18.20 safenode
 2777 ubuntu 20  0 146244  98540 5516 S  0.0  2.5 31:57.35 safenode
 2821 ubuntu 20  0 144812  70520 5208 S  0.0  1.8 35:41.92 safenode
 2675 ubuntu 20  0 108764  67696 4564 S  0.0  1.7 28:39.87 safenode
 2540 ubuntu 20  0 129772  55696 4600 S  0.0  1.4 31:42.32 safenode
 2517 ubuntu 20  0  97392  53880 5244 S  0.0  1.4 41:53.44 safenode
```
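Eyeballing that spread is tedious, so here is a small sketch that summarizes the RES column for safenode processes. It assumes the column layout shown above (RES is column 6, in KiB, with no `g`/`m` suffixes), which holds for `top -b` output like this.

```python
# Sketch: summarize resident memory (RES, KiB) for safenode processes
# from top's batch-mode process lines. Column layout is assumed to match
# the output pasted above.
def res_stats(top_lines):
    res_kib = []
    for line in top_lines:
        parts = line.split()
        if len(parts) >= 12 and parts[-1] == "safenode":
            res_kib.append(int(parts[5]))  # RES column
    if not res_kib:
        return None
    mb = [r / 1024 for r in res_kib]
    return {
        "count": len(mb),
        "min_mb": min(mb),
        "max_mb": max(mb),
        "mean_mb": sum(mb) / len(mb),
    }
```

Feed it the lines from `top -b -n 1 -o RES` (or the paste above) and it reports the count plus min/max/mean RES in MB.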
Charts are needed!
But my guess is “leak”.
I previously saw 60 MB RAM for my node during periods of inactivity (low CPU and network usage), but now the node is at 160 MB RAM under the same conditions.
(The record count is even lower than before: 1007. I didn't know that record deletion was implemented.)
I forgot that logs have resource stats
Here is what RAM usage for my node looks like.
I didn't make a specific tool for this visualization (I use grep and regex101), so the results are a little rough. But I'm sharing the data in case anyone wants to analyze it better.
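The grep-and-regex step can be scripted too. Below is a hedged sketch that pulls memory readings out of a log and writes a CSV for plotting; the exact log-line format is an assumption (I'm guessing at something like `[timestamp] ... memory_used_mb: 160`), so adjust the regex to match your actual safenode logs.

```python
# Sketch: extract memory readings from log lines into a CSV for plotting.
# The line format matched by PATTERN is an assumption -- adapt it to the
# real safenode resource-stats lines.
import csv
import re

PATTERN = re.compile(
    r"\[(?P<ts>[^\]]+)\].*memory_used_mb[\":= ]+(?P<mb>\d+(?:\.\d+)?)"
)

def extract_memory(log_lines):
    rows = []
    for line in log_lines:
        m = PATTERN.search(line)
        if m:
            rows.append((m.group("ts"), float(m.group("mb"))))
    return rows

def write_csv(rows, path):
    with open(path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["timestamp", "memory_used_mb"])
        w.writerows(rows)
```

The resulting CSV drops straight into a spreadsheet or gnuplot for charting.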
sn_metr_2023_07_09.zip (1.2 MB)

By the way, it looks like the memory stats in the logs for safenode are kinda wrong.
It looks like memory_used_mb shows the Working Set value, when “in reality” the process uses the Private Bytes amount of RAM.
I think logging Private Bytes makes more sense. In that case the data will not depend on the swapping (paging) activity of the OS. As it is now, if I open a memory-hungry application, the OS will take RAM from the Safe Node's working set and give it to the new application's. Why show such events on charts?
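On Linux you can see both numbers for a process yourself: RSS is roughly the "Working Set", and Private_Clean + Private_Dirty from `/proc/<pid>/smaps_rollup` approximates Private Bytes. A small sketch, assuming the kernel's smaps_rollup field names:

```python
# Sketch: compare RSS (~Working Set) with private bytes
# (Private_Clean + Private_Dirty) from /proc/<pid>/smaps_rollup on Linux.
def parse_smaps_rollup(text):
    fields = {}
    for line in text.splitlines():
        # Value lines look like "Rss:  160000 kB"; skip the header line.
        if ":" in line and line.strip().endswith("kB"):
            key, rest = line.split(":", 1)
            fields[key.strip()] = int(rest.split()[0])  # value in kB
    private = fields.get("Private_Clean", 0) + fields.get("Private_Dirty", 0)
    return {"rss_kb": fields.get("Rss", 0), "private_kb": private}

def measure(pid):
    with open(f"/proc/{pid}/smaps_rollup") as f:
        return parse_smaps_rollup(f.read())
```

Running `measure(pid)` for a safenode PID would show how far apart the two measures are on a given box.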
Nice work again! You are the graph king!
I think you could be right about a memory leak then, but I think it's triggered by storing activity. I started an upload of 1000x1KB files about 6 hours ago and an upload of 100x1MB files about 4 hours ago, and both times that seemed to tie in with some node killing on my instance.
Storage activity may result in RAM spikes.
That’s a different effect than slowly crawling leak.
Both can lead to OOM however.
I tried to upload, but it just keeps printing:
Connecting to The SAFE Network...
The client still does not know enough network nodes.
I noticed several times strange activity for my node:
It starts to consume a lot of download traffic (3-6 MB/s) for 10+ minutes (I didn't measure exactly), and at the same time I see almost no fresh records in the record_store directory and no corresponding upload traffic (it is ~5 times lower), so it is not rebroadcasting.
The only reasonable explanation is that such traffic is wasted - the node requests data it has no interest in. If I remember correctly, someone said that nodes migrated from pushes to pulls, which means there should be no forced movement of data.
upd. It has been happening for at least an hour now. During this hour, the node downloaded approximately 3 MB/s * 3600 s ≈ 11 GB of data, while only 13 records with a total size of 0.004 GB were created (~2750 times less). The total size of all records is 0.4 GB (28 times less).
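The back-of-envelope arithmetic above, spelled out (using 3 MB/s as the sustained rate, the low end of the observed range, so the ratios come out slightly under the ~2750x and ~28x quoted):

```python
# Back-of-envelope check: sustained download over one hour vs. data stored.
downloaded_gb = 3 * 3600 / 1000  # 3 MB/s for 3600 s = 10.8 GB (~11 GB)
stored_gb = 0.004                # 13 new records created in that hour
total_records_gb = 0.4           # total size of all stored records

print(round(downloaded_gb, 1))                  # ~11 GB downloaded
print(round(downloaded_gb / stored_gb))         # ~2700x more than stored
print(round(downloaded_gb / total_records_gb))  # ~27x the whole record store
```

Either way, the downloaded volume dwarfs anything the node kept, which supports the "wasted traffic" reading.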
I wonder if it’s just that the node you are trying with:-
export SAFE_PEERS=
is dead. That’s what I found earlier. Maybe some of the others listed in the instructions are still working.
But feel free to hit one of my nodes that is still working:-
export SAFE_PEERS="/ip4/54.82.83.176/tcp/35175/p2p/12D3KooWPhVptRnwNXfqtENhq1GNxCnyNBPcJu7xALPFQ72KTBii"

