@tfa are you using http://safepress.io/ or another method for updating static pages?
Do you have a link to that. I honestly thought it was still 6. I need to keep up
Modifying a static site is a matter of modifying a file with the NFS API (index.html in the simplest case). This can be done in Rust using the safe_core crate (which is what I did) or in any language using safe_ffi (but I didn't test that).
No worries, we are altering 101 things in many tests, so this is part of all that. There should be at least 4 copies across the network. The 3 data types seem not to be required now (backup and sacrificial). We also have something pretty neat coming along with regards to this.
This has changed since the merger of immutable data manager and structured data manager. Now chunks are managed like SD objects, meaning one copy in each member of the close group around the ID of the chunk. And currently close group size is 8.
But that may change again in the future.
I did a restart of the seed vault at midnight UTC for maintenance. Please restart your vaults if you would like to reconnect.
I am only seeing my own vaults after restarting them all. You will need to restart in order to reconnect to community1.
It is a weakness of the current safe_vault version that if the 'contact' IPs disappear, even for a short while, the other vaults are not able to reconnect without being restarted manually; they appear unable to reconnect to a previously lost contact node. The future activation of the bootstrap cache might remedy this.
OK, there's seven now, and I quickly created a username and uploaded the same site as usual: http://test.bluebird.safenet
EDIT: I wasn't expecting to end up restarting the vault. I regret not giving warning of this! I was hoping to archive (rename and move) the Node.log file via a script, expecting that the vault would simply create a new log the next time it needed to write something, but… it didn't, and a restart was needed.
A periodic, preferably daily, archiving of the log is needed to keep the statistical processing for the plots within a predictable limit. I will test another approach on another Linux box: swapping in a template/stub log file on a running vault. If that works, the vault could be left running indefinitely.
Archive Nodes.
Is it technically possible to log IPs of participating vaults?
I only have three nodes in my routing table. I'll do a reboot of the seed server in a minute.
EDIT after half an hour: I still see only three entries in my routing table. You have to restart your vaults in order to join the network.
Yes, you could run a packet sniffer and collect the IPs of anyone joining you as a vault or a client; that's true of any SAFE network (an example command is sketched at the end of this post).
I've chosen not to do so because the log gives me the aggregate figures for plenty of interesting data.
When the proxy cache is activated in a future version, that's what your vault would be doing: writing a list of the IPs whose vaults had been added to your routing table.
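For illustration only, a capture along those lines could be as simple as the following; the interface name and port are assumptions here, so substitute whatever your vault actually listens on:

sudo tcpdump -nn -i eth0 'tcp port 5483' -w vault-peers.pcap

The -nn flag keeps addresses and ports numeric, and the resulting pcap file can be summarised later to list the distinct peer IPs.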
In that case it would be easy to legally target the vault hosts of a particular network, wouldn't it?
I expect so. They are probably hoovering up all of our meta-data, including participation on this forum.
But only geeks or dreamers, not active criminals, are interested in test nets, so all they're collecting is a very looong list of potential future criminals, with no way (if the tech works) to tell what any one IP is accessing or uploading.
Sure, that wasn't my point. Just wanted to understand the predetermined 'breaking points' of network participation rate.
They're not unrelated:
Anonymity/pseudonymity is simply the size of the crowd that you can hide in, the crowd an adversary would have to search through. It isn't a binary quality. The best anonymity would be the world population; the worst would be some combination of traits, a 'fingerprint', that only you have. Acceptable anonymity is a crowd big enough to be impractical to search through.
So on a test net of a dozen people, I wouldn't upload anything illegal.
I think this should work, if I understand correctly what you want to do:
cp Node.log Node.log.20160523
>Node.log
If that is the case you could use logrotate on Linux, with the 'copytruncate' option in the logrotate conf file.
https://support.rackspace.com/how-to/understanding-logrotate-utility
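For reference, a minimal logrotate entry for this case might look something like the following; the path and rotation count are placeholders, and copytruncate makes logrotate copy the log and then empty it in place, so the vault keeps writing to the same open file:

/path/to/Node.log {
    daily
    rotate 14
    copytruncate
    compress
    missingok
    notifempty
}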
Thank you for the suggestion. That does indeed work, unless the vault is actively writing to the log, in which case the zeroing command has no effect.
Since that will only be the case a small percentage of the time, I can reduce the risk by exploiting the fact that a successful zeroing changes the file's creation time, and looping until that happens.
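A rough sketch of that retry loop, checking the file size rather than the timestamp as a simpler proxy for "the truncation actually stuck"; the date format, size threshold and sleep interval are all illustrative:

while true; do
    cp Node.log "Node.log.$(date +%Y%m%d)"   # keep a dated copy of the current log
    > Node.log                               # try to empty the live log in place
    sleep 5
    # if the log is small again, the truncation stuck; otherwise try once more
    [ "$(stat -c %s Node.log)" -lt 1024 ] && break
done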
What I was doing, on another machine and not the seed vault, was waiting until the vault's console showed that it was quiet and then running a script that does:
mv Node.log 2016.05.23.log
touch Node.log
Since the commands would be a few milliseconds apart, no problem, right?
But one of three things would happen: either Node.log would stay at zero bytes regardless of what the vault was doing; or nothing would appear on either the console or the log, even when other vaults on the same host started and stopped (which normally is guaranteed to get a response); or, incredibly, the vault would continue writing, but to the second file. I was watching this in the file explorer and saw it more than once.
hi all,
slap a copy of webmin on the box and use it and its nice web interface to run the log rotation etc…
rup
@rupert Thanks for the suggestion but my purpose is to learn the use of low-level building blocks rather than only to provide a server.
ah, ok…
use logrotate?
or lower level?
rup