Totally agree. A society with the freedom to choose is one we should all strive for.
Just because you want a nice work/life balance doesn’t mean everyone does.
Most of the highest achievers in human history probably didn’t have a “perfect” work/life balance, and we should be grateful for the passion, desire and determination that kept them going and gave us the advancements we all benefit from today.
Hopefully we will look back at David and the team like this in the future for what they have given us. They sure seem to be following that path.
Thanks to all the Autonomi team. Keep going, you are smashing it!
These are not bribes though, @happybeing, and @Bux has had the teams all meeting up in person recently and attending events, and she is the first to tell folk to slow down and take a break. However, we are in launch mode and some level of positive stress for that is good IMO.
Folk are not leaving due to being pressured. In this field devs can turn over in 2 years now, so we are lucky in that respect. Some will fall out when personnel change and whatever, but that’s also normal.
Others like me tend to be on at all sorts of times, but it’s because, like @chriso said, we really want it to succeed and feel good about pushing hard to make that happen.
But nobody is pressured to work unreasonable hours at all. I have seen many of the new business teams on at all times, day and night, recently; most devs are not, and they work a 9-5 type existence, and that is fine. Some work crazy hours and that is also fine, but we do try to tell folk to rest and take a break.
As so often the rebuttals miss the point - which I have already re-stated and clarified with regard to choice.
I didn’t mention ‘pressure’. I focused on the effect - and made it clear what would suggest a problem here. Of course people leave for many reasons but that’s not what I picked up on.
Nor did I mention ‘perfect’, suggest people should not have choice, suggest 9-5 is best, or that working weekends or very hard is bad.
I recognised and warned about these risks months ago because I’ve seen this close up: done badly and done well. There are misconceptions about productivity, and toxic work cultures arise because of those misconceptions, and high rewards are a common but not defining feature.
That people seek those rewards and feel grateful for them is no surprise - that’s why they have an impact, which can be good or can be bad - on both efficiency and individuals. I sought them myself.
My experience is that a culture where people are working too hard for too long is less efficient - it takes longer to deliver - and causes long-lasting damage, including ending careers. And it’s often the higher-performing people who suffer those effects.
Telling people to take breaks while creating a culture that goes the other way is also not uncommon.
I don’t know that is happening here, but am concerned because what we do know looks like it may be. And the responses to my point remain consistent with that being the case.
That would be good news! CPU is currently the limiting factor for a lot of users methinks. I can run 10 nodes on a RPi4 but I get a bit of shunning. But I can run 20 no problem on my big desktop with only 4 cores allocated.
When the nodes are fuller I don’t think I’d even get 10 nodes started on the Pi. At the moment, starting each node keeps the Pi very busy for more than 3 minutes while records are downloaded - it shows as more than 50% busy. The network seems to be less than 5% full at the moment. When the network is fuller there will be more data to download, and I can see the Pi staying more than 50% busy for more than 5 minutes, which will cause nodes to be stopped in accordance with that new function.
Yes I know. But my point is that in the future there will be lots more records to download when the network is fuller. While RPi4s and other low-powered devices like old laptops and NAS boxes are feasible now, that won’t be the case when it takes 10 or 15 minutes to download the records for a new node. At the moment CPU is the limiting factor for starting enough nodes to provide a decent amount of storage, but in future RPi4s, old laptops etc. will be excluded: their CPU will stay over 50% utilisation for so long that even one node will be killed after 5 minutes, and even if that one starts we won’t get another started. So a reduction in CPU usage would be very interesting.
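To make the worry above concrete, here is a minimal sketch of the kill rule as described in this thread (a node gets stopped once CPU utilisation stays above 50% for 5 minutes). The threshold, window and sampling interval are assumptions taken from the posts here, not the actual safenode implementation:

```python
# Hypothetical version of the "over 50% busy for 5 minutes" rule
# discussed above. All constants are assumptions from this thread.

CPU_THRESHOLD = 50.0      # percent busy
WINDOW_SECONDS = 5 * 60   # sustained period before the node is stopped

def should_stop(samples, interval_seconds=10):
    """samples: CPU readings (most recent last), one per interval_seconds.

    Returns True only if every sample in the trailing 5-minute window
    is above the threshold, i.e. the load was sustained, not a spike.
    """
    needed = WINDOW_SECONDS // interval_seconds
    if len(samples) < needed:
        return False
    return all(s > CPU_THRESHOLD for s in samples[-needed:])
```

On a Pi that sits above 50% for the whole record-download phase, a rule like this would trip as soon as that phase lasts longer than 5 minutes, which is exactly the scenario described above for a fuller network.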
A wee read through the Formicaio thread should allay some concerns.
This is where some of the performance increases will come from. I’ve wondered myself for years how much faster it could be in production with optimised code and no or minimal logging.
@Southside Agree this is important. It would be ideal to have a permissive logging switch with CLI access via SSH, i.e. turn logging on/off, and likewise a separate switch to turn on/off read-only access to the logs for a select, configurable remote address. Access to the logs would be subject to a password one can change, allowing Maidsafe or other third parties to selectively read logs interactively, or auto-read them, to help with debugging a problem or a new release.
This way we can opt out, if we wish, of storing logs and allowing access. It should be selectable per node or per group/fleet of nodes too…
Keep in mind the Pi4 is only PCIe Gen2 on its bus, so if your storage is a SATA SSD, you will want to keep 30% of the capacity unassigned so that defrag, GC and wear-levelling background tasks have enough spare space to do their job quickly, all the time. That way you can avoid getting shunned because of this background work competing with foreground safenode CPU use at the most inopportune time.
I downloaded the latest node-launchpad, and after trying to set it up I get this error and it won’t start again.
Message: index outside of buffer: the area is Rect { x: 0, y: 0, width: 238, height: 29 } but index is (235, 29)
Location: /home/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/ratatui-0.29.0/src/buffer/buffer.rs:253
Looks like a terminal drawing error. The window has some off-by-one error or something.
Did you resize the window?
It’s a TUI, not a GUI, and as such it often falls down if you try to treat it just like another window.
PITA of course but I always got the impression it was built in a hurry to do an acceptable job for an acceptable range of users.
Try it as a full-screen window and I bet it works OK for you.
The 64GB safenode store-size increase will actually cut comms traffic in half and CPU % utilisation in half, given there are now half the nodes for the same amount of disk storage allocated. So if one has the extra disk space and RAM to double up, one can really double the safenode count on the system to re-use the freed-up CPU: the existing CPU should be able to handle the same number of nodes with the larger storage capacity, with the same core and thread count, as it did before at half the store capacity. Sure, it’s using 2X more memory.
As such, on this test machine I had allocated 105GB to service 3 nodes. Now I will up the storage use to 128GB and run only two nodes, because I don’t have the space on this old klunker of an MSWIN11 box (8GB RAM, 256GB SATA SSD, Intel i5, 4 cores/8 threads). The two remaining nodes will run fine and be more performant than three nodes, without any shunning going on, plus I get more memory and CPU clock back to make my daily Internet driver UX much faster… It’s a good tradeoff.
Which is no longer valid now, btw: 2MB was a pretty good average back when native spends were stored on the network (and for every chunk there needed to exist one spend), but with the current change to ERC-20 payments there are no small spends on the network and it’s all just big fat chunks… So we’re definitely at 64GB nodes since the switch to ERC-20.
I was thinking that when I wrote it since most of the records stored are 4MB in my nodes.
I guess they still justify calling them 32GB nodes because they expect the nodes will on average only get to 1/2 full before they cost too much. LOL, good luck with that since the WP came out.
They should just call a spade a spade and say these are 64GB nodes we are running.
Actually I would think this is just due to no time and a glitch - correct me if I’m mistaken @JimCollinson @Bux @rusty.spork - but I think it’s safe to assume we’re at 64GB and there’s no way to twist or turn it.
The first network use case (due to the fee structure and stuff) will most certainly be storing rather large files, which then get split into full 4MB chunks…
Decentralized physical infrastructure… being used under the hood of applications, possibly without the user even knowing. So the user doesn’t pay; the ones paying are the app devs, which gives an incentive to create fewer, larger blocks instead of many small ones - not necessarily because of the storage cost, but simply because of the blockchain TX fees that will most certainly dominate at the beginning. And even if the user knows and were paying, there would be good reason (satisfied customers) to save on cost and do the same.
Imho the 2MB average estimate was pretty safe to assume back when we had the native currency, but now it’s absolutely wrong (and the assumed max storage space per node in launchpad, 35GB, is wrong and too little too, btw, if nodes were to fill).
Yes, I confirmed with @qi_ma and Jae (what is your @ over here?) that a node will take 65.5 GB at the max. This is NOT reflected in LP, which displays 35 GB. This needs to be updated.
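For what it’s worth, the 65.5 GB figure lines up neatly with the 4 MB max record size observed earlier in the thread. The 16,384-record cap below is my inference from dividing the two numbers, not a constant I’ve confirmed in the code:

```python
# Sanity check on the 65.5 GB max store size, assuming a 4 MB max
# record size (observed above). The record cap is inferred, not
# a confirmed safenode constant.

MAX_RECORD_MB = 4
MAX_RECORDS = 16_384  # inferred: 65,536 MB / 4 MB per record

max_store_mb = MAX_RECORD_MB * MAX_RECORDS  # 65,536 MB
max_store_gb = max_store_mb / 1000          # ~65.5 GB in decimal units
```

If that inference is right, the "64GB node" shorthand is really a 16,384-record cap, and only fully 4 MB records get you to the 65.5 GB maximum.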