Just a little careful soldering.
It's rarer than you think. The first time I had aircon in my house was in my retirement; fans were the norm before that. Yes, businesses are air conditioned. The number of houses with aircon is increasing a lot now, but it's certainly a long way from the US.
Also, the heat from a 20-watt small computer will do nothing to a house without aircon, considering the kilowatts of heat entering through the windows, walls, floors (houses on stumps) and ceilings. Hence open windows, so heat can escape again on the warmer air leaving the house. Only in the last 20 years have rental homes had to have ceiling insulation, and not walls or floors. A typical home with ceiling insulation requires 4 to 8 kW of cooling to keep it cool, and by cool I mean 25°C (77°F).
So in Australia we used to just open windows and use fans to keep cool in summer. Increasingly aircon is being installed, and to do a whole house there is a 6 to 8 kW cooling unit in the living/kitchen area and a 1 to 2 kW unit in the main bedroom. Depending on the house, maybe one or two more units. (Our neighbour has one for each bedroom plus one for the main living area; they have a more expensive place.)
And in winter heating is welcome. The further north you go (above Sydney), no houses have special windows (double pane) and they're usually not airtight either. It's why Aussies don't have the deadly mould issue most homes in the US have to deal with one way or another. And yes, this is changing slowly as people try to save energy. Tassie, though, is snow country, so homes there are much more like US homes.
tl;dr much of Australia would not even notice 20 to 100 watts of heat from a tiny/small computer setup.
I believe for most of these small things that take USB power you can buy an adaptor that goes from a barrel jack to a USB connector and has the required electronics in it to tell the device you have X volts and do the handshake.
Or solder onto the power rails as @TylerAbeoJordan says
Shingled Magnetic Recording (SMR) drives are slow because the storage controllers used by most implementations use what are effectively old tape-drive read and write algorithms, still used on their rotating-media cousins, HDDs. Flash is completely different. While CMR (Conventional Magnetic Recording) drives are better than SMR when using the same storage controller algorithms, there are new algorithms coming which actually give 8x the speed of SMR, equal to 2x improved CMR write and read speeds (using the CloudProx Linux Kernel Module software controller), and SMR drives are 25 to 35% cheaper. At the moment, though, SMR drives from WDC are behind in the price/performance comparison. I expect that to change within a year, when SMR will actually be a better deal than CMR HDDs, with new software controller offers for SMR HDDs coming from an established (since 2006) software vendor I know of, which writes and supports drivers that work on any standard Linux box using off-the-shelf large-capacity SMR HDDs from WDC, matching CMR speed at 25-35% lower cost.
SMR will never be a good deal unless you really don't care about performance or availability. Adding another layer of driver/firmware sauce on top of already unpredictable SATA and SAS firmware is not going to make it better either.
Regardless of SMR or CMR, drives are getting way too big. It's all fun and games until you have to suffer for weeks waiting for an array to rebuild and another drive dies in the meantime. I've never seen a happy 22 TB enterprise drive user.
Also, I am really enjoying the nerdiness in these threads. We should team up and offer a rescue service that can come and intervene when someone needs to bore their mother-in-law out of the house
Agree. During earlier test-nets, when the network was not very efficient, I had great success running a Guzila mini PC, scrubbed of Windows 10 and running Linux. External 2 TB SSD on USB 3.0. Worked great. I do have a shitload of bandwidth on a dedicated fibre line, however. But not a lot of tinkering involved. The network was designed for low-end machines and everyday people. This is huge, I think.
I plan on using the Odyssey I have here. It has an M.2 NVMe and an M.2 SATA slot, both on board, plus connectors for 2 SATA drives. I ran an M.2 NVMe and a disk drive on the SATA connectors, powered by its up-to-24-watt power brick.
My company is getting some in the next few weeks, but they will be in a large object storage system, so 'rebuilds' are spread across hundreds if not thousands of drives. With more than 8,000 drives, the data from a couple of failed drives is constantly being rebuilt somewhere. I agree that a 22TB drive in something like a 6-disk RAID6 setup will take days to rebuild and will stress the other drives so much that their risk of failure goes up. This is one of the reasons why object storage has taken off. It's the only real option for large storage systems these days.
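As a rough sense check on those rebuild times, here's a back-of-the-envelope sketch; the ~150 MB/s sustained rate, the ~30% efficiency factor and the 100-drive fan-out are assumptions for illustration, not measured figures:

```python
# Back-of-the-envelope rebuild-time estimate (assumed numbers, not benchmarks).
DRIVE_TB = 22
SUSTAINED_MBPS = 150        # assumed average drive rate during a rebuild
EFFICIENCY = 0.3            # assumed fraction of that rate achieved while serving other I/O
PARALLEL_DRIVES = 100       # assumed rebuild fan-out in a large object store

drive_bytes = DRIVE_TB * 1e12
best_case_h = drive_bytes / (SUSTAINED_MBPS * 1e6) / 3600

# RAID6: the single replacement drive is the write bottleneck.
raid6_h = best_case_h / EFFICIENCY

# Object storage: the lost data is re-replicated in parallel across many drives.
object_h = best_case_h / EFFICIENCY / PARALLEL_DRIVES

print(f"pure sequential fill:  {best_case_h:.0f} h")
print(f"RAID6-style rebuild:   {raid6_h:.0f} h (~{raid6_h / 24:.0f} days)")
print(f"distributed rebuild:   {object_h:.1f} h")
```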
If SMR drives stop being total dogshit and become useful for something, I'm all for it. But the manufacturers have a long way to go to rebuild trust in the concept, given their behaviour of 'submarining' the technology into established product lines without telling anyone, and most people who used them having a terrible time.
I'll just assume you're talking about a Ceph cluster here. I don't know any other storage system that can deal with these insane amounts of data.
Filling up a 22 TB drive will take you about 30 hours if you can keep the data flowing, keep the write cache full, and only write sequential data.
But you won't, because the CRUSH algorithm will decide which PGs to rebuild, and the drive head will go up and down and left and right, killing your throughput (and your time to rebuild).
Assume you're filling it up with an average of 75 PG instances per disk, meaning your disk holds 75 replicas of data that are spread throughout your cluster on other disks. If one of those other disks dies while you're rebuilding the first patient, you will have a single replica left, which means Ceph will stop serving the data on those PGs because its consistency can no longer be guaranteed.
That is if you are using 3 replicas. Going with erasure coding makes it progressively worse: rebuild times will no longer be counted in days but in weeks (if you're lucky).
22 TB drives would be great if throughput and latency weren't still stuck in the stone age…
Actually it's worse than that, I realize. Each of your 75 PG instances talks to 2 other PG instances somewhere, so a dead disk puts up to 150 other drives in the crosshairs of misfortune.
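To make those numbers concrete, here's a quick sketch of the arithmetic; the ~200 MB/s sequential rate implied by "22 TB in ~30 hours", the 75 PGs per disk and the 3 replicas are taken from the posts above, everything else is an assumption:

```python
# Rough Ceph rebuild arithmetic using the figures from this thread.
DRIVE_TB = 22
SEQ_MBPS = 200         # implied by "22 TB in ~30 hours" of purely sequential writes
PGS_PER_OSD = 75       # average PG instances per disk, as assumed above
REPLICAS = 3           # replicated pool, size = 3

# Best case: sequentially refilling a replacement drive.
fill_hours = DRIVE_TB * 1e12 / (SEQ_MBPS * 1e6) / 3600
print(f"sequential refill of one drive: ~{fill_hours:.0f} h")

# Fan-out of a single disk failure: each PG instance has peers on other OSDs.
peers_per_pg = REPLICAS - 1
drives_at_risk = PGS_PER_OSD * peers_per_pg   # worst case, every peer on a distinct disk
print(f"other drives involved in recovery: up to {drives_at_risk}")

# With size=3 and the usual min_size=2, losing a second peer disk mid-rebuild
# leaves one copy, so Ceph stops serving I/O on the affected PGs.
```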
WDC has plans to ship SMR in volume initially as a replacement for old tape systems, which are still used today by many organizations: that is, take old historical data that is still important but currently stranded on tape and move it onto cheaper SMR as an alternative archive method, where it is essentially read-only media after being written once. (It's 'writes' which wear out flash media.) Certain applications involving small record reads or streaming data (video or audio) will benefit from SMR cost-wise as a much faster archive-retrieval method, since the immutable data (per file) is perfectly linearized when copied to the SMR drive, making reads really fast.
So SMR today has an emerging, cost-effective place in the storage market as an archive method of storage. It's just not suitable for Safe Network nodes, other than for archiving immutable snapshots to aid quick, low-cost recovery of data if a physical breakdown disaster strikes the node's production r/w HDD or SSD storage.
Good lord that is a frightening perspective.
Virtual tape library systems have been a headache forever; they have never lived up to their promise. They were a mess back in the Sepaton days, and they're still a mess. You have to replace them with a new model every 5-7 years when the model you bought goes EOL, they come with endless maintenance contracts, and they just can't deliver the thing a backup tape is good at: the ability to be shipped offsite to an Iron Mountain facility and sit there uncompromised by bugs, datacenter outages and endless firmware upgrade cycles.
I’m actually thinking SMR drives might be suitable for Safenodes! This is on my list of things to test.
I think their piss-poor sustained write rate won't be a problem, given that they will sit behind a network connection that throttles the data transfer rate to and from the drives. For low numbers of nodes anyway, not hundreds. And with node startup staggered, which I think will be necessary anyway so nodes aren't booted from the network for not responding quickly enough during the flood of activity when nodes start up.
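Something like the sketch below is roughly what I mean by staggering; the `start_node.sh` wrapper, node count and interval are hypothetical placeholders rather than real Autonomi tooling:

```python
# Minimal sketch: stagger node startup so they don't all hit the network at once.
# "./start_node.sh <index>" is a hypothetical wrapper around whatever command
# actually starts a node; NODE_COUNT and STAGGER_SECONDS are placeholders.
import subprocess
import time

NODE_COUNT = 10
STAGGER_SECONDS = 120   # assumed gap between launches

for i in range(NODE_COUNT):
    subprocess.Popen(["./start_node.sh", str(i)])
    print(f"started node {i}, sleeping {STAGGER_SECONDS}s before the next one")
    time.sleep(STAGGER_SECONDS)
```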
SMR will work well provided you have a fast in-memory storage controller which sets hardware interrupts on and lets the in-memory code handle all I/O and the related GC, defrag and wear levelling; that means running the FTL (Flash Translation Layer) in memory. Check out the LF SEF project and watch the videos to understand where this is going with flash. KIOXIA (Toshiba Memory America) handed over their SEF innovation to the LF in Oct 2023. It will take time for the community to adopt it, but SEF is the future of storage in the Linux sense, easily ported to BSD/macOS, and probably the MS Windows environment too at some point. Given that MS Azure Win 11 is simply a guest OS running in a VM on top of KVM, with Linux as the host OS underneath, you can run Win 11 in the same fashion on your desktop or home storage appliance and get good write results with SMR, if you do the above. It's what we have been working on for a long time at my day job; we are just looking at SEF now to adopt the SEF taxonomy of terms in the CLI and API for our own fast storage software controller LKM (Linux Kernel Module).
Run… Ceph, GlusterFS and Lustre, take your pick. Distributed file/object store flexibility? Sure. Slow as shite? Yup.
It's not that slow for a farming use case… so far… provided there are enough HDD spindles, internal network bandwidth, RAM, and hosts (excluding flash storage here). It can still generate enough I/O to easily handle the nodes' I/O requirements.
The external bandwidth pipe (WAN) will get maxed out before the internal I/O bandwidth on, say, a distributed storage platform, since you can grow the internal platform but are likely capped at an upper limit by your ISP, and relocation isn't an option. That is, if the sizing exercise is done properly on the initial setup.
Anyhow, it definitely isn't a low-tech, low-complexity setup either, so it's not for most consumers.
Interesting background info on SMR and the advancements in the LKM area for SMR drives. Thanks for the info; I appreciate it.
For me, I am choosing not to go with any flash storage. I simply don't like the wear-out, even though its reliability might be much higher than rotational disks. I will continue to opt for used hard drive spindles at steep discounts and turn them into Ceph clusters specifically for Autonomi farming.
I definitely prefer the flexibility and horizontal scaling of one file system over 100s of tiny individual hosts and their attached individual filesystems, in terms of management. I'd rather stack up on refurbished 1U or 2U servers (steep discounts) than tons of RPis, and run a higher density of safenode PIDs on each server than the lower limits of an RPi allow. Not to mention the non-standardized shelf space and organizational requirements of their form factor in, say, a server cabinet, which would be a bit annoying.
It's not a plan that fits everyone, and that is perfectly okay…
Autonomi, as we all know, welcomes an array of mixed node setups and sizes for everyone to participate with.
I perform an in-depth AFR (annualized failure rate) analysis on 10 TB, 12 TB, 14 TB and 16 TB hard disks, using the SMART data from 230,000 drives as a data source. We find out which manufacturers perform best, and which models are the lemons to avoid. All these vendors state their drives have an AFR of 0.35%, but who is really giving the accurate picture?
I analyze data from Backblaze for over 345,000 disks over 10 years to identify how hard drives compare for lifespan, reliability and failures between the major disk manufacturers, and which are the best HDDs. The analysis includes both consumer and enterprise disks from Western Digital, Seagate, Toshiba and HGST. I also cover the history of consolidation in the HDD manufacturing space, including companies like IBM, Fujitsu, Samsung, Maxtor, Quantum and others.
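For anyone who wants to poke at numbers like that themselves, the core AFR calculation over Backblaze-style daily drive stats boils down to failures per drive-day, annualized. This is just a sketch (the CSV path is hypothetical and the column names assume the public Backblaze daily-stats layout), not the method used in the video:

```python
# Sketch: annualized failure rate (AFR) per drive model from Backblaze-style
# daily drive stats, where each row is one drive on one day and failure == 1
# on the day a drive dies. Column names follow the public Backblaze CSVs;
# the file path is hypothetical.
import pandas as pd

df = pd.read_csv("drive_stats.csv")

per_model = df.groupby("model").agg(
    drive_days=("failure", "size"),   # one row per drive per day
    failures=("failure", "sum"),
)
per_model["afr_percent"] = per_model["failures"] / per_model["drive_days"] * 365 * 100

print(per_model.sort_values("afr_percent").head(10))
```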
Interesting that he doesn't really cover the controllers, the software that oversees read/write efficiency, and how it can wreck drives.
Any major benefit for farming rewards in using an NVMe SSD instead of a regular (SATA) SSD?
Assuming CPU/bandwidth isn’t the bottleneck.
From my experience so far, 10 nodes run quite comfortably with just 1 CPU and 1 GB of memory (this code is nice and lean!), but I have seen CPU usage spike at times for some of the nodes. I guess under heavier usage, aiming for a max of 5 nodes per CPU is probably more appropriate.
Memory runs at 30-100 MB per node.
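A tiny sketch of how that back-of-the-envelope sizing plays out for a given host; the host specs are made up and the per-node figures are just the rough numbers above, not hard limits:

```python
# Toy sizing estimate using the rough per-node figures above (illustrative only).
HOST_CPUS = 4
HOST_RAM_MB = 4096

NODES_PER_CPU = 5        # conservative figure for heavier network usage
RAM_PER_NODE_MB = 100    # worst case of the 30-100 MB range

by_cpu = HOST_CPUS * NODES_PER_CPU
by_ram = HOST_RAM_MB // RAM_PER_NODE_MB

print(f"CPU allows ~{by_cpu} nodes, RAM allows ~{by_ram}: plan for {min(by_cpu, by_ram)}")
```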
I might experiment with even more nodes to see if I can break something and get them blacklisted for being too slow etc