How can I run a node and earn the coin?

Alright, so I have looked at IPFS/Filecoin, Sia, Storj, Arweave and Autonomi (SAFE).

What I want is to get, say, $100K of storage from Google Cloud Platform or Azure and then use it to earn coins in these platforms.

Filecoin makes it super-complicated: software that will accept deals, “verified deals” for Filecoin Plus giving you 10x the earnings, and then there are “snap deals” and “sealing”, which is very intensive and requires a lot of computing power.

By contrast, what would I have to do to just start earning the coin in Autonomi? Download the software, set the CPU and hard disk limits, and that’s it?

How much would I earn today if I was to buy $100K in hard disk space? Would I be filling it with data already? Are there going to be block rewards or mostly fees?

I imagine there aren’t any block rewards in the traditional sense because there is no blockchain (unlike in Filecoin) and each section operates in parallel.

My main questions are:

  1. What are the steps to get started? Does SAFE have something like this: Mining Guide | Arweave Docs

  2. About the data: is there enough of it to start earning coins now? What can one expect to earn, and what are the economics of breakeven and profit expectations? Does SAFE have something like this: Realistic earnings estimator - Storage Node feature requests - voting - Storj Community Forum (official)

  3. What would the profits be versus the underlying hardware costs in AWS or GCP? And what does one do with the coins once earned: what wallet does one keep them in, and how does one transfer them to an exchange or a bridge to trade them for, say, USDT on ETH? This is a good analysis of the economics: Reddit - Dive into anything

2 Likes

We are only in the testnet phase, so you’d earn nothing today. Tokens on the network are not tradable for $$.

Launch is scheduled for October, so that would be the earliest opportunity to earn.

As for how much you might earn, I suggest you join in with the testnets and get a feel for the process as well as read further here on the forum. Personally I don’t think it’s wise to jump in with a large investment until after launch to see what the network will pay as it automatically balances out the payment for the provided storage.

:ant: is a storage market and doesn’t rely on centralized control - so the price for storage will be a market price.

5 Likes

In addition to what @TylerAbeoJordan said.

Earnings will come from people uploading data. Note that when your node joins, it has to accept, without payment, chunks from close neighbours for any chunk it becomes one of the closest nodes to.

This means it depends on how many people out there trust Autonomi (when it launches) to store their data on it. So the more people you tell about Autonomi at launch, the more data there will be to upload.

You have no control over where your data will reside: every file is cut up into chunks (a minimum of 3 per file) as it is encrypted. The hash of each chunk determines where in XOR space it will reside, and since nodes are randomly placed in XOR space, it’s pretty much impossible to ever cause your file to be stored on any particular node.

Thus the more people, the better for node operators, who receive newly uploaded chunks and earn tokens for storing them.
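To make the placement idea concrete, here is a minimal sketch (not Autonomi’s actual code, just the general content-addressing idea): a chunk’s hash is treated as its address, and the nodes whose random IDs are closest by XOR distance are the ones that would hold it.

```python
import hashlib
import os

def chunk_address(chunk: bytes) -> int:
    # Content addressing: the chunk's hash *is* its location in XOR space.
    return int.from_bytes(hashlib.sha256(chunk).digest(), "big")

def closest_nodes(addr: int, node_ids: list[int], k: int = 5) -> list[int]:
    # XOR distance: the smaller a ^ b is, the "closer" the node is to the address.
    return sorted(node_ids, key=lambda n: n ^ addr)[:k]

# Simulate randomly placed 256-bit node IDs.
nodes = [int.from_bytes(os.urandom(32), "big") for _ in range(100)]
addr = chunk_address(b"some encrypted chunk data")
holders = closest_nodes(addr, nodes)
assert len(holders) == 5
```

Because the node IDs are random and the address comes from a hash, an uploader cannot steer a chunk toward any particular node, which is the point being made above.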

Low, probably very low.

They will be competing with all the people using spare resources on their home computers and internet connections. For datacentres you will be paying for the VM and for the quantity of data transferred over the internet. Home computers represent close to zero cost for the operator; data centres are much, much higher in comparison.

A rough estimate is that there will be hundreds to thousands of people running anywhere from 20 to 500 nodes each from home at launch. In very early beta (where it is now) there may be only a few people with the technical skills to run nodes from the CLI, and even then there are a lot of nodes. Later in beta there will be a GUI to start nodes and keep an eye on them and their earnings whenever you want.

The wallet will reside on the network with development ongoing at the moment to have hardware wallets interface and provide other means to do transactions.

For bridges, it’ll be via exchanges or other forms of trading.

7 Likes

Early rewards for people who help with testing before launch are to be announced.

3 Likes

And @JimCollinson is getting very excited over it.

3 Likes

@neo This might sound like a n00b question but - THERE ARE NO STUPID QUESTIONS

When tiny files are uploaded and need to be padded (with zeroes?) so that we can get three chunks out of them, is it not likely that many will hash to the same value? Or was that all thought about 10+ years ago?

4 Likes

@mav implemented this if I recall, so I have no worries though it is still a good question. Probably answered at the time though.

BTW I don’t think it’s done by padding.

Also, the smallest file that will be encrypted is three bytes, so one- and two-byte files will be stored just as they are. I can’t think of an example where that would be an issue, but one day I expect it to byte someone [cough].

4 Likes

The only thing I have heard is that the smallest file is 3 bytes, with hands in the air over anything smaller. There was (is it still?) an issue with zero-length files in directories.

AFAIK there is no padding. And for all the 3-byte files there are 2^256 possible hash values to go around. Hopefully the hash function gives a unique value for different small chunks.

It would only be an issue if the hash function did not give a unique value for each value of a byte.
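This is easy to check empirically for the tiny inputs being discussed. A quick sketch, assuming SHA-256 as the hash (Autonomi’s exact choice of hash may differ): hash every possible 1- and 2-byte value and look for collisions.

```python
import hashlib

# Hash every possible 1-byte and 2-byte input; collect the digests.
inputs = [bytes([i]) for i in range(256)]
inputs += [bytes([i, j]) for i in range(256) for j in range(256)]

hashes = {hashlib.sha256(data).hexdigest() for data in inputs}

# All 65,792 inputs hash to distinct values.
print(len(inputs) == len(hashes))  # True
```

No collisions among any inputs of any size are publicly known for SHA-256, so identical hashes for different small files are not a practical concern.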

3 Likes

I’m not sure what happens with zero-sized files atm either. This is the relevant code though (I was poking around in it due to a seek bug):

Not had a chance to look now, but it should be easy enough to unit test, if there isn’t something there already.

1 Like

The steps to get started haven’t really been put into a simple guide yet. I think a part of the reason for that is that things sometimes change slightly in terms of the commands needed but also because there is a new ‘safenode-manager’ which simplifies management of multiple nodes on a machine. Also, there will hopefully be a GUI available before launch.

The amount of data that will come in is unknown and will depend on interest from home, business and enterprise users. The cost to upload, for example, 1TB will vary according to the number of nodes running and the fill level of the network. The theory is that people will switch nodes on and off as running them becomes more or less worthwhile. For that reason it would be best to avoid large fixed costs, like buying a lot of storage online or hardware at home specifically to run nodes. Someone might think it a brilliant idea to splash out on capacity, only for the price of storing data to dip for a few months as more people do the same and the amount of data coming into the network fails to rise at the same rate.

It seems unlikely that running nodes in AWS, Azure, GCP or any other cloud provider will be profitable. People are doing it for the testnets (including MaidSafe themselves) because it’s easy and relatively cheap to get a lot of nodes up quickly and to scale up and down. But that’s just for short periods, with money we can afford to lose, and nobody is trying to make a profit or even break even, because there is no reward yet. Once profit is the goal, I doubt it will ever be economically feasible. The main killer is the network transfer cost rather than the storage cost. Many more GB are transferred than each GB of storage consumed, because of the replication of data and the messages nodes need to exchange to keep the network running. For example, I ran about 100 nodes in AWS for a few days last year and got hit with an AWS bill for several hundred $. The storage being provided to the network was about 100GB, but the data transfer out of AWS was several TB, and that costs a lot. The bill for the actual storage was a fraction of the cost.
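A back-of-envelope version of that anecdote makes the imbalance obvious. The prices below are illustrative, typical-cloud numbers, not exact AWS rates, and the 5 TB figure is a stand-in for “several TB”:

```python
# Rough cost split for ~100 GB stored but several TB of egress per month.
egress_tb = 5                    # assumed "several TB" of transfer out
egress_price_per_gb = 0.09       # illustrative cloud internet-egress rate, $/GB
storage_gb = 100
storage_price_per_gb = 0.023     # illustrative block/object storage, $/GB-month

egress_cost = egress_tb * 1024 * egress_price_per_gb
storage_cost = storage_gb * storage_price_per_gb
print(f"egress ≈ ${egress_cost:.0f}, storage ≈ ${storage_cost:.2f}/month")
# egress ≈ $461, storage ≈ $2.30/month
```

Even with rough numbers, transfer dwarfs storage by two orders of magnitude, which matches the bill described above.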

And anyway, running a lot of nodes in a cloud provider isn’t helpful to the network because it’s another kind of centralisation. It wouldn’t be good to have all the nodes running in an AWS, GCP or Azure region go offline at the same time. All providers can and do have issues from time to time.

The whole idea is supposed to be to make use of hardware that you already have at home using home network bandwidth you are already paying for. Some people will buy some specific hardware such as a Raspberry Pi 4 or other small computer or maybe just use an old laptop. It could take a loooonng time to recoup the cost of spending a couple of hundred $ on something to specifically run nodes. Some people will do it for the fun of it anyway!

The tokens earned will be transferable to an exchange and then swapped for other tokens or even ‘money’. I’m not an expert on that; all I know is I bought some eMaid on Uniswap a couple of years ago. If you have $100K to play with, buying eMaid while it is at around $0.60 might be smarter than hardware or storage and compute on a cloud provider, because the price per token is expected to rise. It kind of has to, because that price implies a very low cost of storage on the network.

But welcome back! I hope you like reading because there is some catching up to do!

5 Likes

It’s also the egress costs from these big clouds, which are very expensive, that make hosting Autonomi safenodes in those clouds a ‘no go’: moving the data out for downloads simply costs too much.

Imo it’s much more economical to rent a cage at a local co-lo facility (prices are cheap, especially in a secondary city or smaller town) for just power, space and an internet connection, then run your own leased or purchased hardware sized to be extremely economical for, say, 500 nodes, and that is it.

For example, multiple dedicated Raspberry Pi setups with direct-attached storage (DAS), sized in CPU cores/clock and RAM to host say 500 nodes, running as a low-cost RAID 1/2 setup with cheap SATA SSDs or even cheaper spinning-rust HDDs, networked together for internet access, is likely the cheapest way to go. Management is remote, via simple SSH into the safenode hosts.

It’s important to make sure the Pi chassis bought or leased makes it easy to hot-swap a faulty SSD or HDD. The SSDs do not need to be large capacity: 2TB SSDs, doubled up for RAID, are likely more than enough to support 500 nodes. So if one does get a single SSD failure (this will happen sometimes, as many SSD vendors have manufacturing flaws from time to time),

one can at least drive over to the local colo and replace the faulty SSD without shutting down the safenodes. With RAID configured, the nodes keep earning and their reputation stays intact, so they do not get shunned by their ‘close’ group of safenodes when those verify the chunks stored for uploaders.
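A quick sanity check on that 2TB sizing claim. The per-node storage cap below is an assumption for illustration (the actual limit varies by release), as is the 25% headroom:

```python
nodes = 500
gb_per_node = 2          # assumed per-node storage cap; varies by release
overhead = 1.25          # filesystem, logs and headroom; rough guess

needed_gb = nodes * gb_per_node * overhead
ssd_gb = 2000            # one 2 TB SSD (mirrored in RAID 1)
print(needed_gb, needed_gb <= ssd_gb)  # 1250.0 True
```

Under those assumptions a single mirrored 2TB pair does indeed cover 500 nodes with room to spare; if the per-node cap were raised significantly, the sizing would need revisiting.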

2 Likes

I agree, I’ve looked into it. But I think it’s marginal at best and will completely depend on the price of the token over time, which will depend on many factors. Lots of variables to consider, but the one thing that won’t vary is the fixed cost of the lease with the DC, which will be 6 months minimum, more likely 12. Thousands of pounds is a lot to commit without a solid return on investment, unless one can justify it just for giggles or to help the network. Except that it just increases consolidation.

And I agree about the RAID. I know it shouldn’t be necessary because the network is the redundancy but you don’t want to have to drive out to the DC for every disk failure.

I was looking at Supermicro disk heavy servers of which I coincidentally have a few spare. But the entire concept is looking very sketchy and not the best use of my time.

3 Likes

That’s why the hardware running the nodes should also be doing something else useful, like run an ecommerce website for instance…, or be used for storing your home videos, your Biz zoom conference or Fantom meeting videos and notes, etc… :wink:

1 Like

Nowadays renting a dedicated box @hetzner is cheaper than leaving a half-decent PC running 24/7 where I live (NL), at 40-50 usd cents/kWh…

And the green mafia won’t stop until no one can afford power anymore unfortunately. We are governed by idiots and taken advantage of by hyperscalers.

3 Likes

I’m really curious to see how the safe rewards will work. I’m afraid the world has changed since 2006, leaving your average desktop computer to run 24/7 may not be an option for many…

1 Like

It’s getting that way here too. Something like 30-40 cents (AUD) per kWh here, not as bad as where you are, but it isn’t the greens doing it; it’s state governments using their generators as a form of hidden taxation. Here states cannot tax people but can charge for services, so supplying electricity allows them to skim a lot of cream off the people indirectly.

It’s why I am going to explore using low-power devices (the Odyssey at the moment), which has decent capability and I/O, and even an RPi chip (& I/O) as a coprocessor. Its main processor is a 4-core Intel chip.

The reason for trying it is the NVMe & SATA connectors, the many I/O ports and the low power. The power brick is a 24-Watt max unit, meant to power the device plus one SATA hard drive, one SATA SSD and one NVMe SSD (4 lanes).

Much better than leaving a 32-core desktop PC running 24/7. And I can even use them for work when I don’t need to run the PC.
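The power saving is easy to put numbers on. Assuming the roughly 0.45/kWh rate mentioned above and an illustrative 150 W average draw for an always-on desktop (both figures are assumptions, not measurements):

```python
# Annual electricity cost: 24 W single-board computer vs ~150 W desktop.
price_per_kwh = 0.45     # assumed tariff, per the rates discussed above
hours_per_year = 24 * 365

def yearly_cost(watts: float) -> float:
    # kWh consumed over a year times the tariff.
    return watts / 1000 * hours_per_year * price_per_kwh

print(f"SBC: {yearly_cost(24):.0f}, desktop: {yearly_cost(150):.0f}")
# SBC ≈ 95/yr, desktop ≈ 591/yr
```

At those rates a low-power device pays for itself against a desktop in well under a year of continuous running.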

I am also going to investigate running Proxmox on it for a router, Pi-hole etc as well as nodes.

2 Likes

Notebooks/laptops with batteries are good, but you also need to back up the modem with a small UPS in case you get a blip, and hope your carrier provides at least surge protection or powers their own network separately from the house; otherwise your nodes go down. Or you run your house on aux power (diesel, or solar and maybe wind with batteries) and wire your PC to the pony panel that also runs your fridge, stove, freezer and furnace/AC, with batteries or a diesel generator as the prime mover for a spell. (Likely 4 days’ reserve is enough in most places, except Silicon Valley where 10 days in the countryside is required.)

One of the beauties I see with Autonomi over data centres is that you do not affect the network significantly if your rig goes down, removing 20-200 nodes. Yes, a bit of churning around the world, but not significant in the scale of things. For a data centre that has people from around the world using it, any downtime is extremely significant.

What happens with, say, a power loss or ISP downtime on a consumer-grade connection:

  • your nodes go down
  • you lose token earning power for the time of the outage
  • the network does a little churning
  • no one accessing or uploading data even notices

The same effects apply whether you have no data protection or RAID for your node storage.

The real question for home use is more:

  • how secure is my power?
  • how good is the ISP in terms of uptime?
  • am I that worried about losing earnings for that period compared to the long term?

Answering those will determine whether, personally, I would consider additional power protection necessary.

In addition, I do recommend surge protection as essential for power-company oopsie moments, lightning and other power spikes, especially in areas with regular electrical storms.

For me, I have a UPS (approx 1 hour of life) on my gear & the modem; the ISP has backup batteries in their nodes (about 15 minutes of life), although I do need to power the modem at my end. But I am not recommending others get one unless they consider the potential losses too great to tolerate. For me, it’s the loss of design time with power glitches.

2 Likes

So how does that work when my node goes down? If I recall correctly, the network makes sure there are 5 copies of each chunk. When my node goes down, some other nodes get to store the 5th copy of whatever chunks I had, but what happens when my node comes back up after some hours? Will there be 6 copies?

1 Like

This kind of question doesn’t have a precise answer while things like bad node detection and how to handle reboots and so on are being tested and discussed.