Scaling Bitcoin with SAFE

Wow, how come I didn’t think of that first!?

And how am I going to append validated blocks to your read-only copy of the blockchain?
Or are you suggesting a major architectural improvement to bitcoin, which does away with the unnecessary validation of network consensus by each full node, and makes it possible to have just one trusted person - such as yourself - inform us what transactions happened on the Bitcoin network?

That is a truly major breakthrough. I can’t believe thousands of full nodes are painstakingly duplicating the entire blockchain instead of reading a single Web copy! And all those unnecessary arguments about the maximum block size - with this innovation, it can truly be unlimited because your approach solves that issue too.

Why bother? Since we all get blocks from you, we can do away with all that full node software - that too becomes entirely wasteful, outdated and unnecessary. All each of us needs is a few hundred lines of JavaScript code (preferably stored online as well) to run our super lightweight wallet and we can get all the benefits of Bitcoin anytime, anywhere!

2 Likes

You would need 2 layers of P2P: 1 layer where you connect to full nodes on the Bitcoin network, the other to connect to SAFE. As @janitor mentioned, all nodes on bitcoin need to verify every block and every transaction. So you get new blocks on the P2P part from Bitcoin, check them and then store each block on your SAFE Drive. But next time you restart your computer, the bitcoin software needs to go back a number of blocks to validate and get up to speed with all the new blocks. Again, you need a connection to the bitcoin p2p-network and the SAFE Network. Why on earth would we want that? I think when people experience the speed of Safecoin, they won’t want to use bitcoin any longer. Or maybe keep a cold wallet somewhere in the hope the price of this “digital gold” will go through the roof sometime.
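
Very roughly, each node’s loop would be something like the sketch below - assuming the SAFE drive shows up as an ordinary folder; the mount path and the helper functions (get_next_block, validate_block) are placeholders, not real APIs:

    # Rough sketch of the two-layer idea. The mount point and both helper
    # functions are placeholders; every node still does its own validation.
    import os

    SAFE_DRIVE = "/mnt/safe/bitcoin/blocks"   # hypothetical SAFE drive mount point

    def get_next_block(peer):
        """Placeholder: fetch the next block from a Bitcoin P2P peer."""
        raise NotImplementedError

    def validate_block(block):
        """Placeholder: full consensus validation, done by every node itself."""
        raise NotImplementedError

    def sync_one_block(peer, height):
        block = get_next_block(peer)                     # layer 1: Bitcoin P2P
        if not validate_block(block):
            raise ValueError("invalid block")
        path = os.path.join(SAFE_DRIVE, "blk%06d.dat" % height)
        with open(path, "wb") as f:                      # layer 2: store via SAFE
            f.write(block)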

2 Likes

I see I needn’t have bothered trying to be helpful.

There seems to be a misunderstanding about storage vs retrieval when it comes to de-duplication. What you try to store may already exist, but it has no bearing on what you retrieve.

That’s not true. But trying to be helpful isn’t the same as being helpful.

The problem is if you don’t know how bitcoin (or SAFE) works, then you can’t be helpful.
I tried 3 times to talk some sense into you and the other guy, @polporene tried too, and you still don’t get it.

And no, there is no misunderstanding when it comes to de-duplication.

The whole point of bitcoin is that full nodes must most decisively not rely on a 3rd party to tell them what transpired on the blockchain. And secondly, from release notes for the current version (released months ago):

This release supports running a fully validating node without maintaining a copy of the raw block and undo data on disk. … The user specifies how much space to allot for block & undo files. The minimum allowed is 550MB.

So, in case you didn’t know, there’s something called disk pruning, which allows full bitcoin nodes to optionally automatically delete unneeded blocks (based on their own definition of “unneeded”) and in the most extreme case one needs only 550 MB of disk space to run a full node.
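
For reference, enabling it takes a single line in bitcoin.conf (550 being the minimum target size in MiB, per the release notes quoted above):

    # bitcoin.conf - run a fully validating node while keeping only ~550 MiB of block data
    prune=550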

550 MB is nothing by today’s standards (even 50 GB, the entire blockchain, isn’t a big deal - a microSD card can host the OS, the full node software and the entire bitcoin blockchain).

Therefore the “solution” proposed in this topic is undesired, unworkable, and unnecessary.

Are you suggesting that safe net will be prone to data manipulation/corruption by a third party? That would be the only way a 3rd party could change said history.

A bitcoin node will read/write its own data to a safe drive, just as it would to a local drive. Each node will still independently verify Bitcoin transactions, just as they do when saving to a local file.

1 Like

Because the Blockchain is more than Bitcoins.
If we focus on the currency aspect, yes, SafeCoin is the closest cryptocurrency to cash and I also think that eventually SafeCoins will replace Bitcoins.
But for projects that exploit the Blockchain as a permanent ledger the story changes, and this hybrid of SAFENet-hosted blockchains makes sense.

No, I am suggesting that the model you are suggesting relies on a trusted uploader, which is a ludicrous idea.

I already covered that scenario above: if all nodes have a local copy of the blockchain, what advantage do they get from making another copy on SAFE?

No, that is not what I am suggesting at all. Read what I wrote again.

De-duplication will make storing the full, complete blockchain cheap and no longer limited by local storage capacity.

You won’t need a local copy either, as outlined above. You read/write to your version of the blockchain on your safe net drive directly.

1 Like

@tfa is thinking about a modified client where the only local copy is the one in SafeNet. (Actually, a modified client might not even be needed; in the settings you simply map the location for the blocks to the SafeNet virtual drive.)
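
For example, if the SafeNet virtual drive shows up as an ordinary mount point, the stock client could just be pointed at it with its existing -datadir option (the path here is hypothetical):

    bitcoind -datadir=/mnt/safe/bitcoin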

We may think of two scenarios:

  1. The blockchain is publicly accessible and every node collectively helps to maintain a single, up-to-date blockchain. This would be terrible security-wise for the Bitcoin Network. I think @janitor’s argument about trusting an uploader refers to this scenario.
  2. Every SafeNet user keeps his blockchain private (and yet the copies are deduplicated in SafeNet). In this case nobody has to trust anybody, and if there is a bad player his invalid block would simply create extra private data. The rest of the good nodes would still get their blocks deduplicated, as all their blocks look the same (rough sketch below).
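
Rough sketch of scenario 2 - a toy model of content-addressed de-duplication (it ignores SAFE’s actual self-encryption; names and data are made up) showing that identical private copies collapse to a single stored chunk, while an invalid block just becomes extra data:

    import hashlib

    network_store = {}                    # address -> chunk; stands in for the network

    def put_chunk(chunk):
        addr = hashlib.sha512(chunk).hexdigest()   # address derived from content
        network_store[addr] = chunk                # identical content maps to one copy
        return addr

    # Two users privately store the same valid block: only one copy exists on the network.
    block = b"valid block 370000 ..."
    assert put_chunk(block) == put_chunk(block)
    assert len(network_store) == 1

    # A bad player's forged block overwrites nothing; it is just extra private data.
    put_chunk(b"forged block 370000 ...")
    assert len(network_store) == 2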

Am I wrong with this reasoning?

1 Like

This is correct: instead of writing the blocks to something like appdata/roaming/bitcoin, you write the blocks to your Safe drive. If thousands of nodes do this, then the network would only store something like 3 or 4 copies of all the blocks. But when you start your Bitcoin QT client, running the full node, you need to get all these blocks out of SAFE (GETs) to read them locally. That’s the thing with the Safe drive: it will show you that the files are there, but only when you access them (like clicking an mp3 file) is an actual GET request made to the SAFE network. So you start SAFE, do GETs for all the blocks, then the Bitcoin client loads them and syncs with the Bitcoin network. That’s not an easy thing to do.

It would be much easier to just run a local Bitcoin client. You need the blocks locally anyway. And you need to connect to the Bitcoin p2p-network as well.

As @polporene explained just above, that cannot work in any meaningful way.
The SAFE client cannot modify data on SAFE; that’s one of several reasons why there is a local cache (and if there were not, the bitcoin client would be unbearably slow).

Adding to his explanation, I’d also like to note that, in addition to keeping a local copy of the blockchain, your bitcoin client would work more slowly: compared to the usual Bitcoin download (and upload, but most people don’t upload) you would have the added workload of uploading incoming data to SAFE, which would eat into your available bandwidth and add load to your disk drive. Completely meaningless.

People pretend to be paranoid about security and privacy, but now we’re already talking about a modified bitcoin client! Who is supposed to make modifications and maintain that code?
(By the way, you can run a thin bitcoin client which utilizes public Bitcoin servers or third party API, if you want to save space and not run a full node).

Yes, that’s what has been proposed but it cannot work without the entire data having a local copy in SAFE cache, which negates all benefits of the idea and has some additional drawbacks (related to performance, cost, etc.)

1 Like

I got a question:
How does the data retrieval work? Let’s say that I have a 1GB file and only need to retrieve one chunk in the middle.
Does the network only retrieve the chunk I need or does it have to download the whole 1GB file?

If you have a 1GB file, it’s chunked into 1024 little chunks of 1MB each. I don’t know exactly about public data, but it seems to me that asking for just one chunk would be possible. Your client sees you have 1023 chunks already, and it will ask for the last one as well to make the file complete. GETs are done just that way: your closest nodes won’t have a clue whether you have the complete file or not. They will just give you a chunk whenever you ask for it, for free.
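
So a read in the middle of the file only needs the chunk covering that offset. A rough sketch, assuming a fixed 1MB chunk size (the real self-encryption chunking is more involved):

    CHUNK_SIZE = 1024 * 1024              # 1 MB per chunk

    def chunk_for_offset(offset):
        return offset // CHUNK_SIZE       # index of the single chunk to GET

    # Reading a few bytes halfway into a 1 GB file touches one of its 1024 chunks:
    print(chunk_for_offset(512 * 1024 * 1024 + 37000))   # -> 512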

Firstly, this wasn’t the criticism that you first levelled at me. You made a straw man argument and quite rudely knocked it down. A simple acknowledgement that you had misunderstood my assertion would make this debate more fruitful.

However, we are now on the same page and can hopefully discuss within this framed context, without the hyperbole.

As you are no doubt aware, it takes a long time to get a Bitcoin full node up to date. It requires downloading the blockchain and validating all transactions to date. Once this has been completed, your client only needs to validate new blocks when it is restarted.

So, what is needed to restore a full node on Bitcoin client start-up? The client has already verified the blocks up to the last time it was started, so it only needs to restore its previous state (in memory) to continue validating new blocks. We know this state is relatively small, as full nodes don’t need much RAM - a 2GB machine is adequate, including running the OS.

Serializing the content of RAM and restoring it would therefore be a rudimentary option. Running the node in a VM allows the whole OS to suspend to a file too.

So, do we need to read in the entire blockchain every time the client starts? I don’t see why. Moreover, the VM could also suspend to the safe drive, resulting in a relatively small download to unsuspend the VM.

Yes, it means also communicating over the network for storage I/O. This would be on top of the demands made by the client. However, @dirvine has already outlined his view on this - he asserts that broadband will become fast and cheap enough to ignore the need for dedicated local storage. In short, your local disk would become more of a cache than a place to store your files.

IMO, a 2GB or so download to boot a Bitcoin client is not obscene, especially if it is run for days/weeks at a time. Moreover, I don’t think that bandwidth would necessarily be an issue for I/O and Bitcoin traffic. Will it be slower? Sure, but will it be fast enough? Probably, on a decent connection.
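
A back-of-the-envelope figure (the 50 Mbit/s line speed is purely an assumption):

    size_gb = 2                               # state to pull down at start-up
    link_mbps = 50                            # assumed broadband speed
    seconds = size_gb * 8 * 1000 / link_mbps  # = 320 s, a little over 5 minutes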

Will it be cheap enough? If unmetered, probably - if de-duplication works well, the bandwidth cost would likely be the main overhead to consider, as storage would be near free.

Ok, there are many considerations, but fag-packet calculations look favourable to me, especially once the client is up to date. I also suspect that using a VM is wasteful and that better approaches could be found - pruning could play a role there too.

I wouldn’t be so quick to dismiss this. De-duplication could prove very cost effective here.

2 Likes

I am no doubt aware that you are wrong, and this is getting outright hilarious - how you incessantly display your ignorance making one wrong statement after another. Because you can’t/won’t just give up, you’re making your lack of understanding more obvious with each post.

By now it’s become clear that you have a poor understanding of both bitcoin and MaidSafe. I don’t like that I have to tell you this here, but it obviously must be done for this topic to be concluded.

From a previously shared link about Bitcoin Core v0.11:

Because release 0.10.0 and later makes use of headers-first synchronization and parallel block download (see further), the block files and databases are not backwards-compatible with pre-0.10 versions of Bitcoin Core or other software:
Blocks will be stored on disk out of order (in the order they are received, really), which makes it incompatible with some tools or other programs. Reindexing using earlier versions will also not work anymore as a result of this.

So now that your main premise - in addition to all the other ones - is clearly faulty, what now?

Quoting some minutiae which does not change the proposed solution, while insulting me, is not conducive to debate.

If you have a point, then make it. I have had enough of debating with your straw men and your attitude.

  • To add, obviously I didn’t mean downloading an actual blockchain file, but the blocks in the blockchain, from other nodes. I assume that is why you quoted the above.

Edit: Ofc, if blocks are now saved essentially randomly to disk, then that would work against de-duplication. However, if the Bitcoin devs wanted to optimize for de-duplication, they could structure the data in such a way as to encourage this.
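
A toy demonstration of why order matters for de-duplication (not how either system actually stores data): two nodes holding the same blocks in a different on-disk order produce different byte streams, hence different chunk addresses, so nothing de-duplicates:

    import hashlib

    blocks = [b"block A", b"block B", b"block C"]

    node1_file = b"".join(blocks)                              # received in order
    node2_file = b"".join([blocks[1], blocks[2], blocks[0]])   # received out of order

    print(hashlib.sha256(node1_file).hexdigest())
    print(hashlib.sha256(node2_file).hexdigest())              # differs -> no de-duplication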

  • We already established that all SAFE users would have to have a cached local copy on their system to use bitcoin
  • Out-of-order download in Bitcoin Core exists to make download faster and the size of blockchain less relevant
  • Pruning exists to make the local space required less than 1 GB
  • Next year several disk vendors will start shipping 16,000 GB (16 TB) SSDs

If bitcoin devs wanted to “optimize” for deduplication, it wouldn’t be for deduplication in general (because there are variable/sliding block sizes as well as compression techniques used by various other solutions which can deduplicate/compress the blockchain just fine), but only for MaidSafe deduplication.

The main performance and space-saving features from Bitcoin Core 0.10 and 0.11 would have to be undone (!), and users would have to go back to the days of Bitcoin Core 0.9, in order to help your weird use-case scenario (which purports to save space and improve performance) come to life. Sadly, you’ve even earned likes with that bizarre idea.

Well, these are Bitcoin changes within the last 7 months, which I was not aware of. It has been a while since I ran a full node.

Tbh, I don’t really understand your aggression. We are here to debate, to learn, to fill in gaps of knowledge, to help form ideas which move things forward.

If you had mentioned out of order blocks in recent Bitcoin clients at the start, we could have crossed that off the list.

We don’t have long to wait either way. It will be simple enough to experiment with when we have a full safe net to play with.

There’s an open question, which I have not studied, about what it means to use these new options.

Isn’t it true that bitcoin needs some nodes to be able to access all blocks, and that pruning will be very limited - i.e. to dropping the few blocks that are no longer needed in order to establish a new current balance? I have no knowledge of the implications, but it seems likely that there are compromises related to centralisation, and therefore risk, in order to achieve these economies in node resource use, and that the SAFE network might solve them in a way that avoids those compromises (and in fact gives greater decentralisation, based on the security of SAFE closed group consensus rather than bitcoin proof of work consensus).

So I think there are valid questions here. However, I think a better solution is likely to emerge that provides the ledger features enjoyed on the blockchain in some other form on the SAFE network directly, rather than as a hybrid as suggested in the OP.