The inner workings - Overview
Storage architecture
Event sourcing
Being an event sourced system means that every change to the filesystem is recorded as an event. These events are encrypted and stored in a local SQLite database, as an append only log a.k.a. WAL (write ahead log), and subsequently, in a transaction, applied to an in-memory representation of the filesystem. This means that as you work with data on the drive, reading and writing it, it is manifested in the in-memory filesystem. This also leads to some limitations that we will cover further down.
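To make the flow concrete, here is a minimal sketch in C#. All type names (DriveEvent, WriteAheadLog, InMemoryFileSystem, IEncryptor) are illustrative stand-ins, not the actual SAFE.NetworkDrive types, and a simple list stands in for the SQLite table:

```csharp
using System.Collections.Generic;

public abstract class DriveEvent { }                      // e.g. FileCreated, FileWritten, ...
public sealed class FileCreated : DriveEvent
{
    public string Path { get; }
    public FileCreated(string path) => Path = path;
}

public interface IEncryptor { byte[] Encrypt(byte[] plain); }

public sealed class InMemoryFileSystem
{
    readonly HashSet<string> _files = new HashSet<string>();

    public void Apply(DriveEvent e)
    {
        if (e is FileCreated c) _files.Add(c.Path);       // mutate the current state
        // ... handle the remaining event types here
    }
}

public sealed class WriteAheadLog
{
    readonly List<byte[]> _entries = new List<byte[]>();  // stand-in for the SQLite table
    public void Append(byte[] encryptedEntry) => _entries.Add(encryptedEntry);
}

public sealed class Drive
{
    readonly WriteAheadLog _wal;
    readonly InMemoryFileSystem _state;
    readonly IEncryptor _encryptor;

    public Drive(WriteAheadLog wal, InMemoryFileSystem state, IEncryptor encryptor)
    {
        _wal = wal;
        _state = state;
        _encryptor = encryptor;
    }

    // Persist first (durable, encrypted, append-only), then apply to in-memory state.
    public void Handle(DriveEvent e, byte[] serializedEvent)
    {
        _wal.Append(_encryptor.Encrypt(serializedEvent));
        _state.Apply(e);
    }
}
```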
One of the reasons for storing into a WAL, and building the current state as an in-memory filesystem, is to get minimal latency when working with the network drive; the aim is to make it feel as snappy as if it were a local drive. Another benefit of this WAL approach is that after the initial connect, you can be offline without noticing any difference, and your changes will be synced to the network as soon as the connection is back.
The event sourcing, and the perpetuity of the data in the network, also mean that you would be able to reconstruct your drive as it looked at any point in history, just by replaying the events, change by change - i.e. restore it to a previous version.
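To make that concrete, here is a minimal sketch of such a replay, reusing the illustrative types from the sketch above (the sequence numbering is an assumption):

```csharp
using System.Collections.Generic;

public static class History
{
    // Rebuild the drive as it looked at a given version by replaying events in order.
    public static InMemoryFileSystem ReplayUntil(
        IEnumerable<(ulong SeqNr, DriveEvent Event)> log, ulong version)
    {
        var fs = new InMemoryFileSystem();
        foreach (var (seqNr, e) in log)
        {
            if (seqNr > version) break;   // stop at the requested point in history
            fs.Apply(e);
        }
        return fs;
    }
}
```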
Event synchronization
A background job detects activity on the drive, and as soon as you leave it idle for a few moments, it starts synchronizing the events and the content to SAFENetwork.
If your machine were to go down, you won't risk losing any changes, as the WAL is kept encrypted locally, and will continue synchronizing to the network on next start.
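A rough sketch of what such an idle-triggered background job could look like (the 5 second threshold and the polling approach are assumptions for illustration, not the actual implementation):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public sealed class SyncJob
{
    readonly TimeSpan _idleThreshold = TimeSpan.FromSeconds(5);  // assumed value
    long _lastActivityTicks = DateTime.UtcNow.Ticks;

    // Called by the drive whenever there is read/write activity.
    public void NoteActivity() =>
        Interlocked.Exchange(ref _lastActivityTicks, DateTime.UtcNow.Ticks);

    // Polls for idleness; once idle long enough, pushes pending WAL entries.
    public async Task RunAsync(Func<Task> syncWalToNetwork, CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            await Task.Delay(TimeSpan.FromSeconds(1), ct);
            var last = new DateTime(Interlocked.Read(ref _lastActivityTicks), DateTimeKind.Utc);
            if (DateTime.UtcNow - last >= _idleThreshold)
                await syncWalToNetwork();   // upload events and content to SAFENetwork
        }
    }
}
```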
The events are stored into StreamADs (appendable data) of the recently presented SAFE.AppendOnlyDb project. If the written content of a file is larger than what fits into a slot in a StreamAD, it will instead be stored as immutable data, and the datamap stored to the StreamAD.
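The size-based decision could be sketched like this; the slot size constant and the interface shapes are assumptions for illustration:

```csharp
using System.Threading.Tasks;

public interface IStreamAd { Task AppendAsync(byte[] value); }
public interface IImmutableStore { Task<byte[]> StoreAsync(byte[] content); } // returns the datamap

public static class ContentWriter
{
    const int MaxSlotSize = 1_000_000; // illustrative; the real slot limit may differ

    public static async Task StoreAsync(IStreamAd stream, IImmutableStore blobs, byte[] content)
    {
        if (content.Length <= MaxSlotSize)
            await stream.AppendAsync(content);                         // fits in a StreamAD slot
        else
            await stream.AppendAsync(await blobs.StoreAsync(content)); // store as immutable data, append the datamap
    }
}
```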
The SAFE.AppendOnlyDb is an infinitely expanding data structure, which uses common indexing techniques to give you good access times to your data, even as it grows very large.
As you use a drive, events are produced, and a history of all the changes builds up. Any time you connect to the network, from any device, you will download this log - without the actual data - and build up the folder and file hierarchy locally. Using a technique called snapshotting, this will be a rather small amount of data and a fast download, regardless of for how long, and with how many changes, you have used your drive. It would take a very large folder tree with a huge number of files to make this initial synchronization notably slow. (But naturally, the limit to how large this folder hierarchy can be - without the actual file content, remember - is bound by how much working memory your machine has.)
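A sketch of what loading from a snapshot could look like, reusing the illustrative types from above (the Snapshot shape is assumed):

```csharp
using System;
using System.Collections.Generic;

public sealed class Snapshot
{
    public ulong Version { get; set; }
    public InMemoryFileSystem State { get; set; }
}

public static class DriveLoader
{
    // Start from the latest snapshot of the hierarchy, then apply only the
    // events recorded after it, instead of replaying the full history.
    public static InMemoryFileSystem Load(
        Func<Snapshot> fetchLatestSnapshot,
        Func<ulong, IEnumerable<(ulong SeqNr, DriveEvent Event)>> fetchEventsAfter)
    {
        var snapshot = fetchLatestSnapshot();
        var fs = snapshot.State;
        foreach (var (_, e) in fetchEventsAfter(snapshot.Version))
            fs.Apply(e);                  // catch up with the few recent events
        return fs;
    }
}
```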
The actual content of files is downloaded on demand as you access it, whereupon it is cached in-memory while you use it. (Cache eviction is still on the todo-list.)
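Conceptually, the on-demand download with in-memory caching boils down to something like this (illustrative, not the actual cache implementation):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public sealed class ContentCache
{
    readonly ConcurrentDictionary<string, Task<byte[]>> _cache =
        new ConcurrentDictionary<string, Task<byte[]>>();
    readonly Func<string, Task<byte[]>> _downloadFromNetwork;

    public ContentCache(Func<string, Task<byte[]>> downloadFromNetwork)
        => _downloadFromNetwork = downloadFromNetwork;

    // First access downloads the file content; subsequent accesses hit memory.
    // Note: nothing is ever evicted here, mirroring the current state of things.
    public Task<byte[]> GetAsync(string path) =>
        _cache.GetOrAdd(path, p => _downloadFromNetwork(p));
}
```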
Merge conflicts
You might be guessing by now that by choosing this strategy, we have traded complexity for speed: when the WAL is asynchronously uploaded to SAFENetwork, any changes that you (or a team mate, family member, etc.) might have made to the same drive from another device might lead to a conflict, which isn't detected until after you have happily continued your work as if the changes went through just fine.
This is a big area which will probably need most of my focus from now on. First, identifying all compatible changes that can be merged automatically. Second, identifying and implementing a strategy for dealing with the conflicting changes that cannot be automatically merged. This is not a new problem; on the contrary, it is quite a common problem today. So there will be plenty of resources to dig through, to see how they can most sanely be applied in this situation.
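One common strategy - shown here only as an example, not necessarily the one that will be chosen - is optimistic concurrency on the event stream's version:

```csharp
using System;
using System.Threading.Tasks;

public enum UploadResult { Ok, Conflict }

public static class Uploader
{
    // If another device has appended events since we last saw the stream,
    // the versions won't match and the upload is flagged for merging.
    // (The check-then-append here is not atomic; a real implementation
    // would need the network to enforce the expected version.)
    public static async Task<UploadResult> TryUploadAsync(
        Func<Task<ulong>> getRemoteVersion,
        Func<ulong, byte[], Task> appendAt,
        ulong expectedVersion,
        byte[] networkEvent)
    {
        var remote = await getRemoteVersion();
        if (remote != expectedVersion)
            return UploadResult.Conflict;
        await appendAt(expectedVersion + 1, networkEvent);
        return UploadResult.Ok;
    }
}
```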
Drive data handling
As a LocalEvent is produced, it is encrypted into a WAL entry, which is stored in a local db file (one per drive). Asynchronously, this log is worked down and uploaded to SAFENetwork, in the form of NetworkEvents. Unless you shut down the application before the last entry has been synced to the network, all local drive data is wiped as the application shuts down.
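That shutdown rule can be expressed as a tiny sketch (the sequence numbers and single-file wipe are illustrative assumptions):

```csharp
using System.IO;

public static class Shutdown
{
    // The encrypted WAL survives shutdown only if something is still unsynced;
    // otherwise the local db file is removed.
    public static void WipeIfFullySynced(ulong lastLocalSeqNr, ulong lastSyncedSeqNr, string localDbPath)
    {
        if (lastSyncedSeqNr >= lastLocalSeqNr)
            File.Delete(localDbPath);
        // else: keep the WAL; synchronization resumes on next start
    }
}
```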
Security and configuration data
There is currently a convenience approach to this, and there is room for improvement.
You create a user on your machine by providing a username and a password. The password is used to encrypt the user and drive configuration that you store locally on the machine, as well as the data in the local WAL db.
In your encrypted configuration file (one per user), you will store the SAFENetwork credentials to each drive.
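As a sketch of the general idea - a password-derived key encrypting the configuration - here is a PBKDF2 + AES example in C#. The concrete key derivation, parameters and cipher used in SAFE.NetworkDrive may well differ:

```csharp
using System.IO;
using System.Security.Cryptography;

public static class ConfigProtection
{
    // Derive a key from the password and encrypt the serialized config with AES.
    public static byte[] Encrypt(byte[] config, string password, byte[] salt)
    {
        using (var kdf = new Rfc2898DeriveBytes(password, salt, 100_000))
        using (var aes = Aes.Create())
        {
            aes.Key = kdf.GetBytes(32);   // 256-bit key from the password
            aes.GenerateIV();
            using (var ms = new MemoryStream())
            {
                ms.Write(aes.IV, 0, aes.IV.Length);   // prepend IV for decryption
                using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
                    cs.Write(config, 0, config.Length);
                return ms.ToArray();
            }
        }
    }
}
```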
It's certainly possible to go about this in some other way, for example using several drives per SAFENetwork account, or not storing the network credentials locally, etc. I'm fully open to ideas and requests, so as to craft the solution that is most desirable for personal or collaborative use.
Performance
My initial experience with this alpha version is that it actually does feel very snappy, thanks to basically being an in-memory drive. The write throughput to the network will primarily be restricted by your upload bandwidth, and secondarily by CPU and local implementation details, which I hope are sufficiently optimized for practical usage, but which could surely be improved otherwise. This is also something we will find out in better detail as it is being used.
Limitations
The local SQLite database for intermediate storage of WAL entries has a limit of 1 GB per row. I have currently not implemented splitting of larger files into multiple rows, so for now it can only handle files up to 1 GB. But this is a priority feature, so it will soon be able to take larger files than that.
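For reference, the splitting itself is straightforward; a hypothetical chunking helper could look like this (the actual implementation would also need to record chunk order and reassemble on read):

```csharp
using System;
using System.Collections.Generic;

public static class RowSplitter
{
    // Chunk a large file's content into pieces under the per-row limit.
    public static IEnumerable<byte[]> SplitIntoRows(byte[] content, int maxRowBytes)
    {
        for (int offset = 0; offset < content.Length; offset += maxRowBytes)
        {
            int size = Math.Min(maxRowBytes, content.Length - offset);
            var chunk = new byte[size];
            Array.Copy(content, offset, chunk, 0, size);
            yield return chunk;
        }
    }
}
```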
Being an in-memory filesystem also presents some challenges and limitations to how large files can be worked on at any given time. The available RAM restricts how large files can be handled at any time, and currently also how large a proportion of the filesystem you can access during a session, since there is currently no cache eviction. This too is a priority feature, so as not to put a limit on how much content can be accessed during a session.
Also, it currently only works with MockNetwork (which stores SAFENetwork data in a local file). Naturally, it will eventually be possible to configure which (real) network to connect to.
Cloud drive and file system framework
I was able to find a cloud drive abstraction, and a good implementation of IDokanOperations with related tests, that I could use as a base: the CloudFS project (GitHub - viciousviper/CloudFS: The CloudFS library is a collection of .NET assemblies as gateways to various publicly accessible Cloud storage services) and its companion (GitHub - viciousviper/DokanCloudFS: A virtual filesystem for various publicly accessible Cloud storage services on the Microsoft Windows platform). It provides virtual drives over various cloud storage providers, and was an excellent template for my work. It is much more generic than I had need for, as it is supposed to be able to handle any additional implementations of cloud storage providers. I'm just interested in one.
I have refactored the mentioned code, and used it in some new ways. It sits on top of the storage architecture described in the previous section (the event sourcing, WAL synchronization, local current state as in-memory filesystem, etc.). I have cleaned up a lot of unused functionality and updated the code base to fit with the newest C# features and my personal coding style. There are still a few parts of unused code to clean up. I can probably also do some architectural improvements and simplifications, since it was written to be very generic, and SAFE.NetworkDrive does not aim to be generic. There's no other storage provider needed when you have SAFENetwork.
Deeper dive
I'll post this for now. It would probably be nice to go even further into the implementation details, with code examples, as well as some visual representations, but I'll do that in another post in that case.