So this would be a censored network with all data unencrypted and public? What advantages would it have over the current design?
We don’t use hardware RAID. We actually use cheap DC-grade NVMe SSDs, because our LKM runs the FTL (Flash Translation Layer) in memory. We don’t deploy RAID 10, it’s too expensive and wasteful; we typically deploy RAID 5 and RAID 6 in software, because it’s faster than standard hardware RAID and also much faster than stock Linux software RAID.
The problem with hardware RAID solutions these days is expense. Not so much the initial hardware purchase; it’s the recurring service/support contract, which runs 22-25% of the purchase price annually for 5 years. That includes media replacement, which is needed with these solutions in OLTP setups: 4K block size, 50/50 read/write, IOPS-heavy use cases.
Even at 70/30 read/write, wear comes fast on SSDs, especially with ZFS because of its excessive write amplification. So the vendor/integrator/reseller does advance replacement of drives and hopes your actual read/write mix is only 80/20, so they make a lot of money on support (they don’t have to roll trucks or send a service firm/person to replace SSD media).
Hardware vendors also routinely upsell the buyer on de-duplication software features in their hardware, especially to accounts using ZFS, because the write amplification is so bad that otherwise you need de-dupe to claw back a lot of wasted/stranded space.
We know that fully priced hardware RAID solutions over 5 years typically come to 35X-40X the NVMe SSD media cost, even after your discount. AWS is 50X NVMe SSD media cost, given their io1 pricing for fast storage.
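The support-contract math behind those multiples can be sketched quickly. The 22-25% annual support rate is from the text; the purchase price and raw media cost below are made-up illustration values, not real quotes:

```python
# Rough 5-year TCO sketch for a hardware RAID solution, using the
# 22-25% annual support rate quoted above. Purchase price and raw
# NVMe media cost are hypothetical illustration values.
purchase_price = 100_000          # hypothetical discounted hardware price ($)
annual_support_rate = 0.25        # 22-25% of purchase price, per year
years = 5

support_total = purchase_price * annual_support_rate * years
tco_5yr = purchase_price + support_total

raw_media_cost = 6_000            # hypothetical cost of the bare NVMe SSDs ($)
multiple = tco_5yr / raw_media_cost

print(f"5-year TCO: ${tco_5yr:,.0f}")                 # support alone exceeds the hardware price
print(f"Multiple of raw media cost: {multiple:.1f}x")
```

With these illustrative inputs the support contract alone costs more than the hardware, and the total lands in the 35X-40X-of-media range the text describes.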
Choice of a DAS (Direct Attached Storage) file system is important, i.e. Oracle will try to sell you their ZFS solution, as will others. NFS v3/v4 is not much better when it comes to write amplification; they disguise performance with an in-memory cache for the first 15 minutes, until the cache fills up. That said, if your traffic is light, then NFS will work. In my book, ext4 is still a better choice, and it is dynamically expandable on Linux.
btrfs is also expandable, with less write amplification than NFS, and bcachefs is also quite fast with modest write amplification. XFS + RoCE RDMA (or iWARP for Windows) is quite fast over the SAN and is typically what we deploy in SAN scenarios.
I hope this helps.
The AU government forbids its databases from being stored off AU soil. Imagine the large number of computers sitting on every desk in the government departments, all running one or more nodes on a Federal Government private network. They cannot use Autonomi as-is, since it violates the rule about where the data is stored.
Node size could be increased to better match the SOE computers being used, thus reducing connection overheads. Say 10 nodes per computer: that is millions of nodes, plenty big enough to be viable.
Then there are the savings on data centre costs and backup solutions. Backups can be reduced by 90%, since the network is its own backup, and if the replication count is increased it’s even better.
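A back-of-envelope check of those numbers (the desktop count and backup budget are hypothetical illustrations; 10 nodes per machine and the 90% reduction are the figures quoted above):

```python
# Back-of-envelope sizing for a government-internal fork.
# The desktop count and backup budget are hypothetical; 10 nodes per
# machine and the 90% backup reduction come from the text above.
desktops = 400_000                # hypothetical number of SOE desktops
nodes_per_machine = 10
total_nodes = desktops * nodes_per_machine
print(f"Total nodes: {total_nodes:,}")   # millions of nodes, as claimed

backup_budget = 1_000_000         # hypothetical annual backup spend ($)
reduction = 0.90                  # the network acts as its own backup
remaining = backup_budget * (1 - reduction)
print(f"Backup spend after reduction: ${remaining:,.0f}")
```

Even a mid-sized department fleet clears the "millions of nodes" bar comfortably at 10 nodes per machine.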
That is one type of fork I can see happening.
A library network between all libraries could save a lot on storage costs, and built-in special authentication could allow lending times to be adhered to and help prevent abuse.
I just realized it should go back to supporters instead!
a) you can choose your group - and rules
b) freely sharing unused resources beats a centralized server with payments (similar to torrents)
c) no IP logging (network users and moderators can remain anonymous if they want)
d) no crypto nonsense
I don’t envision a replacement; I want the network with a native token too. But an alternative for people who want to share stuff within groups of specific interests, without the involvement of crypto (which limits the ability to post, both by cost and by usability), could be implemented sooner and with less code and complexity, for those who want to experiment.
IOTA is the one to look at for studying how a DAG should work; they have PoW settlement speed. I followed that project for years, then IOTA made a hard left turn away from IoT into Smart Contract land a few years ago… big mistake. Still, their DAG tech is advanced, mature, and fast.
IOTA is a SUI fork (they ditched their old tech for the most part).
It’s still active; the SUI part runs in parallel. That said, I dropped IOTA about four years ago, after the core team broke up and they started talking about Shimmer and Smart Contracts and PoS; they were pivoting, way too late, into a space that did not match their skill sets. There is a core group of companies and others keeping the original DAG IoT network alive, though they don’t promote it much; Bosch is one of them. Plus there is a WEF odor. Here is grok.com’s take on the current DAG+PoW projects being maintained, developed, or in production:
Week 2 summary
Hello. Last week was mostly unproductive, since I was helping my friend on his house construction site. As the project funding is nearly finished anyway, this does not seem to be a problem.
Thanks to a generous donor’s $700 (yes, this seems to be a single person), enough work can be done to prepare the project for Impossible Futures. After these funds run out, I’ll work unfunded for a while as my personal contribution, until a working PoC suitable for IF voting is ready.
If you think that what is going on here has value, please consider supporting the project further, so it can reach a working state. It is not only an IF-oriented project, so even if we don’t make it into the top 12, we can continue development. My funding support address on Arbitrum is 0x708eEEC1126cC1Df9A7B60A890aD932360B6C46a . It also works on the Ethereum network. Thanks!
Also, I finished the IF project page draft; I hope you like it.
All the … are Bitcoin, Ethereum etc. private keys…
That’s a coincidence. They are all just 32-byte hex strings, but with completely different meanings. You could use a xorname as a privkey, but why would you do that?
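The point about identical encodings is easy to demonstrate: a private key and a xorname (an address in Autonomi’s XOR space) are both just 32 bytes, so the same hex string could stand for either; only context gives it meaning. A quick sketch:

```python
import secrets

# Any 32-byte value renders as a 64-character hex string.
raw = secrets.token_bytes(32)
as_hex = raw.hex()

# Nothing in the string itself says whether these bytes are a
# private key or a xorname; the interpretation is all context.
assert len(raw) == 32
assert len(as_hex) == 64
print(f"could be a privkey or a xorname: {as_hex}")
```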
I have a feeling that creating a native token on Autonomi is a very difficult thing to achieve, and that there are not many people in the world with the capability to make it happen. As much as I want to believe that the community can do it, I have a feeling it needs a David Irvine or some other mathematician/economist/theoretical physicist, or someone similar, to make it happen.
David himself had an opinion that it could be anyone:
Week 3 summary
This was the week of Impossible Futures program focus. I prepared a project page and updated the documentation. As the voting showed until it was interrupted, the project has huge community support, which I’m grateful for. I’m convinced that we’ll make it to the next phase.
Unfortunately, the funds are finished, so I’ll now share my focus with other projects that I do in my spare time. Probably some paid job will also be on the horizon, but I’ll try my best to keep going with the Community Token. Until, of course, I get funded again.
I’ve finished a basic operation flow: token creation, balance check, and publishing a transaction. This needs refactoring, and I’ll also try to create a diagram of the process.
What I can do, though, is share the debug output of the process:
EVM Address: 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266
ANT Balance: 2499999999999999999999964
Gas Balance: 9999997393213964937596
Issuer BLS SecretKey: 5202(...)
Issuer Derived PublicKey: a7f0(...), PublicKey(07f0..f5f7)
TokenInfo Chunk: f54d82d5c39533fd64addcb945b23d5d4be634d33f41af18fbf4ab342d9ed7f5
Genesis GraphEntry: b088e9dabeaa29990b287faba5eac60ad89bdd61dde054ec0eac4fca4e0f53e4c9987bc43f7109fd7d19d733e47877c6
Wallet: {DerivationIndex([1]): [(PublicKey(1088..6286), 1000000000000000000000000)]}
ACT Token issuer Balance: 1000000000000000000000000
Receiver BLS SecretKey: 2ce2(...)
Receiver Derived PublicKey: b413(...), PublicKey(1413..7335)
Wallet: {DerivationIndex([2]): [], DerivationIndex([1]): [(PublicKey(1088..6286), 1000000000000000000000000)]}
Rest BLS SecretKey: 3017(...)
Rest Derived PublicKey: 896e(...), PublicKey(096e..1a4a)
Wallet: {DerivationIndex([2]): [], DerivationIndex([3]): [], DerivationIndex([1]): [(PublicKey(1088..6286), 1000000000000000000000000)]}
Ipnut: (PublicKey(1088..6286), 1000000000000000000000000)
Inputs: ([PublicKey(1088..6286), PublicKey(1088..6286)], 1000000000000000000000000, false)
Spend GraphEntry: a7f03ccc59dba0af0951217ffbe45f751403879077c94bc04fd01ae1acf49747460f3883d13e50acc0dec085f2305884
Wallet: {DerivationIndex([2]): [(PublicKey(07f0..f5f7), 200000000000000000000)], DerivationIndex([3]): [(PublicKey(07f0..f5f7), 999800000000000000000000)]}
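For reference, the large integers in that output are in the token’s smallest unit. Assuming 18 decimal places (the usual ERC-20 convention, which the figures above are consistent with), the final wallet state decodes like this:

```python
# Decode the wallet balances from the debug output above, assuming
# 18 decimal places (the usual ERC-20 convention). The raw integers
# are copied verbatim from the log.
DECIMALS = 10 ** 18

minted   = 1_000_000_000_000_000_000_000_000   # genesis issuer balance
receiver =       200_000_000_000_000_000_000   # DerivationIndex([2])
rest     =   999_800_000_000_000_000_000_000   # DerivationIndex([3]) change

print(f"minted:   {minted // DECIMALS:,} tokens")    # 1,000,000
print(f"receiver: {receiver // DECIMALS:,} tokens")  # 200
print(f"rest:     {rest // DECIMALS:,} tokens")      # 999,800
assert receiver + rest == minted   # the transfer conserves total supply
```

So the flow shown is a transfer of 200 tokens out of a 1,000,000-token genesis supply, with the remainder returned as change.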
It is, but without it the job is simply not finished. A native, non-blockchain token as a means of paying for storage was a crucial part of the network design and the very core of its independence. Whether I like it or not, until a native token is live, the original vision has not been achieved.
Hence I’m voting for this one. Can loziniak do it by himself? I don’t really care. He kickstarts it, maybe discovers some dead ends or possibilities to (not) build upon. I don’t care who starts or finishes it. I just believe loziniak will give it an honest shot and will document the work well enough for others to take heed. Good enough for me.
- Bridge smart contract to burn tokens on the Arbitrum side, with an option to upgrade to 2-way operation
The token burn on Arbitrum is perhaps what concerns me about this design. Wouldn’t it be much better to freeze those ANT, even if it meant a bit more work? I believe an ANT burn could greatly hinder a future 2-way operation.
Yes, the tokens will be sent to the contract’s address, and the contract will be upgradable. So when the time comes, the code will change and tokens could be unlocked on the ARB side and locked on the ACT/NT side. How the locking of ACT would be done remains to be discovered.
Also, it would probably be nice if more people than just me kept keys for updating the contract, so I’m thinking about some multiparty Safe (the Gnosis one) scheme, like 3-of-5 for example.
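The 3-of-5 idea is just a threshold check: an upgrade executes only when at least 3 of the 5 registered key holders have approved it. A minimal sketch of that logic (signer names and the exact threshold are illustrative; a real Gnosis Safe enforces this on-chain with signature verification):

```python
# Minimal 3-of-5 multisig approval check, sketching what a Safe-style
# scheme enforces on-chain. Signer identities are purely illustrative.
SIGNERS = {"alice", "bob", "carol", "dave", "erin"}
THRESHOLD = 3

def upgrade_approved(approvals: set[str]) -> bool:
    """True if at least THRESHOLD distinct registered signers approved."""
    valid = approvals & SIGNERS          # ignore unknown signers
    return len(valid) >= THRESHOLD

print(upgrade_approved({"alice", "bob"}))            # 2 of 5 -> False
print(upgrade_approved({"alice", "bob", "erin"}))    # 3 of 5 -> True
```

The intersection with the registered signer set is what stops an attacker from reaching the threshold with keys that were never added to the Safe.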
Week 4 summary
While phase 1 voting of the IF program is almost complete, ACT is moving forward behind the scenes. The code is undergoing a massive refactoring, and the latest workflow (token creation, balance check, making a transaction) is being implemented in a test wallet GUI.
Thanks for your precious support in the IF program. The best reward for me was seeing the project in 1st place on the leaderboard for a day or two. I’ll never forget that. I’m hoping we’ll get into the first 12, but even if not, the work is worth doing.
I’m looking forward to phase 2 (backing), because that’s when people don’t fear losing their money when giving support. Although what I value most is genuine involvement, which cannot be measured by money invested. The statistic I like to watch is IF projects ranked by likes received, though our project has an unfair advantage there from its early start.
As you probably noticed, the daily updates have ceased, as there’s less going on and in fact there’s not much to add to the weekly ones. Also, as I’m spending less time on this and not using someone else’s money, I feel it’s better to concentrate on the code.