I reckon they’ll get all the paperwork updated once the sprint to beta MVP is done. As we’ve seen over the years, too much can shift about, and then more and more rewrites take time away from development.
My simple advice is to be patient - we’ve waited this long, a bit more of a wait for full documentation won’t sink us.
@bzee, is the IGD feature of libp2p implemented in Safe Network?
I was tinkering with port forwarding when I remembered that my router supports IGD, and it would be fun to test it. Is there a specific test procedure, or does “testing” just mean turning off port forwarding and seeing if it works?
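For what it’s worth, the discovery step behind IGD can be poked at directly. Here’s a minimal Python sketch (not from the Safe Network or libp2p codebase; the function names are my own) that builds the standard SSDP M-SEARCH probe a UPnP client multicasts to find an IGD-capable router, and optionally sends it on the LAN:

```python
import socket

# Standard SSDP multicast endpoint used by UPnP discovery.
SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900

def build_msearch(search_target="urn:schemas-upnp-org:device:InternetGatewayDevice:1"):
    """Build the M-SEARCH datagram that asks IGD-capable routers to reply."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_ADDR}:{SSDP_PORT}",
        'MAN: "ssdp:discover"',
        "MX: 2",  # seconds a router may wait before replying
        f"ST: {search_target}",
        "", "",   # blank line terminates the request
    ]
    return "\r\n".join(lines).encode("ascii")

def discover(timeout=2.0):
    """Send the probe and collect any replies (empty list if no IGD answers)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_msearch(), (SSDP_ADDR, SSDP_PORT))
    replies = []
    try:
        while True:
            data, addr = sock.recvfrom(4096)
            replies.append((addr, data))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return replies
```

If `discover()` returns any replies, your router is advertising IGD on the LAN; whether the Safe node actually uses it is a separate question.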
Maybe it’s time to drag this topic out of the history again?
AI has moved on a lot since the beginning of the year. Anyway, if the lack of documentation or an API bugs you, have a quick look.
Here is something to consider as a revenue generator in the application layer, anchored on storage rental: renting shared, composable memory and CPU cycles per workload, à la the old distributed.net that served SETI, that is, using CXL. This is what gen AI will feed on: memory to store LLMs of many different flavours to serve chat AI client queries. Then AI governance jumps into the picture: how do we implement an AI leash on all of what is coming, to ensure the AI LLM sources can be trusted?
From my understanding, an account will have a special register holding the datamaps etc., and it is updatable.
If that is correct, then is there any protection if the user uses an infected machine which proceeds to encrypt the account register and demands the user pay ?xyz? to unlock the encryption?
Are previous versions of that register kept? For instance, is it really an appendable register?
Or am I misunderstanding this account register entirely? If not, then this could be a focal point for attackers and malware.
The register data type is versioned, so I think the personal Safe can be designed in a way that the user will always be able to access it, but let’s see what the design looks like.
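To illustrate why versioning helps against the ransomware scenario above, here’s a toy sketch (purely illustrative, not the actual Safe Network register API): in an append-only register, a malicious overwrite just becomes another version, and earlier versions stay readable:

```python
class AppendOnlyRegister:
    """Hypothetical sketch of a versioned, append-only account register.

    Entries can only be appended, never deleted or rewritten, so a
    ransomware write cannot destroy earlier history.
    """

    def __init__(self):
        self._versions = []

    def append(self, entry):
        """Add a new version; returns its version index."""
        self._versions.append(entry)
        return len(self._versions) - 1

    def latest(self):
        return self._versions[-1]

    def at(self, version):
        """Read any historical version."""
        return self._versions[version]

    def history(self):
        return list(self._versions)

reg = AppendOnlyRegister()
reg.append({"datamap": "v1"})
reg.append({"datamap": "ENCRYPTED-BY-MALWARE"})  # attacker's write
# The user can still read version 0 and recover their data:
assert reg.at(0) == {"datamap": "v1"}
```

The attacker’s “encryption” only adds a bad latest version; recovery is just reading back an older one.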
AI/ML data-source whitelists controlled by the end user are what is missing. The ML processes training LLMs to improve GenAI prompt results don’t allow for it; you are left with the data-source selections made by others with their own agenda. This is the missing AI leash. Adding this capability to the Safe Network creates a big differentiation. I can whitelist and blacklist which ads I see in the Brave browser; I need the same capability for the LLMs feeding results to GenAI prompts. The Google Gemini gaffe proved the need for such white/black listing of data sources, which will force competing GenAI + LLM offerings to let the user source/subscribe to different MLs tapped into whitelisted data sources feeding those same LLMs.
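The allow/deny-list behaviour being asked for is simple to express. A hypothetical Python sketch (the function name and matching rule are mine; nothing here exists in any real GenAI stack):

```python
def filter_sources(sources, whitelist=None, blacklist=None):
    """Keep a source only if it passes the user's lists.

    Rules (illustrative): the blacklist always wins; if a whitelist is
    given, only whitelisted sources survive. Matching is by exact name.
    """
    whitelist = set(whitelist or [])
    blacklist = set(blacklist or [])
    kept = []
    for src in sources:
        if src in blacklist:
            continue
        if whitelist and src not in whitelist:
            continue
        kept.append(src)
    return kept

kept = filter_sources(
    ["a.example", "b.example", "c.example"],
    whitelist=["a.example", "c.example"],
    blacklist=["c.example"],
)
# -> ['a.example']
```

The interesting design question is where the filter runs: client-side (as in Brave) it only shapes what you see, whereas applied at training/retrieval time it would actually constrain what the LLM is fed.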
My wishlist is that the account data cannot be deleted, only updated. This prevents any malware on a machine from locking up a user’s account and demanding a ransom to unlock it (ransomware), as well as, of course, just destroying the account. That machine could be in a library or another community space, or it could be a mischievous friend/sibling/etc. who wants to mess with the person’s account while they are away from their PC.
The only exception I could see is if the user wanted to remove something, and that would require the user’s passphrase to be entered, of course.
But maybe a better method than removing something would be to have functionality for a user to “migrate” their account over to a new account and be able to select what to migrate and what not to. That removes the need to delete and allows selective removal of history when transferring to a new/other account. I can see that being important to many people who wish to pivot their lives and/or have a second account that is a subset of their main or original account.
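The migrate idea could look something like this sketch (hypothetical shapes; the real account register would not be a plain dict):

```python
def migrate_account(old_register, selected_keys):
    """Copy only the chosen entries into a fresh account register.

    The old register is left completely untouched, which matches the
    no-delete rule: migration selects, it never removes.
    """
    return {k: v for k, v in old_register.items() if k in selected_keys}

old = {"photos": "datamap-1", "mail": "datamap-2", "notes": "datamap-3"}
new = migrate_account(old, {"photos", "notes"})
# "mail" stays behind in the old account; nothing is deleted from it.
```

So “removal” is really just choosing not to carry something forward, which keeps the append-only guarantee intact.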