// Example of a simple SAFE Network Facebook-like app
// (placeholder code: the SafeApp calls below are illustrative, not a finalised API)
use safe_app::SafeApp;

struct FacebookApp {
    safe_app: SafeApp,
}

impl FacebookApp {
    pub fn new() -> Self {
        let safe_app = SafeApp::new();
        Self { safe_app }
    }

    // Create a post on the SAFE Network.
    // This would involve interacting with the SAFE API to store and retrieve data
    // using the appropriate data types and structures it provides; the exact
    // implementation depends on the SAFE API version and features.
    // For simplicity, placeholder calls are used to demonstrate the concept.
    pub fn create_post(&self, user_id: &str, content: &str) {
        let post_data = format!("User {}: {}", user_id, content);
        let post_address = self.safe_app.store_data(&post_data);
        println!("Post created! Stored at: {:?}", post_address);
    }
}

fn main() {
    // Initialize the SAFE Network Facebook-like app
    let facebook_app = FacebookApp::new();

    // Create a sample post
    facebook_app.create_post("user123", "Hello, SAFE Network!");

    // Other features such as user authentication, comments, likes, etc. would follow.
}
But seriously though…
If a chap decided on a budget of $60/month for geek-type subs, how would you allocate this?
$20/month on GPT Plus, $20 for cursor.sh and the rest in Hetzner/AWS/Digital Ocean for testnets or something else?
Cursor.sh is enough. Then publish the app on SAFE with an xor_url (name): the name can just hold the data_map for the app, so a register (to allow updates) and the rest of the app as chunks.
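In concrete terms that publish step might look roughly like the sketch below. Everything in it is assumed rather than real: SafeClient, upload_chunks, create_register and write_register are made-up stand-ins for whatever the actual API exposes; the point is only the shape of “chunks for content, register for the data_map”.

// Hypothetical client interface; the real Safe API will differ.
trait SafeClient {
    // Self-encrypt and store the bytes as chunks, returning the serialised data_map.
    fn upload_chunks(&self, bytes: &[u8]) -> Vec<u8>;
    // Create a mutable register for `name`, returning its xor_url.
    fn create_register(&self, name: &str) -> String;
    // Write an entry (here: the app's current data_map) into the register.
    fn write_register(&self, xor_url: &str, entry: &[u8]);
}

// Publish an app bundle: immutable chunks hold the content, a register holds
// the data_map, so the app can be updated later without changing its name.
fn publish_app<C: SafeClient>(client: &C, app_name: &str, app_bundle: &[u8]) -> String {
    let data_map = client.upload_chunks(app_bundle); // content -> chunks
    let xor_url = client.create_register(app_name);  // stable, updatable name
    client.write_register(&xor_url, &data_map);      // point the name at this version
    xor_url
}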
If there is a testnet running then it’s worth playing this game and seeing how far we can take it. I would advise baby steps, i.e.
File uploader/downloader (a rough sketch of this one follows the list)
Get recursive
Wallet app
Multisig file sharing
… lots of things
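For the first of those baby steps, a file uploader/downloader barely needs to be more than the following. Again only a sketch under assumed names: TestnetClient, put_file and get_file are not the real client API, just placeholders for whatever it ends up being.

use std::fs;

// Assumed stand-in for a real testnet client; the method names are invented.
trait TestnetClient {
    fn put_file(&self, bytes: &[u8]) -> String;   // returns the file's xor_url
    fn get_file(&self, xor_url: &str) -> Vec<u8>;
}

// Read a local file and push its bytes to the network, returning its address.
fn upload<C: TestnetClient>(client: &C, path: &str) -> std::io::Result<String> {
    let bytes = fs::read(path)?;
    Ok(client.put_file(&bytes))
}

// Fetch a file by xor_url and write it to a local path.
fn download<C: TestnetClient>(client: &C, xor_url: &str, dest: &str) -> std::io::Result<()> {
    let bytes = client.get_file(xor_url);
    fs::write(dest, bytes)
}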
and then move on to twitter/facebook etc. I think, though, we can do much better than those.
I don’t think delaying Safe to optimise any AI integration would be reasonable… but I do use LLMs in my daily programming excessively by now too… and I do see insane potential there too…
As David hinted… file sync, multisig / cryptography stuff, chat, information broadcast / social networks, and combinations of these are all basically well-understood programming tasks, and LLMs should do an awesome job supporting their implementation on Safe… and even the creation of new things that can only exist on Safe…
… Awesome stuff ahead for after launch, I’d think.
I’m very much in favour of seeing what these tools can do, but remain very skeptical of the claims being made, for example that you could hand over the implementation of something you specify as Facebook and quickly create an equivalent app to run on SN.
To create that on Safe Network is a complex task requiring a lot of understanding of the capabilities you want, and how to map these onto this novel architecture even before you design a solution.
I don’t doubt that you can use these tools to write code, but whether or not you get useful code, a useful application etc is going to depend a lot on the app’s complexity and the suitability of the architecture for the functionality required.
It’s like suggesting you could give an LLM the Safe Network white paper and expect it to deliver useful solutions to all the things that have kept MaidSafe busy for so long. In practice you cannot avoid the work unless you are implementing something that’s not too different from things that are well understood and have already been implemented on a similar infrastructure.
Despite my skepticism I am keen to see these ideas tested. If it’s as easy as is being suggested, I expect we shall see these apps beginning to emerge pretty much right now.
Yes, I looked a while back, but our API was/is not ready and it’s a bit of work to get it through. I am not comfy submitting an API that will change or is incomplete as of yet.
Fine-tuning is great, but I’d like to see these models become more discrete rather than just bigger and bigger one-size-fits-all. Updating them then becomes much easier as well, since you aren’t downloading so much at once.
I don’t know how this might be accomplished, but I suspect it’s possible and maybe some are working on it.
Being able to mix and match smaller, discrete LLMs to suit seems/feels like a better approach to me.
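Sketched in code, the mix-and-match idea is essentially a thin router in front of several small, specialised models. This is purely illustrative: SmallModel, Router and the handles() check are invented here, not any real framework.

// Purely illustrative "mix and match" of small models: a router hands each
// request to the first small, specialised model that claims it, rather than
// sending everything to one giant one-size-fits-all model.
trait SmallModel {
    fn handles(&self, prompt: &str) -> bool;  // e.g. "is this a coding question?"
    fn complete(&self, prompt: &str) -> String;
}

struct Router {
    models: Vec<Box<dyn SmallModel>>, // code model, chat model, crypto model, ...
}

impl Router {
    fn complete(&self, prompt: &str) -> Option<String> {
        self.models
            .iter()
            .find(|m| m.handles(prompt))
            .map(|m| m.complete(prompt))
    }
}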
RTM seems to be more associated with physical products being developed.
With s/w products it is more about listing them on stores. But with the current trend of releasing betas to stores, the RTM step is not really used, since it happens progressively from the beta.
Yes, I should have cut that whole blue section. It was just the first development cycle Google handed me. If I read it correctly, they use RTM as “release to marketing”, which still doesn’t fit with us, as I think marketing starts with beta?
Before online stores and the like, software had to be marketed in the physical realm, packaged into display boxes, with media pressed for distribution.
It’s only in recent years that this has reduced a lot for s/w.
You should check out what Tau.net is doing: logical AI rather than generative AI for software development through natural language. You don’t even need to be a coder. They are on the cusp of a testnet, and it’s another project that’s been going on as long as Safe. If this were live in production, how much faster could Safe be developed by utilizing it?