Anyone interested in joining an Autonomi Developer Co-op or startup?

I’ve been working on the side to figure out how I could do Autonomi app development full time instead of nights and weekends like I do now. With my savings and a little help from my friends, I now have enough cash to live for about two years without needing to hold a job. So I’ve decided: screw it, I’m quitting my dead-end job and going all in. I plan to polish Colony and make it as great as it can be, but I have several other ideas on the back burner that I’ll start working on in parallel.

Developing by yourself isn’t much fun, and it’s critical to have folks to bounce ideas off of and hash through problems with. I’m not sure if the other developers here are of the same mindset, maybe I’m just crazy (I’ve been called worse). I would like to form a kind of Autonomi developer co-op. We would pitch ideas to each other, decide which ones are the most promising in terms of path to revenue, and work together to drive them to completion. Not just developers, but marketing, project management, and UX/UI designer roles as well. Between the various projects on IF and the developers in the community, by combining forces we could make some absolutely epic applications and get real traction for the network. Together we could possibly get funding from the bamboo garden fund or other sources. Together it will be easier to coordinate with MaidSafe as they develop libraries.

This being crypto, we could simply split the revenue generated among our various addresses. Maybe we form a DAO to handle this. I’m unsure of the best path forward; that would likely be the first discussion. It just seems to me that organizing as a group would make us much more effective. Maybe we just form a startup and go the traditional route? No idea, I’m completely new at this. I’m just a man with a vision, determination, and the skills to get the job done. Hopefully that is enough :rofl:

Is anyone interested in this? If you don’t want to post here, feel free to DM me.

29 Likes

Great to hear this! I’m in a similar boat, as I work a 4 day week for one client, then spend the 5th day doing Autonomi stuff (AntTP, IMIM and anything else that floats my way). EDIT: Although, sometimes that 5th day becomes a me/family/diy day… depends! ha!

The plan is to increase that 1 day to many days as time goes on, but I’m not ready to make a bigger leap yet.

I think some direct funding from the foundation and/or IF-type incentives are a good way to fund key software infrastructure/apps. The bamboo garden fund must surely fit into that picture somewhere too.

As the network grows, along with token valuation, then I’m planning on leaning on that to dive in deeper too. That may take a bit of time though! :sweat_smile:

Anyway, good luck with going all in and I’ll certainly be about to bounce ideas off and potentially work together on stuff (where applicable). Same goes for anyone else in a similar boat.

13 Likes

How much is left though?

2 Likes

I’m unsure of the current balance. The address has been posted on here a few times in the past. Does someone have that handy?

1 Like

I think there could be a virtuous cycle for AUTONOMI / MAID holders too. Better network → more apps → more token value → more apps → better network… etc.

I was hoping to spend more time on Autonomi earlier in the year, but with the token price tanking, I had to put off those plans. I’m hoping the token will recover with the network and apps starting to grow nicely though.

5 Likes

I’m thinking the same. Once the price gets to a certain level I won’t have to worry about working anyway, then it’s just all for fun :smile: If we can kickstart this cycle by building useful apps and generating interest, I’m all for it.

7 Likes

All these addresses are empty.

[quote=“maidsafe, post:1, topic:34022”]

The fund has been initiated by a well-respected community member. Should anyone wish to, you can also support this fund by making payments to any of the addresses below, should you be in a position to do so:

Eth : 0x23231a18748bA63de9908331173894f343719a59

Btc : bc1qwkxjx3vd24ansc07jzcrdn99zr9nf8cverav6c

Omni/Maid : 1MLRhWfaMbKb8xxSoHwPgZhEdyxWrDmY7N

[/quote]

Seems I cannot make the format better, but here’s a link to the post:

Edit: there’s a new address which holds ETH:

0x525b6642263033584c2b03d7b9e215486EAb2486

3 Likes

Well done for even considering it! And a double well done and good luck if you go for it! ‘Balls in or not at all’ is a valid approach and avoids the whole ‘will they, won’t they’ rigmarole.

I’m facing a similar choice at some point to pursue an idea I have. I can think about it all I like but if I want to go for it there will come a time when I have to spend more time on it than a few hours here and there. That will be a leap of faith and a tough decision and I applaud anyone who even considers it. I’ve seen people wrestle with the decision to start their own business for months.

6 Likes

Yeah, very true.

I think identifying how long your runway is and what the potential revenue streams are is critical.

If you have a flexible employer, or are already self employed, you can slide the time scale, rather than having to go all in or out. Indeed, angling toward such a position is a good way to orient towards long term goals.

For me, I’ve more often than not been self employed to one degree or another. Sometimes perm, sometimes small business. If you can enjoy that balancing act, you can be ready to move when opportunity presents.

Ofc, it’s still hard to be ‘1 quarter in’ on the passion thing, while being ‘3 quarters in’ on the money maker, but I’d argue that is what being a professional is all about. You do your best regardless.

Kudos to folks going all in though. It is not only brave, but also a great reflection on the project and this community (warts and all! :sweat_smile:).

10 Likes

It’s a balancing act for sure when you’re a contracted tech person: one is always keeping an eye out for new opportunities, while juggling the contracted work and the ‘thing’ one really wants to do.

Me personally, I follow the ‘don’t do what Hitler did’ advice (he opened up the fourth front against the Soviets, and we all know how that turned out).

At any given time, I have at most three things going mainstream during the week; two is ideal. Everything else is on the back burner (the other two or three ‘suspects’ of interest share a half day on the weekend).

The current third ‘thing’, which I spend 20% of the week on, has been thinking about how to integrate canman nicely with Autonomi so as to give the node operator another revenue stream: renting ephemeral, noderunner-hosted system containers (LXC) at a minimum, and optionally renting apps plus system containers, with a distributed public marketplace (‘dpm’) on Autonomi that lets container landlords advertise and select these containerized capabilities in a Craigslist-like web page listing.

The skunkworks of the third ‘thing’, which I have labelled canman for now, definitely relies on other community members’ work (AntTP, dweb, Colony, etc.) to really function, plus a bit more glue. The dpm containerized capability hasn’t manifested yet, so part of my lift is figuring out how all that stuff can work together to build the dpm.

The LXC provisioning part with incus (LXD) and ansible I have working on this minimally resourced notebook, which is the ‘worst case’: it will still work on a gen-12 i7 with 16 GB RAM and a 1 TB hard drive (street price at the moment CDN $325, used and in good shape). This system is my daily driver and also runs 8 antnodes generically launched by Node Launchpad with the standard disk space allotment.

The idea is to let the noderunner of such a machine run, and/or rent out, a couple of LXC containers with modest CPU, RAM, and storage needs, all launched and managed via a modified canman web page attached to the existing web UI of incus, with some JS helpers so a mouse click tells incus what to do through that canman page. (The standard incus web UI is, well, not very user friendly, that’s for sure.)

The third thing is manifested by running a bash make script which installs itself as a bash daemon on Linux (currently on the Ubuntu 24.04.3 LTS host OS build). Thereafter the daemon’s ansible playbook runs idempotent tasks, keeping the supporting pieces updated with LTS FOSS: the launched incus daemon, the ansible playbook and verbose-logging daemons, and the micro web server daemons running in a private canman container (web UI, CLI, API, et al.).
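As a rough sketch of the provisioning step described above, a helper could assemble the `incus launch` command line with resource caps before handing it to a subprocess call. The container name, image alias, and default limits here are illustrative assumptions, not canman’s actual schema:

```python
# Sketch: build an `incus launch` argv for a resource-limited LXC container.
# Names and limits are illustrative, not canman's real configuration.
from typing import List

def build_launch_cmd(name: str, image: str = "ubuntu:24.04",
                     cpu_percent: int = 25, mem_gib: int = 2) -> List[str]:
    """Return the argv for launching a container with CPU/RAM caps."""
    return [
        "incus", "launch", image, name,
        "--config", f"limits.cpu.allowance={cpu_percent}%",
        "--config", f"limits.memory={mem_gib}GiB",
    ]

cmd = build_launch_cmd("canman-web", cpu_percent=20, mem_gib=1)
# On a host with incus installed, this would be run as e.g.
# subprocess.run(cmd, check=True)
```

Keeping the command construction separate from execution makes the step easy to log and to test idempotently, which fits the ansible-style workflow described above.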

The ‘dpm’ part of the ‘thing’, however, is only a solution architecture documented partly in my gdocs and partly in my paper notebooks, and that is about it… so it’s 20% a week for now, chipping away to stand that part up, and a lot of reading through whatever docs exist for the FOSS underpinnings. ;/

So yeah, a co-operative meeting once weekly to exchange thoughts on the best integration methods within a given ‘thing’ context would be hugely beneficial for everyone and their own ‘thing’.

In my case, the missing capability of my third thing is the dpm, which, as the solution architecture stands now, most certainly depends on AntTP, dweb, and Colony, along with tying in multiple existing or new wallets spending and accepting $AUTONOMI (and later the community/native token when that materializes).

That said, I am not even sure the ‘skunkworks’ solution architecture is reasonable. So I am hesitant to lift a finger writing (or AI-generating) any code to try to stand the third thing up.

(Those wallets, belonging to the noderunners, the container landlords renting out containers or containers+apps, and their renter/tenants, will also likely be of various wallet types.) :slight_smile:

So that’s my ‘Third’ Thing and why I am for the co-op.

Specifically, I could use the collective guidance of such a co-op to garner insights and tune the ‘dpm’ solution architecture into a form where I am confident enough to put in the time to generate the code of the third thing (canman). Then I could stand up a working version on Codeberg, let a few brave souls try it out, and do a bit more ‘market learning’ from PoC user feedback before taking any next steps.

Full disclosure: my other two things are contracted with CloudProx (a COO + product manager/market researcher set of roles and a PoC software appliance design/build role running in parallel), which currently suck up 80% of my week and keep the lights on, as I am paid a fixed rate in real (rapidly depreciating) CDN pesos.

6 Likes

The whole container idea sounds really awesome. I use ansible at home to manage my fleet of computers. I’m no expert, but I can talk the language a little :smiley: . I’ve been looking at the Internet Computer Protocol stuff to do smart contracts in Autonomi apps, and it’s cool, especially the reverse gas model, which could be really handy. But it seems very rigid (you need special hardware, need approval to run a node, etc.). Having a marketplace to spin up containers could be really useful for those cases where we need to run some kind of smart contract, and would enable users to make money to pay for network services. If I have a box in my house that runs 24/7, hosting nodes and running containers, it’s constantly earning income, while my network usage may only be a few hours a day, so it seems like I should be able to earn enough to handle anything a normal user would want to do. If you’re wanting to do something way overboard in terms of compute or data storage, well, you have to pay extra, but that’s to be expected.

So would these containers just be a VM on a single machine? I.e., if this is some kind of mission-critical thing I need to make sure completes, is there some redundancy in place? Another question: do I pay for the whole container for myself, or can I rent compute power to run a single script, for example? Basically I’m looking for the ability to pay per instruction and get redundancy across multiple machines (like ICP), but without the centralized control over node operations.

5 Likes

The idea is to support XOR addressing of the LXC system container by default (the noderunner selects a container type from a couple of canman-QA’d, LTS container OS types, i.e. Alpine for lightweight, Ubuntu, etc.), and also to set the IP route from the container to either public or private access by virtue of the IP address they select. The four-word routing and DNS concept would also be supported at some point.

Alternatively, the containers could be pointed inward, on the noderunner’s system or the noderunner’s local private network, using the incus shared eth0 bridge default setting, to run internal LXC system container clusters, possibly running back-end apps supporting front-end apps in containers that are either public domain or rented.

So any variety of HA redundancy setup is no problem, given one can choose to place load balancers in small private containers they set up.

To make the above easy, I am working on templates with set use case schemas, so one noderunner click will trigger an ansible-automated set of tasks to set up the infrastructure of LXC system containers, with some containers within that schema having ready-to-use, pre-configured apps, i.e. twin front-end and back-end load balancer (‘LB’) app containers that just work within the scheme selected.

The incus (LXD) ‘LB running in containers’ model performs well today in a pure IP-addressed container environment. The challenge here is that there are a couple of large noderunner topology configurations where a pure XOR-addressed container (facing only the Autonomi network) and a hybrid XOR + IP container, public (outward, ISP) or private (inward, local node network), will be required. That means network testing for reliability and performance (I/O and tail latency), which requires a couple of co-op developers to vet such configs.
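To illustrate the topology choice, here is a minimal sketch mapping an addressing mode to an incus NIC device config. The ‘public’/‘private’ cases follow standard incus bridged networking; the ‘xor’ case is a pure placeholder, since an Autonomi-facing container transport does not exist yet:

```python
def network_profile(mode: str) -> dict:
    """Map an addressing mode to an illustrative incus device config.
    'public'/'private' attach the default bridged NIC; 'xor' is a
    placeholder for a future Autonomi-facing transport and attaches
    no IP NIC at all."""
    if mode in ("public", "private"):
        return {"eth0": {"type": "nic",
                         "network": "incusbr0",
                         "name": "eth0"}}
    if mode == "xor":
        # No IP NIC; traffic would flow via an Autonomi client proxy
        # on the host (hypothetical, not implemented anywhere today).
        return {}
    raise ValueError(f"unknown mode: {mode}")

public_cfg = network_profile("public")
xor_cfg = network_profile("xor")
```

The point of the sketch is only that the hybrid cases reduce to which NIC devices a container gets; the hard part, as noted above, is testing reliability and tail latency of the resulting topologies.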

Really, the pricing part is to use Vultr and DO (DigitalOcean) as a point of comparison for renting just the IP public, ISP-facing system containers.

Where it gets more interesting for spurring Autonomi network growth and use is inward-facing XOR-addressed containers, both public and privately rented, where the apps are rented as XOR-containerized apps paid for in ANT/$AUTONOMI and, when it becomes available, NAT (the native Autonomi token).

2 Likes

[quote]
Another question, do I pay for the whole container for myself, or can I rent compute power to run a single script, for example? Basically I’m looking for the ability to pay per instruction and get redundancy across multiple machines (like ICP) but without the centralized control over node operations.
[/quote]

This is really FaaS, the Function-as-a-Service model: pay for the job’s compute and memory use over a period of time. It can be set up as a service within an LXC system container, say running Ubuntu 24.04.3, or across a cluster of containers running the same. That would mean deploying a FOSS FaaS front-end service app which could orchestrate and provision the FaaS job.

The monetization part would be to have canman integrate that existing FOSS FaaS job management app to accept payment for the job, and have the canman instance schedule the job, time and record the resources used by that job, then both send a report to the job submitter and restore and make available the OS or OSes used in the one or more containers canman had set up for the job.

All doable, blocking-and-tackling stuff; it just takes time to grind the quality into such a rentable FaaS app framework.
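A minimal sketch of the metering side: charge only for what the job actually consumed. The rates and usage fields here are invented for illustration, not a real canman billing schema:

```python
from dataclasses import dataclass

# Illustrative metering sketch; rates and fields are assumptions,
# not canman's actual accounting model.
@dataclass
class JobUsage:
    cpu_seconds: float        # CPU time actually consumed
    mem_gib_seconds: float    # GiB of RAM held, integrated over time
    storage_gib_hours: float  # ephemeral storage held, in GiB-hours

def job_cost(u: JobUsage,
             cpu_rate: float = 0.0001,
             mem_rate: float = 0.00002,
             store_rate: float = 0.001) -> float:
    """FaaS-style billing: pay only for metered use, nothing idle."""
    return (u.cpu_seconds * cpu_rate
            + u.mem_gib_seconds * mem_rate
            + u.storage_gib_hours * store_rate)

cost = job_cost(JobUsage(cpu_seconds=120,
                         mem_gib_seconds=240,
                         storage_gib_hours=0.5))
```

The same usage record that drives billing can double as the report sent back to the job submitter, which keeps the timing/recording step described above honest on both sides.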

Brave AI Assisted Search coughed this up on FaaS:

Function as a Service Meaning

Function as a Service (FaaS) is a cloud computing model that allows developers to run small, modular pieces of code, known as functions, in response to specific events without managing the underlying infrastructure. This model, often synonymous with server-less computing, enables developers to focus solely on writing code that delivers business value, as the cloud provider handles server management, scaling, and maintenance. Functions are typically stateless, meaning they do not retain data between invocations, and are designed to execute for a short duration, only activating when triggered by an event such as an HTTP request, a message in a queue, or a scheduled task. Once the function completes its task, the execution environment is shut down, ensuring resources are not wasted and costs are minimized, as users are charged only for the actual execution time of their functions.

AI-generated answer. Please verify critical facts.
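The stateless, event-triggered pattern described above can be sketched in a few lines of Python. The handler registry and event shape are illustrative, not any particular FaaS platform’s API:

```python
# Minimal FaaS-style dispatch sketch: stateless handlers triggered by
# named events; nothing persists between invocations.
HANDLERS = {}

def function(event_name):
    """Register a stateless handler for an event type."""
    def wrap(fn):
        HANDLERS[event_name] = fn
        return fn
    return wrap

@function("http_request")
def hello(event):
    # Runs only when triggered; the environment is torn down afterwards.
    return {"status": 200, "body": f"hello {event.get('name', 'world')}"}

def invoke(event_name, event):
    """The 'platform': look up the handler, run it once, return."""
    return HANDLERS[event_name](event)

resp = invoke("http_request", {"name": "Autonomi"})
```

In a real deployment the `invoke` step is what the provider meters and bills, which is exactly the pay-per-use property discussed in this thread.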


4 Likes

If it could take the form of a co-op, I’m all interested. Currently I don’t have a job, so there’s some free time to use. I hope to get one soon, but for now we can at least exchange ideas and maybe get funding, who knows. I’ve been thinking about such a thing for a long time.

As a simple starting idea: each of us has an IF project going on, so if we join forces, there are two people working together on two projects. It helps accountability, motivation, and knowledge exchange. More fun also :slight_smile:

7 Likes

Yes, absolutely agree! I think a lot of our stuff can be combined as well to make some really powerful applications.

I’ve got you on the list. So far I’m up to 7 who have shown some interest. I’ll set up a kickoff meeting here in a few weeks after IF finishes.

2 Likes

I like the FaaS idea a lot. At some point we have to tie into something that gives us that distributed compute side of the equation.

2 Likes

That’s fantastic.

I think it could be hugely productive for developers to cooperate closely with each other, plus for people with skills in optimising user experience, product design / management, marketing to chip in.

In the past I’ve toyed with ideas of cooperative-type things for general startups, but for Autonomi it makes particular sense at this time, so I hope this goes well.

I’d like to be involved as well when this gets started following IF.

10 Likes

I think to get the revenue ‘multiple’ working for a solo job in the context of a co-op FaaS service offered by participating Autonomi network noderunners, it means the job owner pays to execute the job, which runs in either:

- one noderunner-hosted container (i.e. canman/incus managed), or
- a cluster of containers running a bigger multi-core/multi-thread job, where additional containers can be requested and obtained for short-term use from other noderunner operators also participating in the co-op FaaS service.

The job owner, paying and uploading the job to one or more containers, should also have the option to store the result or data set created to the Autonomi network and, of course, download the job result from some in-memory temporary result container (or set of containers, if the output of the job is big, i.e. generating a synthetic data set, creating a movie from a job script, etc.).

All of this is very doable, built on top of the existing Autonomi network XOR-addressed framework.

Imo the real value add of such a co-op FaaS service dev effort would be to create a job uploader which makes use of the existing Autonomi network encryption, and to re-use the quote system from one, or in this case a FaaS co-op quorum of, noderunners that have joined a ‘service group’. Members of the FaaS co-op service group set up, reserve, and publish the availability of one or more containers for a FaaS service type (there will be variations of FaaS), where the container capabilities are ‘marketed’ by the noderunner, i.e. FaaS_member__systype__container (compute, temp_store nvme, temp_cache_DDR5, etc.), the CPU cycle % reserved, so much memory, and perhaps a certain amount of local ephemeral storage per the storage or cache type specified and offered by the noderunner.

N.b. the canman design currently has sqlite specified, so the noderunner could use that to set up their participating offer in such a FaaS, then publish their local marketplace page (say, using the FOSS NATS pub/sub broker, which can run in a private canman/incus-orchestrated container) to the other currently participating noderunners of the service group (they will come and go), so those participants can add the new ‘container available to rent, of type such-and-such’ listing to their own copy of the marketplace page. Something simple like that.
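Since sqlite is already specified in the canman design, a noderunner’s local offer table might look something like this. The table and column names are illustrative assumptions, not canman’s actual schema:

```python
import sqlite3

# In-memory DB for illustration; canman would use a file on disk.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE offers (
    member      TEXT,     -- noderunner identity
    systype     TEXT,     -- e.g. 'compute', 'temp_store_nvme'
    cpu_pct     INTEGER,  -- CPU cycle % reserved
    mem_gib     INTEGER,
    storage_gib INTEGER,
    price_ant   REAL      -- asking rent in ANT
)""")
db.execute("INSERT INTO offers VALUES (?, ?, ?, ?, ?, ?)",
           ("node-alpha", "compute", 25, 2, 10, 0.5))
db.commit()

# What a peer would merge into its own copy of the marketplace page:
rows = db.execute(
    "SELECT member, systype, price_ant FROM offers").fetchall()
```

Each noderunner publishing rows like these over a pub/sub channel, and merging what it receives, is enough to keep every participant’s copy of the marketplace page roughly in sync without a central server.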

canman is designed to run a Python Flask web server, which is lightweight, to handle the page display and web server URL pages. The latter, imo, should really be findable by anyone on the Autonomi network via the ‘four word’ DNS addressing that @dirvine is working on, so anyone on the network can check what container resources are available at any time and what the rental offerings cost, and for how long (sort of like Airbnb :slight_smile: ).

Then, once the compute ‘FaaS’ job is complete, the job uploader’s regular Autonomi network close group sees the uploader, who owns and pays for the job, optionally asking for a quote to permanently store the results of the job just completed. That result may be sitting in one or more result storage containers (memory cache or ephemeral disk volume), which will also have been advertised up front with a different quote price (it’s mainly just temporary storage, so the quote would be lower).

So the job owner will first need to peruse and select what is available from the HTML page, manually and/or programmatically (hence the need for a FaaS job uploader variant of DAVE?).

The other thing to add into the whole co-op FaaS workflow is ‘observability’: state data captured and recorded over time and placed in job-timestamped logs, so container performance can be captured and later used to generate noderunner ‘container landlord’ ratings/reputation (fast/slow, operated as advertised, no crash events, etc.).
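A toy sketch of how those job-timestamped logs could roll up into a landlord rating. The weights and log fields are assumptions for illustration only:

```python
# Illustrative reputation sketch: weights and log fields are invented,
# not part of any real canman or Autonomi scoring scheme.
def landlord_rating(job_logs: list) -> float:
    """Score 0..1 from per-job observability records: did the job
    complete, did the container crash, was latency within quote."""
    if not job_logs:
        return 0.0
    score = 0.0
    for log in job_logs:
        score += (0.5 * log["completed"]
                  + 0.3 * (not log["crashed"])
                  + 0.2 * log["within_latency_quote"])
    return score / len(job_logs)

rating = landlord_rating([
    {"completed": True, "crashed": False, "within_latency_quote": True},
    {"completed": True, "crashed": True,  "within_latency_quote": False},
])
```

Averaging over the full log history (rather than the last job) makes the rating hard to game with one good run, which matters if landlords come and go from the service group.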

So a story something like the above is a place to start, methinks, for such a magical co-op FaaS service dream :wink:

Co-operating on what would be a group project, and parceling out the different types of work within the co-op according to member availability, is really a co-op member product manager type of job. Then you have the lead service architect/framework developer, and then specific capability developers.

The QA of such a co-op service build effort, like co-op FaaS, imo should also have each member offer up some container capacity, so as to run a distributed automated test framework.

Set up something like the IBM STAF (Software Testing Automation Framework) that I used to use. We set this up back in the day at Surfkitchen in CH, hosted largely (ironically) in NA on VPSLand servers in TX in the mid ’00s, running it in multiple ESX VMs, with some of the testing running via STAX agents in VMs hosted on developer and QA engineer desktops to run distributed testing at night. (Our little test team called our STAF/STAX implementation the “Octopus” :slight_smile: )

IBM STAF is still FOSS and out there for baseline reference, for anyone interested. It works a lot like the old distributed.net project that SETI used to run.

fyi- There are emerging lightweight tools out there like dstack aimed at AI jobs as well that definitely offer some baseline reference for such a FaaS co-op service concept as to how ‘it’ should, and should not be done. :slight_smile:

3 Likes

I can’t say I understand all of it, but from what I can gather I think we’re on the same page :laughing: We’ll need to hash through the details and come up with a solid plan for how it integrates with everything else, but it looks like a lot of the pieces are already there; they just have to be stitched together. Like anything else, I suppose.

2 Likes

Other communities are using the IETF RFC-like format to get things started.

To get a co-op community approval process working for such RFC submissions, which to start needs one well-formed story as the first submission (Grok can shape my ramblings into such a form), there first needs to be a working submission setup with a voting system like Gov4Git. Both can be set up on GitHub.

(Ideally I would personally like to see this work on Codeberg, because I can’t stand MS sitting there with a censor mallet, ready to whack you whenever they want ;/ )

Specifically, re: Gov4Git, I have done quite a bit of recent research on how to adapt it to regular public voting models. Here is the public link to that early work, focused on fixing the Canadian voting system to be more transparent and inclusive (and also quantum-attack proof).

https://grok.com/share/c2hhcmQtMg%3D%3D_698d87f5-5f83-4a21-bd08-3e32f4de8270

Gov4Git has quadratic voting, which keeps the minority position from being marginalized by mob democracy, so imo it’s a really good model to use for RFC submissions to the co-op.
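The quadratic voting mechanic itself is simple: casting n votes on one proposal costs n² credits, so registering strong preference on one issue is expensive while spreading votes thinly is cheap. A tiny sketch (the budget figure is just for illustration):

```python
# Quadratic voting sketch: each extra vote on one proposal costs
# quadratically more credits, so a passionate minority can register
# intensity without cheaply dominating every vote.
def qv_cost(votes: int) -> int:
    """Credits spent to cast `votes` votes on a single proposal."""
    return votes * votes

budget = 100
# Spreading thinly: 10 proposals x 3 votes costs 10 * 9 = 90 credits.
spread_cost = sum(qv_cost(3) for _ in range(10))
# Going deep: 1 proposal x 10 votes costs 100 credits, the whole budget.
deep_cost = qv_cost(10)
```

This is why the mechanism resists mob democracy: dominating a single vote consumes the budget that majority voters would otherwise spend everywhere else.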

The Cosmos Network is an early baseline example of what and what not to do. I have been following ATOM/Cosmos for a while and met some of their core developers a few years ago; we also helped arrange hosting for one core Cosmos developer group’s validator nodes just west of Toronto. As a result I got to see first hand, in real time, how the Cosmos RFC proposal submission system worked and was actually used by members of their community, as they both responded to submissions and implemented approved work collaboratively built by member companies. The UI (mostly CLI) was clunky, but it did work.

3 Likes