I think to get the revenue 'multiple' working for a solo job in the context of a co-op FaaS service offered by participating Autonomi Network noderunners, 'it' means the job owner paying to execute the job, either:

- in one noderunner-hosted container (i.e. canman/incus managed), or
- across a cluster of containers for a bigger multi-core/multi-threaded job, where in the latter case additional containers could be requested and obtained for short-term use from other noderunner operators also participating in what is a co-op FaaS service.

The job owner paying for and uploading the job to one or more containers should also have the option to store/upload the result, or the data set/information created, to the Autonomi Network and, of course, download/receive the job result from some in-memory temporary result container (or set of containers if the output of the job is big, i.e. generating a synthetic data set, creating a movie from a job script, etc.).

All of this is very doable, built on top of the existing Autonomi Network XOR-addressed framework.
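To make that workflow concrete, here is a minimal sketch of what a job submission descriptor for such a co-op FaaS could carry, covering both the solo-container and clustered cases. Every field name here is hypothetical, purely for illustration, and is not an existing Autonomi or canman API:

```python
# Hypothetical job descriptor for the co-op FaaS idea above.
# None of these names come from Autonomi; they only illustrate the shape of the data.
from dataclasses import dataclass

@dataclass
class FaasJobRequest:
    job_id: str                      # caller-generated identifier
    payload_xor_addr: str            # XOR address of the (encrypted) job bundle already uploaded
    containers_requested: int = 1    # 1 = solo job, >1 = clustered multi-core/multi-thread job
    cpu_percent: int = 50            # share of CPU cycles the job owner wants reserved
    memory_mb: int = 2048
    ephemeral_storage_gb: int = 10   # temporary result space (nvme / DDR5 cache, per the offer)
    store_result_permanently: bool = False  # optionally pay to store results back to Autonomi
    max_price: float = 0.0           # ceiling the job owner is willing to pay, in network tokens

# Example: a bigger job spread across four rented containers
job = FaasJobRequest(
    job_id="job-0001",
    payload_xor_addr="xor://<job-bundle-address>",
    containers_requested=4,
    cpu_percent=80,
    memory_mb=8192,
    store_result_permanently=True,
    max_price=1.5,
)
```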
Imo the real value add of such a co-op FaaS service dev effort would be to:

- create a job uploader which makes use of the existing Autonomi Network encryption,
- re-use the quote system from one noderunner, or in this case from a FaaS co-op quorum of noderunners that have joined a 'service group',
- where these members of the FaaS co-op service group set up, reserve and publish the availability of one or more containers for a FaaS service type (there will be variations of FaaS),
- where the container capabilities are 'marketed' by the noderunner, i.e. FaaS_member__systype__container (compute, temp_store_nvme, temp_cache_DDR5, etc.), CPU cycle % reserved, so much memory, and perhaps a certain amount of local ephemeral storage of the storage or cache type specified and offered by the noderunner.
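As a sketch of what such a 'marketed' container listing might look like as a record (field names are my own, purely illustrative, not a defined schema):

```python
# Hypothetical container offer record a noderunner might publish to the service group.
from dataclasses import dataclass

@dataclass
class ContainerOffer:
    member_id: str             # FaaS co-op member (noderunner) identity
    systype: str               # e.g. "compute", "temp_store_nvme", "temp_cache_DDR5"
    cpu_percent_reserved: int  # share of CPU cycles reserved for renters
    memory_mb: int             # RAM offered to the job
    ephemeral_storage_gb: int  # local scratch space of the advertised storage/cache type
    price_per_hour: float      # asking price, in network tokens
    max_rental_hours: int      # how long the container can be held, AirBnB-style

offer = ContainerOffer(
    member_id="noderunner-42",
    systype="compute",
    cpu_percent_reserved=75,
    memory_mb=16384,
    ephemeral_storage_gb=100,
    price_per_hour=0.02,
    max_rental_hours=12,
)
```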
N.b. the canman design currently has sqlite specified, so the noderunner could use that to set up their participating offer in such a FaaS, and then publish their local marketplace page to the other currently participating noderunners of the 'service group' (they will come and go), say by using the FOSS NATS pub/sub broker, which can run in a private canman/incus orchestrated container. Those service-group noderunner participants can then add such a new 'container of type such-and-such available to rent' listing to their own copy of the marketplace page, something simple like that.
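A minimal sketch of that publish step, assuming the offer rows already sit in canman's sqlite file and a NATS server is reachable in a private container; the table name, columns and NATS subject are assumptions for the sketch, not canman's actual schema:

```python
# Sketch: read locally stored container offers from sqlite and publish them
# to the service group over NATS (using the FOSS nats-py client).
import asyncio
import json
import sqlite3

import nats  # pip install nats-py

async def publish_offers(db_path: str, nats_url: str) -> None:
    # Pull this noderunner's current offers out of the local sqlite DB.
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT member_id, systype, cpu_percent, memory_mb, price_per_hour FROM container_offers"
    ).fetchall()
    conn.close()

    nc = await nats.connect(nats_url)
    for member_id, systype, cpu, mem, price in rows:
        offer = {
            "member_id": member_id,
            "systype": systype,
            "cpu_percent_reserved": cpu,
            "memory_mb": mem,
            "price_per_hour": price,
        }
        # Every participating noderunner subscribes to this subject and merges
        # new listings into its own copy of the marketplace page.
        await nc.publish("faas.marketplace.offers", json.dumps(offer).encode())
    await nc.drain()

if __name__ == "__main__":
    asyncio.run(publish_offers("canman.db", "nats://localhost:4222"))
```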
canman is designed to run a Python Flask web server, which is lightweight, to handle such page display and web server URL pages. The latter imo really should be findable by anyone in the Autonomi Network via the 'four word' network addressing DNS that @dirvine is working on, so anyone in the network can check what container resources are available at any time, what the price of the rental offerings looks like, and for how long they can be held, sort of like AirBnB.
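A minimal sketch of the lightweight Flask page canman could serve for that; the route and schema are invented for illustration (and the 'four word' addressing would sit in front of this, not inside it):

```python
# Sketch: tiny Flask app serving the noderunner's copy of the marketplace page.
import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)
DB_PATH = "canman.db"  # assumed location of the local offers DB

@app.route("/marketplace")
def marketplace():
    # List every container offer this noderunner currently knows about,
    # both its own and those received from the service group via NATS.
    conn = sqlite3.connect(DB_PATH)
    rows = conn.execute(
        "SELECT member_id, systype, cpu_percent, memory_mb, price_per_hour, max_rental_hours "
        "FROM container_offers"
    ).fetchall()
    conn.close()
    offers = [
        {
            "member_id": r[0],
            "systype": r[1],
            "cpu_percent_reserved": r[2],
            "memory_mb": r[3],
            "price_per_hour": r[4],
            "max_rental_hours": r[5],
        }
        for r in rows
    ]
    return jsonify(offers)

if __name__ == "__main__":
    app.run(port=8080)
```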
Then, once the compute 'FaaS' job is complete, the job uploader's regular Autonomi Network close group sees the uploader, owning and paying for the job, optionally asking for a quote to permanently store the results of the job just completed. The result may be sitting in one or more result storage containers (memory-cache or ephemeral disk volume), which will also have been advertised upfront with a different quote price offer (it's mainly just temporary storage, so that quote would be lower).
So the job owner will need to first peruse and then select what is available from the HTML page, manually and/or programmatically (hence the need for a FaaS job uploader variant of DAVE?).
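For the programmatic route, the selection step could be as simple as filtering the marketplace feed by capability and price. A sketch, with the endpoint and field names carried over from the hypothetical examples above:

```python
# Sketch: programmatically pick the cheapest offers that meet the job's needs.
import requests  # pip install requests

def select_offers(marketplace_url: str, need_containers: int,
                  min_memory_mb: int, max_price_per_hour: float) -> list[dict]:
    offers = requests.get(f"{marketplace_url}/marketplace", timeout=10).json()
    suitable = [
        o for o in offers
        if o["systype"] == "compute"
        and o["memory_mb"] >= min_memory_mb
        and o["price_per_hour"] <= max_price_per_hour
    ]
    # Cheapest first, take only as many containers as the job needs.
    suitable.sort(key=lambda o: o["price_per_hour"])
    return suitable[:need_containers]

# Example: find four compute containers with at least 8 GB RAM under 0.05/hour
picks = select_offers("http://localhost:8080", need_containers=4,
                      min_memory_mb=8192, max_price_per_hour=0.05)
```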
The other thing to add into the whole co-op FaaS workflow is 'observability': state data captured and recorded over time and placed in timestamped job logs, so container performance can be captured and later used to generate noderunner 'container landlord' ratings/reputation (fast/slow, operated as advertised, no crash events, etc.).
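A sketch of what such an observability record and a crude 'container landlord' rating roll-up might look like (again, the field names and metrics are illustrative only, not a defined format):

```python
# Sketch: timestamped job log entries plus a naive reputation roll-up per noderunner.
from dataclasses import dataclass
from datetime import datetime, timezone
from statistics import mean

@dataclass
class JobLogEntry:
    job_id: str
    member_id: str             # the container landlord being rated
    timestamp: str             # ISO-8601, e.g. datetime.now(timezone.utc).isoformat()
    runtime_seconds: float
    advertised_cpu_percent: int
    measured_cpu_percent: int  # what the job actually got
    crashed: bool

def landlord_rating(entries: list[JobLogEntry]) -> dict:
    """Aggregate per-landlord stats: fast/slow, operated as advertised, crash events."""
    return {
        "jobs_run": len(entries),
        "avg_runtime_s": mean(e.runtime_seconds for e in entries),
        "as_advertised_ratio": mean(
            1.0 if e.measured_cpu_percent >= e.advertised_cpu_percent else 0.0
            for e in entries
        ),
        "crash_count": sum(e.crashed for e in entries),
    }

log = [JobLogEntry("job-0001", "noderunner-42",
                   datetime.now(timezone.utc).isoformat(),
                   runtime_seconds=321.5,
                   advertised_cpu_percent=75, measured_cpu_percent=73,
                   crashed=False)]
print(landlord_rating(log))
```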
So a story something like the above is a place to start, methinks, for such a magical co-op FaaS service dream.

Co-operating on what would be a group project, and parceling out the different types of work within the co-op given member availability, is really a co-op member product manager type of job. Then you have the lead service architect/framework developer, and then specific capability developers.
The QA of such a co-op service build effort like co-op FaaS should imo also have each member offer up some container capacity, so as to run a distributed automated test framework. Set up something like what I used to use, the IBM STAF 'Software Testing Automation Framework'. We set this up back in the day at Surfkitchen in CH, hosted largely (ironically) in NA on VPSLand servers in TX in the mid '00s, running it in multiple ESX VMs, with some of the testing running via STAX agents in VMs hosted on developer and QA engineer desktops to run distributed testing at night. (Our little test team called our STAF/STAX implementation the "Octopus".)
IBM STAF is still FOSS and out there for baseline reference, for anyone interested. It works a lot like the old distributed.net and SETI@home projects.
Fyi, there are emerging lightweight tools out there like dstack, aimed at AI jobs, that definitely offer some baseline reference for how such a FaaS co-op service concept should, and should not, be done.