I don’t think you’d need to go that far. It seems like the network might not be a million miles away from what I’m talking about as it stands.
Say sites are written in JS; you might then want some bits to execute on the client and other bits to execute on other machines. There could be various reasons for wanting to do this: security (because you don’t want the client to tamper with calls), performance (in certain areas), etc.
From the post I’ve referenced above, it seems developers can call through to centralised servers if they need to, which helps with certain security issues.
One way I could see this working without any centralisation would be for JS code to be stored within the network, just as files are. You could place a request with the network to execute some function and you’d get the result back. The function would execute within a JS engine on some node on the network that had signed up to the “Allow 3rd Party JS Execution” contract, and the node that executes it would be paid by the caller.
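Just to make that concrete, here’s a rough sketch of what the call could look like from the developer’s side. None of this exists today - the `networkExecute` function, the `safe://` addresses and the payment field are all made up purely to illustrate the shape of the request:

```ts
// Hypothetical sketch: ask the network to run a JS function that is stored
// in the network, on a node that has opted in to the
// "Allow 3rd Party JS Execution" contract. All names here are invented.

interface ComputeRequest {
  codeAddress: string;   // network address where the JS function is stored
  functionName: string;  // exported function to invoke
  args: unknown[];       // arguments serialised and sent with the request
  maxPayment: number;    // most the caller is willing to pay the executing node
}

interface ComputeResult<T> {
  value: T;              // return value from the remote function
  executedBy: string;    // id of the node that ran it (and got paid)
  cost: number;          // what the caller actually paid
}

// Placeholder for whatever the real network client API would be.
declare function networkExecute<T>(req: ComputeRequest): Promise<ComputeResult<T>>;

async function calculateOrderTotal(orderId: string): Promise<number> {
  // The pricing logic lives in the network, not in the client,
  // so the client can't tamper with it.
  const result = await networkExecute<number>({
    codeAddress: "safe://example-app/pricing.js",
    functionName: "totalFor",
    args: [orderId],
    maxPayment: 10,
  });
  return result.value;
}
```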
There’d obviously be questions around security here. One of them is how you prevent nodes signed up to the “Allow 3rd Party JS Execution” contract from abusing their position. This is where trusted computing could come in, but there may be other options too.
In terms of performance, of course if this type of thing was used without much thought (on the end developer’s part) then performance would be terrible. However, it could also be used in ways that actually improve performance. Imagine you have some house-keeping operation that needs to be performed after an order is taken. You could just fire off an async request into the network to perform that house-keeping op and it’ll get done at some point; we probably don’t care whether it happens immediately, and we can carry on with other work in the meantime.
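Something like this, say (again purely illustrative, reusing the imagined `networkExecute` call from the sketch above):

```ts
// Hypothetical fire-and-forget house-keeping after an order is taken.
// networkExecute is the same imagined API as in the earlier sketch.
declare function networkExecute<T>(req: {
  codeAddress: string;
  functionName: string;
  args: unknown[];
  maxPayment: number;
}): Promise<{ value: T }>;

async function takeOrder(orderId: string): Promise<void> {
  // ... accept and confirm the order for the customer first ...

  // Kick off the house-keeping op without awaiting it; some paid node on
  // the network will run it whenever the request gets picked up.
  networkExecute<void>({
    codeAddress: "safe://example-app/housekeeping.js",
    functionName: "afterOrder",
    args: ["order-123"],
    maxPayment: 1,
  }).catch(err => console.error("house-keeping failed", err));

  // Carry on with other work immediately; we never block on the result.
}
```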