Decentralized VR on SAFE would be very interesting.
High Fidelity is distributed, but not really decentralized. It's a bit like web servers for VR, though it does have some nice pieces for making things distributed.
If we want an anonymous, distributed metaverse built on SAFE, I think it would have to be built from the ground up, although it's possible that parts of High Fidelity could be reused.
In some ways, SAFE may be even more useful for VR/AR than for the web. AR is the next big platform after smartphones, and the tech giants Facebook, Google, Apple and Microsoft are investing billions to try to become its leaders.
The investments into AR and VR are great and the technology has a lot of potential, but there are also downsides. Both AR and VR need eye tracking to be really useful, for foveated rendering, user interaction and much more. That will be awesome, but it also means that Google or Facebook will be able to track your every eye movement: see how long your eyes linger over an ad, see whether your pupils indicate you find an ad interesting, and so on. This is great for advertisers. Ads can be much more personalized, and it becomes easy to see whether users find ads interesting, where to place ads to get the most visibility and so on. Obviously there are privacy concerns here; your eye movements might give away more of your thoughts than you'd be comfortable with Google and Facebook knowing.

Apart from eye tracking, there's the whole issue of the new digital world being controlled by a couple of large corporations. It's one thing that Google and Apple own the smartphone app stores; it's another if they own the equivalent app stores for AR. As AR technology improves, light-field displays will eventually get to the point where digital objects can look completely photorealistic, and when you're wearing AR glasses you won't be able to tell the difference between real physical objects and rendered digital ones. The color and texture of actual physical objects can also be changed to look like something completely different. This means Google, Facebook and Apple won't just control what you read online, but what you see and hear whenever you're wearing AR glasses, which in some years will probably be several hours a day.
Do you want your reality controlled by Facebook? If you're on this forum, the answer to that question is most likely no. One problem with High Fidelity is that, since it's not really decentralized, it will likely be bought by one of the giants if it's successful; my bet is on Facebook.
Let's have a look at how High Fidelity works and how something like it might fit into SAFE. (I can't guarantee I've understood everything about High Fidelity correctly; my understanding is based on some rather light documentation on their website.)
High Fidelity runs (or will run) a set of centralized servers for certain core services: authentication, place names (like DNS), discovery (a centralized search engine, I guess), the assignment server (a marketplace for selling computing power for running virtual worlds) and the marketplace. As you can see, these are exactly the things that would be perfect to run decentralized on SAFE. SAFE already has decentralized authentication and user profiles built into the network, and the same goes for DNS. A decentralized marketplace is still missing, but it's one of the most obvious applications to build once the network is up. Then there's the search and discovery part; building that would go hand in hand with building search engines for other SAFE content. High Fidelity is also planning some kind of currency, like in Second Life. I can't find any up-to-date information, but it looks like they're considering either making their own blockchain or a centralized currency like Second Life had. In these ICO hype times, perhaps they'll make an Ethereum ERC20 token, who knows.
The assignment server is rather interesting. It's somewhat like what SAFE will do with farming nodes and with assigning certain high-performance nodes to become archive nodes, mixed with a computing marketplace along the lines of the Golem Project on Ethereum. One big difference is that in High Fidelity it's a centralized service.
The Assignment Server is a High Fidelity service that allows people to share their computers with each other to act as servers or as scripted interactive content. Devices register with the assignment server as being available for work, and the assignment server delegates them to domain servers that want to use them. Units of a cryptocurrency will be exchanged by users of the assignment server to compensate each other for their use of each other's devices. The assignment server can analyze the bandwidth, latency, and Network Address Translation capabilities of the contributed devices to best assign them to jobs. So, for example, an iPhone connected over home WiFi might become a scripted animal wandering around the world, while a well-connected home PC on an adequately permissive router might be used as a voxel server.
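As a rough illustration of what that matching could look like (a hypothetical sketch, not High Fidelity's actual logic; all names and numbers are invented), devices advertise their measured capabilities and each job goes to the least-capable device that still satisfies it, reserving stronger devices for demanding jobs:

```python
# Hypothetical sketch of assignment-server matching.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    bandwidth_mbps: float   # measured upstream bandwidth
    latency_ms: float       # round-trip time to the assignment server
    open_nat: bool          # can accept inbound connections through its NAT

@dataclass
class Job:
    kind: str
    min_bandwidth_mbps: float
    max_latency_ms: float
    needs_open_nat: bool

def assign(job, devices):
    """Pick the least-capable device that still meets the job's requirements."""
    candidates = [d for d in devices
                  if d.bandwidth_mbps >= job.min_bandwidth_mbps
                  and d.latency_ms <= job.max_latency_ms
                  and (d.open_nat or not job.needs_open_nat)]
    return min(candidates, key=lambda d: d.bandwidth_mbps, default=None)

devices = [
    Device("iphone-on-wifi", bandwidth_mbps=5, latency_ms=40, open_nat=False),
    Device("home-pc", bandwidth_mbps=50, latency_ms=15, open_nat=True),
]
# A voxel server needs bandwidth and inbound connectivity...
print(assign(Job("voxel server", 20, 30, True), devices).name)      # home-pc
# ...while a scripted animal can run almost anywhere.
print(assign(Job("scripted animal", 1, 100, False), devices).name)  # iphone-on-wifi
```

On SAFE, something like this allocation step is exactly what would need to happen without any central party doing the measuring and delegating.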
The actual virtual worlds run on something called domain servers. If you want to set up your own little space in the metaverse, you'd set up a domain server, which contains a number of subservers for different tasks; I suppose the assignment server can be used to scale it if it becomes popular.
Here are some of the different subservers:
The Asset Server provides copies of the models, audio files, scripts, and other media used by the domain. It functions like a Web server, but using protocols tuned to High Fidelity’s architecture.
Serving data is basically what SAFE does. What's needed here for a smooth experience is that textures, 3D models etc. are downloaded fast enough; otherwise you end up walking around in a half-loaded world for too long.
The Avatar Mixer is in charge of your virtual presence in any domain. It keeps track of where you are, which avatar you are wearing, and how you move around the domain (like how you would move your head while wearing a Head Mounted Display (HMD))
Your user profile in SAFE could easily store your avatar, but here you're also keeping track of your current position and head position/rotation.
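The avatar itself (say, a reference in your SAFE profile) changes rarely, while position and head rotation change every frame and need to travel fast. As a rough illustration (a hypothetical wire format, not High Fidelity's actual one), a presence update might pack position, head-rotation quaternion and a timestamp into a small fixed-size binary record:

```python
# Hypothetical compact presence update: 3 float32 for position,
# 4 float32 for the head-rotation quaternion, 1 float64 timestamp.
import struct
import time

PRESENCE_FMT = "<3f4fd"  # little-endian, no padding

def pack_presence(pos, head_quat, t=None):
    """Serialize one presence update into a fixed-size packet."""
    return struct.pack(PRESENCE_FMT, *pos, *head_quat,
                       t if t is not None else time.time())

def unpack_presence(data):
    """Inverse of pack_presence: returns (position, quaternion, timestamp)."""
    vals = struct.unpack(PRESENCE_FMT, data)
    return vals[0:3], vals[3:7], vals[7]

packet = pack_presence((1.0, 0.0, 2.5), (0.0, 0.0, 0.0, 1.0), t=12.0)
print(len(packet))  # 36 bytes per update
pos, quat, t = unpack_presence(packet)
print(pos, t)  # (1.0, 0.0, 2.5) 12.0
```

At 36 bytes per update, even 60 updates per second is only a couple of kilobytes per second per avatar; the hard part is latency, not bandwidth.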
The Entity Server tracks all entities and their properties in a domain, from their description and position, to any behaviors attached to them by script. If an entity is modified, the change is communicated to the entity server, which in turn relays the information to all clients currently visiting the domain.
The Avatar Mixer and Entity Server touch on the real issue: updating the state of objects fast enough, especially spatial position and rotation. If you're talking to someone in VR and you're moving your head around, the latency has to be very low for the motion to look natural, so in practice prediction will have to be used. If you're playing an FPS game and things aren't updated fast enough, you'll be shooting at what looks like your enemy's position but in reality was their position a moment ago; even milliseconds count here.
You'll want to run physics and rendering locally, but there should also be some consensus on physics that could be consulted when needed, for example in FPS games to prevent people from cheating. Physics could perhaps be handled by compute on SAFE once that's available, but perhaps the latency between the nodes will be too great.
Even today, with far fewer hops than there will be on SAFE, prediction is needed to show things at reasonable locations. If the latency becomes too great, prediction ends up too far from reality and suddenly your arm's position will be jerking around.
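The kind of prediction meant here is often called dead reckoning: extrapolate the last authoritative position using the last known velocity, then correct when a fresh update arrives. A minimal sketch (illustrative numbers only):

```python
# Minimal dead-reckoning sketch: linear extrapolation from the last update.
def predict(last_pos, last_vel, dt):
    """Extrapolate position dt seconds past the last authoritative update."""
    return tuple(p + v * dt for p, v in zip(last_pos, last_vel))

# Last update: avatar at x=10 m moving at 2 m/s along x; update is 50 ms old.
print(predict((10.0, 0.0, 0.0), (2.0, 0.0, 0.0), 0.05))  # (10.1, 0.0, 0.0)

# The error grows linearly with latency: at 500 ms the avatar is drawn a full
# metre from where it actually is, which is when limbs start "jerking" as
# corrections snap in.
print(predict((10.0, 0.0, 0.0), (2.0, 0.0, 0.0), 0.5))   # (11.0, 0.0, 0.0)
```

Real systems blend the correction in over a few frames rather than snapping, but the underlying trade-off is the same: the longer the latency, the bigger the eventual correction.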
I found one post by David describing how these situations can be handled:
For voip and (one to one at least) gaming etc. all the network will do is connection establishment for the nodes. So these will be direct udp connections and as fast as they can be.
The details to be worked out are,
1: Should the network fully encrypt such traffic or leave to app devs to do that. In any case this part is pretty straight forward.
2: For multi user voip and gaming etc. we need to supply decentralised serverless capability. It can be crudely done for a few users, but not in a scalable way (so I don't like it). This can take a few forms though and in this case I feel we will have to provide network level API's to allow negotiation and decentralised compute as well as multi session udp establishment (for speed).
The latter part may still use rudp (or Crux rudp2) with the ability to lose frames (for video, voice etc. this is fine) but maintain the connections via the current keep-alive (pseudo tcp connection oriented approach).
So there is work to be done there for sure, but I think when we launch and many more see how we have done what we do then the games programmers etc. will get the aha moment and answer much of this for us.
I guess users who are in the same place at the same time would need either direct connections to each other or connections through some intermediate node(s) to preserve privacy. If it's a public spot, you can't know whether some of the other avatars or objects there are spy/tracking bots. Going through one or more low-latency hops to preserve privacy should be possible while keeping the latency low enough, I think; it might not stop the NSA from knowing you were there, but at least it could help prevent advertisers from tracking your every move. There's lots and lots of unexplored territory here.
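To make the trade-off concrete, here's a toy sketch (all relay names and latency numbers are invented) of picking the relay hop that adds the least latency over a direct connection; as long as the added latency stays within a small budget, motion should still look natural:

```python
# Toy relay selection: hide your IP behind a hop while minimizing added latency.
def added_latency(me_to_relay_ms, relay_to_peer_ms, direct_ms):
    """Extra one-way latency from inserting a single relay hop."""
    return (me_to_relay_ms + relay_to_peer_ms) - direct_ms

# Measured (hypothetical) latencies: relay -> (me-to-relay, relay-to-peer).
relays = {"relay-a": (8, 12), "relay-b": (25, 5), "relay-c": (6, 30)}
direct = 15  # a direct connection would take 15 ms, but exposes your IP

best = min(relays, key=lambda r: added_latency(*relays[r], direct))
print(best, added_latency(*relays[best], direct))  # relay-a 5
```

Adding only a handful of milliseconds for one hop seems plausible; each extra hop for stronger anonymity eats further into the motion-to-photon budget.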