Run the metaverse (VR/AR) on SAFE?

Second Life founder Philip Rosedale has built a distributed/decentralized VR social metaverse.
Would the distributed computing and latency of SAFE allow something like this to run?

I want to know because I think this social VR metaverse is closer than Mars and might make Mars plan C instead of plan B, in a digital-physics sense. It now seems like there might be enough performance data to make a guess, and since Ethereum is much slower, SAFE may be the only game available for a tamper-proof substrate. It would be interesting to know whether Rosedale has come up with something that could contribute to SAFE's decentralized structure.

Decentralized VR on SAFE would be very interesting.

High Fidelity is distributed, but not really decentralized. It's a bit like web servers for VR, though they have some nice pieces for making it distributed.

If we want an anonymous, distributed metaverse built on SAFE, I think it would have to be built from the ground up, although it's possible that parts of High Fidelity could be used.

SAFE is in some ways perhaps even more useful for VR/AR than for the web. AR is the next big platform after smartphones, and the tech giants Facebook, Google, Apple and Microsoft are investing billions to try to be the leaders.

The investments into AR and VR are great and the technology has a lot of potential, but there are also some downsides. Both AR and VR need eye tracking to be really useful, for foveated rendering, user interaction and much more. It will be really awesome, but it also means that Google or Facebook will be able to track your every eye movement: see how long your eyes linger over an ad, see if your pupils indicate you find an ad interesting, and so on. This is great for advertisers, since ads can be much more personalized and it will be easy to see whether users find ads interesting, where to place ads to get the most visibility and so on. Obviously there are privacy concerns here; your eye movements might give away more of your thoughts than you'd be comfortable with Google and Facebook knowing.

Apart from eye tracking, there's the whole issue of the new digital world being controlled by a couple of large corporations. It's one thing that Google and Apple own the smartphone app stores; it's another if they own similar app stores for AR apps. As AR technology improves, light-field displays will eventually get to the point where digital objects look completely photorealistic, and when you're wearing AR glasses you won't be able to tell the difference between real physical objects and rendered digital ones. The color and texture of actual physical objects can also be changed to look like something completely different. This means Google, Facebook and Apple won't just control what you read online, but what you see and hear while wearing AR glasses, which in some years will probably be several hours a day.

Do you want your reality controlled by Facebook? If you're on this forum, the answer is most likely no. One problem with High Fidelity is that, since it's not really decentralized, it will likely be bought by one of the giants if it's successful; my bet is on Facebook.

Let's have a look at how High Fidelity works and how something like it might fit into SAFE. (I can't guarantee I've understood everything correctly about High Fidelity; my understanding is based on some rather light documentation on their website.)

High Fidelity runs (or will run) a set of centralized servers for certain core services: authentication, place names (like DNS), discovery (like a centralized search engine, I guess), the assignment server (a marketplace for selling computing power to run virtual worlds) and the marketplace. As you can see, these are things that would be perfect to run in a decentralized way on SAFE. SAFE already has decentralized authentication and user profiles built into the network, and the same goes for DNS. A decentralized marketplace is still missing, but it's one of the most obvious applications to build once the network is up. Then there's the search and discovery part; building that would go hand in hand with building search engines for other SAFE content. High Fidelity is planning some kind of currency, like in Second Life. I can't find any up-to-date information, but it looks like they're considering either making their own blockchain or a centralized currency like they had in Second Life. In these ICO hype times, perhaps they'll make an Ethereum ERC20 token, who knows.
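To make the place-names part concrete, SAFE's public-name system could stand in for High Fidelity's place names: a name deterministically maps to network data describing the world. Here's a rough sketch of the idea in Python; the hashing scheme, registry and field names are all invented for illustration, and SAFE's actual naming works differently in detail:

```python
import hashlib

def place_name_key(name: str) -> str:
    """Derive a deterministic lookup key for a place name, roughly in the
    spirit of a name system that maps names to network data.
    (This hashing scheme is illustrative, not SAFE's actual one.)"""
    return hashlib.sha256(name.lower().encode()).hexdigest()

# Stands in for data stored on the network: key -> domain descriptor.
registry = {}

def register_place(name, descriptor):
    key = place_name_key(name)
    if key in registry:
        raise ValueError("name already taken")
    registry[key] = descriptor

def resolve_place(name):
    """Look a place name up, case-insensitively; None if unregistered."""
    return registry.get(place_name_key(name))

register_place("atrium", {"world_id": "safe://worlds/atrium", "spawn": (0, 0, 0)})
```

The point is that no central server is needed: anyone who knows the name can derive the key and fetch the descriptor directly from the network.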

The assignment server is rather interesting. It is somewhat like what SAFE will do with farming nodes, and with assigning certain high-performance nodes to become archive nodes, mixed with a computing marketplace like the Golem Project on Ethereum. One big difference is that in High Fidelity it's a centralized service.

The Assignment Server is a High Fidelity service that allows people to share their computers with each other to act as servers or as scripted interactive content. Devices register with the assignment server as being available for work, and the assignment server delegates them to domain servers that want to use them. Units of a cryptocurrency will be exchanged by users of the assignment server, to compensate each other for their use of each other’s devices. The assignment server can analyze the bandwidth, latency, and Network Address Translation capabilities of the contributed devices to best assign them to jobs. So, for example, an iPhone connected over home WiFi might become a scripted animal wandering around the world, while a well-connected home PC on an adequately permissive router might be used as a voxel server.
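As a toy illustration of the matching the quote describes, here's a greedy assigner in Python. All field names and thresholds are made up, and High Fidelity's real service is surely more sophisticated, but it shows the shape of the problem: rank contributed devices by connectivity and give demanding jobs to the best-connected ones:

```python
def assign_jobs(devices, jobs):
    """Greedy matcher: each job states a minimum bandwidth (Mbps) and a
    maximum latency (ms); the most demanding jobs pick first, and each
    takes the lowest-latency qualifying device still free."""
    assignments = {}
    free = sorted(devices, key=lambda d: (d["latency_ms"], -d["bandwidth_mbps"]))
    for job in sorted(jobs, key=lambda j: j["min_bandwidth_mbps"], reverse=True):
        for dev in free:
            if (dev["bandwidth_mbps"] >= job["min_bandwidth_mbps"]
                    and dev["latency_ms"] <= job["max_latency_ms"]):
                assignments[job["name"]] = dev["id"]
                free.remove(dev)  # a device serves one job at a time here
                break
    return assignments

# The iPhone-vs-home-PC example from the quote, with invented numbers.
devices = [
    {"id": "phone",  "bandwidth_mbps": 5,   "latency_ms": 60},
    {"id": "homepc", "bandwidth_mbps": 100, "latency_ms": 15},
]
jobs = [
    {"name": "voxel-server",    "min_bandwidth_mbps": 50, "max_latency_ms": 30},
    {"name": "scripted-animal", "min_bandwidth_mbps": 1,  "max_latency_ms": 100},
]
result = assign_jobs(devices, jobs)
```

Done in a decentralized way on SAFE, the interesting part would be reaching agreement on these measurements without a central measurer.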

The actual virtual worlds run on something called domain servers. If you want to set up your own little space in the metaverse, you'd set up a domain server, which contains a number of subservers for different tasks, and I suppose the assignment server can be used to scale it if it becomes popular.

Here are some of the different subservers:

The Asset Server provides copies of the models, audio files, scripts, and other media used by the domain. It functions like a Web server, but using protocols tuned to High Fidelity’s architecture.

Serving data is basically what SAFE does. What's needed here for a smooth experience is that textures, 3D models etc. are downloaded fast enough; otherwise you end up walking around in a half-loaded world for too long.

The Avatar Mixer is in charge of your virtual presence in any domain. It keeps track of where you are, which avatar you are wearing, and how you move around the domain (like how you would move your head while wearing a Head Mounted Display (HMD))

Your user profile on SAFE could easily hold your avatar, but here you're also keeping track of your current position and head position/rotation.

The Entity Server tracks all entities and their properties in a domain, from their description and position, to any behaviors attached to them by script. If an entity is modified, the change is communicated to the entity server, which in turn relays the information to all clients currently visiting the domain.

The Avatar Mixer and Entity Server touch the real issue: updating the state of objects fast enough, especially spatial position and rotation. If you're talking to someone in VR and you're moving your head around, the latency has to be very low for the motion to look natural, so in practice prediction will have to be used. If you're playing an FPS game and things aren't updated fast enough, you'll be shooting at what looks to be your enemy's position, but in reality it was their position a moment ago; even milliseconds count here.
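The standard trick here is dead reckoning: extrapolate from the last authoritative update using its velocity, render the estimate, and correct when the next update arrives. A minimal linear version (real engines use fancier motion models, but the idea is the same):

```python
def predict_position(last_pos, velocity, dt):
    """Linear dead reckoning: extrapolate the last known position by its
    velocity over dt seconds. Positions and velocities are (x, y, z)."""
    return tuple(p + v * dt for p, v in zip(last_pos, velocity))

# The last update said the avatar was at (1, 0, 2) moving +x at 2 m/s;
# 250 ms later, with no fresh packet, we render the extrapolated position.
estimate = predict_position((1.0, 0.0, 2.0), (2.0, 0.0, 0.0), 0.25)
```

The higher the network latency, the further this extrapolation has to reach, and the bigger the eventual correction, which is why hop count matters so much here.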

You'll want to run physics and rendering locally, but there should also be some consensus about physics that could be consulted when needed, for example in FPS games to prevent people from cheating. Physics could perhaps be run on SAFE's decentralized compute once that's available, but perhaps the latency between the nodes will be too great.

Today, even with far fewer hops than there will be on SAFE, prediction is needed to show things at reasonable locations. If the latency becomes too great, prediction will drift too far from reality and suddenly your arm's position will be jerking around.
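When an authoritative update finally arrives and disagrees with the prediction, snapping instantly to it is exactly what produces that jerking; engines typically blend a fraction of the error away each frame instead. A sketch of that smoothing:

```python
def correct_towards(predicted, authoritative, alpha=0.2):
    """Move the predicted position a fraction `alpha` of the way towards
    the authoritative one. Applied once per frame, this converges on the
    true position without a visible snap."""
    return tuple(p + alpha * (a - p) for p, a in zip(predicted, authoritative))

pos = (10.0, 0.0, 0.0)    # where we predicted the arm to be
truth = (8.0, 0.0, 0.0)   # where the authoritative update says it is
for _ in range(3):        # converge over a few frames
    pos = correct_towards(pos, truth)
```

The trade-off is responsiveness versus smoothness: a larger `alpha` corrects faster but looks more like the jerking we're trying to hide.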

I found one post by David describing how these situations can be handled:

For VoIP and (one-to-one at least) gaming etc., all the network will do is connection establishment for the nodes. So these will be direct UDP connections and as fast as they can be.

The details to be worked out are:
1: Should the network fully encrypt such traffic, or leave that to app devs? In any case this part is pretty straightforward.
2: For multi-user VoIP and gaming etc. we need to supply decentralised serverless capability. It can be crudely done for a few users, but not in a scalable way (so I don’t like it :) ). This can take a few forms, though, and in this case I feel we will have to provide network-level APIs to allow negotiation and decentralised compute as well as multi-session UDP establishment (for speed).

The latter part may still use RUDP (or Crux rudp2) with the ability to lose frames (for video, voice etc. this is fine) but maintain the connections via the current keep-alive (pseudo-TCP connection-oriented) approach.

So there is work to be done there for sure, but I think when we launch and many more see how we have done what we do, the games programmers etc. will get the aha moment and answer much of this for us.
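One concrete consequence of "losing frames is fine" is that late-arriving packets must be dropped rather than applied, or old head positions would overwrite newer ones. A toy sequence-number filter illustrating this (not SAFE's or Crux's actual protocol, just the general idea):

```python
def accept_frame(state, seq, payload):
    """Keep only frames newer than anything seen so far. For voice, video
    and motion data it's fine to lose frames, but a stale one must never
    overwrite a newer one. Returns True if the frame was applied."""
    if seq > state.get("last_seq", -1):
        state["last_seq"] = seq
        state["latest"] = payload
        return True
    return False  # stale or duplicate: drop silently, no retransmit

session = {}
accept_frame(session, 1, "head at t=1")
accept_frame(session, 3, "head at t=3")
late = accept_frame(session, 2, "head at t=2")  # arrived out of order
```

This is the opposite of TCP's promise: instead of delivering everything in order (and stalling while it waits), you deliver the freshest thing you have and forget the rest.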

I guess users who are at the same place at the same time would need either direct connections to each other or connections through some intermediate node(s) to preserve privacy. If it's a public spot, you can't know whether some of the other avatars or objects there are spy/tracking bots. Going through one or more low-latency hops to preserve privacy should be possible while keeping the latency low enough, I think; it might not stop the NSA from knowing you were there, but at least it could help prevent advertisers from tracking your every move. There is lots and lots of unexplored territory here.

Thank you, I really appreciate the analysis.

A bit more about the AR part. AR and VR have similar needs, but right now the software is somewhat separate. High Fidelity is made for VR for now, but the metaverse should eventually support both, as well as mixed areas: some people might visit a 3D model of a place in VR from home, while others are at the real location, and the people at the real place can see and talk with the VR visitors.

AR basically just needs some extra tools for tracking the real world and placing digital objects onto real objects. I think we'll eventually see open-source versions of things like Apple's ARKit.

Once you have open-source versions of the needed AR tools, setting up an AR or VR world will be similar. For VR you would make a number of digital objects and place them in a coordinate system representing a virtual world, while with AR you'd place them in a coordinate system representing somewhere on Earth (for now), and you could optionally restrict the objects from going outside some boundary.
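Anchoring AR content to a real-world coordinate system mostly comes down to converting small lat/lon offsets into local metres; a flat-earth approximation is fine at AR scales (tens to hundreds of metres). A sketch with an optional boundary check, everything here being my own illustration rather than any particular AR toolkit's API:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius; plenty accurate locally

def local_offset_m(anchor, point):
    """Approximate (east, north) offset in metres from `anchor` to
    `point`, both given as (lat, lon) in degrees. Uses an equirectangular
    (flat-earth) approximation, fine over AR-scale distances."""
    lat0, lon0 = map(math.radians, anchor)
    lat1, lon1 = map(math.radians, point)
    east = (lon1 - lon0) * math.cos(lat0) * EARTH_RADIUS_M
    north = (lat1 - lat0) * EARTH_RADIUS_M
    return east, north

def inside_boundary(anchor, point, radius_m):
    """The optional restriction: is `point` within `radius_m` of the
    anchor? Objects could be prevented from leaving this circle."""
    east, north = local_offset_m(anchor, point)
    return math.hypot(east, north) <= radius_m
```

With this, a digital object's stored (lat, lon) becomes an (east, north) position in the local scene, the same kind of coordinate a VR world would use directly.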

I think the way to make something like this on SAFE would be to use WebGL and WebAssembly. Right now there aren't many AR standards for the web yet, but they're coming. VR is starting to be supported already, but I guess the SAFE Browser may need to be modified to support VR goggles like the Oculus Rift; Firefox and Chrome support this already.

If it's in the browser, then you would just go to a URL to enter some virtual world. For AR, you would have an app for indexing URLs that contain worlds or items for your current location. Basically you would put geotags on your AR content so that AR search engines could index it. Then, when you're somewhere, you could use the AR search app to see which content/layers are available at your location and enable one by going to its URL.
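An AR search index over geotagged SAFE URLs could be as simple as bucketing coordinates into grid cells and looking up your current cell. Real systems would use geohashes or S2 cells rather than plain rounding, and every name below is invented for illustration:

```python
def grid_cell(lat, lon, precision=2):
    """Bucket a coordinate into a coarse grid cell (~1 km at precision=2).
    An AR search engine could index geotagged URLs under such cells."""
    return (round(lat, precision), round(lon, precision))

index = {}  # cell -> list of SAFE URLs with content anchored there

def publish(url, lat, lon):
    index.setdefault(grid_cell(lat, lon), []).append(url)

def layers_near(lat, lon):
    """What an AR search app would call with your current location."""
    return index.get(grid_cell(lat, lon), [])

publish("safe://art/mural", 55.676, 12.568)
publish("safe://games/hunt", 55.676, 12.569)
publish("safe://far/away", 40.0, -74.0)
```

A production version would also query neighbouring cells so content near a cell boundary isn't missed, but the lookup stays a cheap key match either way.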

The thing that High Fidelity solves is how to connect worlds together. If you set up a virtual world on SAFE, you would persist a bunch of virtual objects, including the landscape, and when you connect you would download the content and load it into a game engine like Unity or Unreal Engine. The next thing you want is to enable multiple users; that would require the low-latency connections talked about earlier, so I guess there's lots of work to get that working. To make a really huge world, or even collaborative worlds made by different people, we'd need just an ID for the world, then coordinates for all the objects and some permissions for who can add objects, how many, etc. Going from one world to another is simple: you just make a portal and assign a SAFE URL to that portal, so entering the portal is basically just clicking a link, and you can bring your avatar and profile with your items to the next world (depending on what that world supports).
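A world on SAFE could then be little more than a persisted manifest: an ID, permissions, objects with coordinates, and portals that are plain links to other worlds. All field names below are invented for illustration:

```python
# A minimal world manifest as it might be persisted on SAFE.
world = {
    "id": "safe://worlds/forest",
    "write_access": ["owner_pubkey", "friends_group"],  # who may add objects
    "objects": [
        {"model": "safe://assets/oak_tree", "pos": (12.0, 0.0, -4.0)},
    ],
    "portals": [
        # A portal is just a position plus a link to another world.
        {"pos": (0.0, 0.0, 10.0), "target": "safe://worlds/beach"},
    ],
}

def enter_portal(world, portal_index):
    """Walking into a portal is just following a link: the client loads
    the target world's manifest and carries your avatar across."""
    return world["portals"][portal_index]["target"]
```

Because the manifest lives on the network rather than on anyone's server, two people's worlds can link to each other without either depending on the other staying online.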

The thing with the web is that it isn't a natural fit for the metaverse. The problem is that the web is based on links; you can make portals between worlds to get a 3D web, but what you really want is an infinitely scalable world where you can just keep adding new content. You could do this on the web, but then you would get one large 3D website with a huge database in a datacenter; it would basically be Second Life on the web.

With SAFE we already have a pretty much infinitely scalable public database, but we need good ways to stream 3D models, audio and textures from SAFE as you walk through a virtual world. Still, the core database is there and it's not controlled by anyone. I imagine we'll at some point have multiple worlds that are open to modification and content creation by the public, and that we'll get filters and different ways of showing just the content you want, since, as Second Life has shown, it's likely to be filled with flying penises, so it would be nice to have a way of filtering those out.
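Streaming a world as you walk through it is mostly a prioritization problem: fetch the nearest assets first so the space around you loads before the horizon. A minimal sketch, with field names invented:

```python
import math

def stream_order(avatar_pos, assets):
    """Decide the fetch order for a world's assets: nearest first.
    Each asset has a SAFE URL and an (x, y, z) position; the avatar's
    position determines priority."""
    def dist(asset):
        return math.dist(avatar_pos, asset["pos"])
    return [a["url"] for a in sorted(assets, key=dist)]

assets = [
    {"url": "safe://assets/mountain", "pos": (500.0, 0.0, 0.0)},
    {"url": "safe://assets/bench",    "pos": (3.0, 0.0, 1.0)},
    {"url": "safe://assets/statue",   "pos": (40.0, 0.0, -10.0)},
]
order = stream_order((0.0, 0.0, 0.0), assets)
```

A real client would re-run this as you move and also weight by asset size and level of detail (fetch a low-resolution mountain early, the full-resolution one only if you approach it), but distance ordering is the core of it.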
