Last week, we released Test 18. The focus this week has been on getting all the issues that need to be completed for Alpha 2 (including those reported by the community) added to JIRA and making sure there are no loose ends.
With Test 17, we had the rate limiter quite relaxed and a lot of traffic was allowed. Test 18 is at the opposite extreme: the limiter is much stricter, so clients have to reattempt and retry many more requests. We are now working on striking a good balance between these two extremes and we’ll be updating the rate limiter to improve its performance and give a better user experience in terms of data traffic. With these changes, far more requests should go through on the first attempt.
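As a rough illustration of what clients do when the limiter pushes back, here is a minimal retry-with-backoff sketch; the function names and the error code are hypothetical, not the actual safe_core logic:

```js
// Hypothetical sketch: retry a request with exponential backoff when the
// network reports a rate-limit error. Names are illustrative only.
async function sendWithRetry(sendRequest, maxAttempts = 5) {
  let delayMs = 500; // initial waiting period before a reattempt
  for (let attempt = 1; attempt <= maxAttempts; attempt += 1) {
    try {
      return await sendRequest(); // succeeds on the first attempt if allowed
    } catch (err) {
      if (err.code !== 'RATE_LIMIT_EXCEEDED' || attempt === maxAttempts) {
        throw err; // a different failure, or we are out of attempts
      }
      await new Promise((resolve) => setTimeout(resolve, delayMs));
      delayMs *= 2; // back off further before the next reattempt
    }
  }
}
```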
Advisory and Endorsement Requests
Members of the team, especially David, are receiving quite a lot of direct approaches, through lengthy DMs and LinkedIn messages, from people looking for advice on other projects or endorsements of ICOs. While we would like to be in a position to help, our focus on network development often leaves us no time to read these messages, let alone respond. Please do not be offended by this; it doesn’t mean that we don’t see importance in what you’re doing. We are simply very busy, and your time is likely to be better spent engaging with other potential advisors.
SAFE Authenticator & API
Issues that were raised in Test 18 have been gathered and organised in JIRA. The new IPC for sharing MutableData is being integrated this week. Integrating the IPC functions in safe_app_nodejs and the authenticator is almost complete. This feature has been the priority for us this week, and @bochaco and @shankar are getting it integrated and tested end-to-end with the applications.
We have also fixed a few issues this week. The native modules were not being loaded by safe_browser on a few Windows machines that did not have the 2015 Visual C++ redistributable installed. When a user tried to log in, the authenticator kept spinning, misleadingly suggesting that it could not connect to the network. We now handle this case and present a proper error message. safe_browser issue #99 is also resolved.
When the browser tried to load a page whose requested page/resource was not available, the loader kept spinning indefinitely. This issue is now resolved in the master branch of beaker-plugin-safe-app.
Based on feedback from the dev community, the getHomeContainer API is being renamed to getOwnContainer. This API is used to fetch the application’s own/root container. The DOM APIs have also been updated with this change. Please keep this change in mind if you are building the browser or your apps with the latest code in the master branch of safe_app_nodejs and beaker-plugin-safe-app. Also, @hunterlester will be resolving safe_browser issue #100 soon.
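For app developers the change is a one-line rename. Assuming the promise-based style of safe_app_nodejs (the exact namespace may differ), updated code would look like this:

```js
// Assumes `app` is an authorised app handle from safe_app_nodejs.
// Before the rename: app.auth.getHomeContainer()
// After the rename (same behaviour, new name):
async function fetchOwnContainer(app) {
  const ownContainer = await app.auth.getOwnContainer();
  return ownContainer; // the app's own/root container
}
```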
The Web Hosting Manager app updates the progress bar only after a file has been uploaded to the network. If a big file is uploaded, the progress bar takes a long time to update, leaving the user guessing whether the application has hung or is still working. The fix is to use the NFS API to write the file as a stream instead of passing the entire content at once; safe_app_nodejs already exposes APIs for this. The Web Hosting Manager app must be refactored to write as a stream and update the progress bar as chunks of data are uploaded, as sketched below. @hunterlester should be able to resolve this issue this week.
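A minimal sketch of that refactor, assuming a streaming-style NFS interface (the nfs.open, file.write and file.close names here are stand-ins, not the exact safe_app_nodejs signatures):

```js
const fs = require('fs');

// Illustrative only: upload a local file chunk by chunk and report progress
// after every chunk, instead of once at the very end.
async function uploadWithProgress(nfs, localPath, onProgress) {
  const totalBytes = fs.statSync(localPath).size;
  let writtenBytes = 0;
  const file = await nfs.open(); // open a new file for streaming writes
  for await (const chunk of fs.createReadStream(localPath)) {
    await file.write(chunk); // push one chunk to the network
    writtenBytes += chunk.length;
    onProgress(writtenBytes / totalBytes); // progress bar moves per chunk
  }
  return file.close(); // commit the file once all chunks are written
}
```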
@joshuef has raised a pull request for storing the history and bookmarks on the network. The feature looks stable after a couple of testing iterations and we’re expecting it to be merged soon. @joshuef is also looking into the DOM APIs to enable returning the error code along with the error message. Right now, only the error message is returned because the Beaker browser does not allow the APIs to return complex objects. @joshuef is trying to implement this feature for returning errors.
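One way around that restriction, shown here as a sketch of the general approach rather than the exact change @joshuef will land, is to serialise the error to a plain string at the plugin boundary and parse it back inside the app:

```js
// Plugin side: flatten the error into a string, since Beaker only allows
// simple values to cross the DOM API boundary.
function encodeError(err) {
  return JSON.stringify({ code: err.code, message: err.message });
}

// App side: recover both the error code and the message.
function decodeError(encoded) {
  const { code, message } = JSON.parse(encoded);
  const error = new Error(message);
  error.code = code;
  return error;
}
```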
SAFE Client Libs
The back-end team has been working hard on fixing bugs reported by the front-end team and the community, and on improving the features that were released last week. We’ve started by refactoring the account packet structure and improving safe_core, the essential part of the SAFE Client Libs. The account packet is the data structure that holds the user account details, keys, and some extra information, such as pointers to the root directories. That is the part we wanted to improve because it had remained almost intact since the SAFE Launcher days, while we now have a much more complicated design, with apps and the Authenticator. We’ve now got rid of unnecessary indirections (the pointer to the access container is stored as part of the account packet) and inefficiencies (the root directory has been removed and the containers info is stored directly in the access container). Along the way, we’ve reduced the number of network requests and parallelised some operations (which previously introduced useless synchronisation delays), so overall these changes should make the UI snappier. They’re mostly complete and likely to be merged before the end of this week. For more details, you can see this JIRA task.
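Schematically, the refactoring removes a level of indirection. The real account packet is a Rust structure inside safe_core, so this JSON-style sketch is illustrative only:

```js
// Before: the account packet pointed at a root directory, which in turn
// pointed at the access container, costing an extra lookup on login.
const accountPacketBefore = {
  keys: '…',
  rootDirPointer: '…', // root dir -> access container -> containers info
};

// After: the access container pointer lives in the account packet itself,
// and the containers info is stored directly in the access container.
const accountPacketAfter = {
  keys: '…',
  accessContainerPointer: '…', // one lookup fewer on every login
};
```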
In parallel, @canndrew and @adam have been working on improving the new API for sharing Mutable Data. The basic implementation has been completed and merged, and now we’re working on implementing useful comments and suggestions from @bochaco and the front-end team. First, we’ve decided to clearly define the key for the metadata entry of shared Mutable Data objects, because allowing an app to provide its own metadata key could lead to security issues. For example, if a password manager app sets the metadata entry to say something like “this data must not be shared with other apps”, another malicious app requesting shared access could point to some other entry to be looked up for metadata, effectively bypassing the caution message and confusing the user about the intent of the sharing request. Second, the permissions part of the request has been simplified: we now assume that an app wants to request only Allow permissions, delegating more fine-grained permission settings (involving blacklists and Deny) to apps that have the ManagePermissions privilege. All these changes go into this pull request, and we’re currently reviewing them and perfecting the details.
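To make the simplified request shape concrete, a share-MutableData payload along these lines is what an app would now send; the field names here are assumptions for illustration, not the final wire format:

```js
// Hypothetical payload for a share-MutableData authorisation request.
// The metadata key is fixed by the system rather than supplied by the app,
// and only Allow-style permissions can be requested.
const shareMDataRequest = {
  app: { id: 'net.example.app', name: 'Example App' }, // requesting app
  mdata: [
    {
      typeTag: 15000,              // type tag of the shared Mutable Data
      name: 'xor-name-of-the-md',  // network address of the object
      perms: ['Insert', 'Update'], // Allow permissions only, no Deny here
    },
  ],
};
```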
Next, we’ll be focusing on optimising app revocation. We’ve started with this pull request, which fixes some flaws and edge cases, such as attempts to authorise an application during an ongoing app revocation. This is no longer allowed, as authorisation might interfere with the re-encryption of revoked containers. Another issue could occur if an app tried to update or insert new data into a container that had failed to be re-encrypted: in some circumstances it used an incorrect set of keys and mangled the data. Tomorrow we’ll continue addressing similar minor (but crucial) problems.
On top of that, there were a lot of smaller improvements. @marcin has been doing important work and added more automatic tests, confirming and fixing bugs found by the front-end team, such as a case where a subsequent app re-authorisation didn’t result in the creation of the app’s own container (if it wasn’t created on the first request). @marcin also added more NFS tests and verified that a user can still log in to their account and browse the network even if they have exhausted their account balance. He has also removed a legacy module, public_id, which remained from the SAFE Launcher and wasn’t actually used by the front-end. @canndrew has updated the Routing dependency to the latest version, removing the need to retry requests when the request rate limit has been exceeded; now that this is Routing’s responsibility, we’ve considerably simplified this part of the code in safe_core. Finally, @adam has implemented an environment variable switch that disables the on-disk vault storage for tests. This change should speed up both front-end and back-end tests a bit.
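For example, a front-end test run could opt in before spawning its tests; the variable name below is an assumption, so check safe_client_libs for the actual switch:

```js
// Hypothetical: keep the mock vault in memory so tests never touch the disk.
process.env.SAFE_MOCK_IN_MEMORY_STORAGE = 'true';
```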
Routing & Vault
As stated in the introduction, the rate limiter for Test 18 is stricter compared to the previous test network. The message resend logic has also not been very efficient: Routing breaks the user messages from the upper libraries into parts, each of which can be a maximum of 20 KiB. So a 200 KiB user message would be broken into 10 parts of 20 KiB each, and if only a few of them were rejected, the user library would still have to resend the entire message after a pre-defined waiting period. To optimise this, we’ve modified Routing to handle the message resend at that layer directly. With the new changes, Routing will selectively retransmit only the user message parts for which an error occurred. This encapsulates the message resend logic away from the upper libs. Routing also optimises route traffic by allowing the upper libs to indicate a maximum wait period after which message relay via further routes will not be attempted. This applies not only to rate limiter errors but also to other parts such as the Ack-Manager.
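Illustratively (this mirrors the idea rather than Routing’s actual Rust code), splitting and selective retransmission work like this:

```js
const PART_SIZE = 20 * 1024; // each user message part is at most 20 KiB

// Split a user message into parts, e.g. a 200 KiB message into 10 parts.
function splitIntoParts(message) {
  const parts = [];
  for (let offset = 0; offset < message.length; offset += PART_SIZE) {
    parts.push(message.slice(offset, offset + PART_SIZE));
  }
  return parts;
}

// Resend only the parts that failed, instead of the whole message.
async function resendFailedParts(parts, failedIndices, sendPart) {
  for (const index of failedIndices) {
    await sendPart(parts[index], index);
  }
}
```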
A few patches have been brought into master to address certain bootstrap bugs from Test 18, and the rate limiter has also been refined this last week to allow clients to scale their data traffic capacity based on the number of concurrent client connections active at the proxy at a given time. The PR addressing this is currently waiting to be reviewed and merged. The rate limiter throughput calculations have also been updated to reflect the actual data throughput relayed by the proxy nodes to the clients.
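As a rough illustration of the scaling idea (the real calculation inside the vaults is more involved), the per-client allowance shrinks as more clients share a proxy:

```js
// Illustrative only: divide a proxy's total rate-limiter capacity among the
// client connections currently active at that proxy.
function perClientCapacity(totalCapacityBytesPerSec, activeClientCount) {
  return Math.floor(totalCapacityBytesPerSec / Math.max(activeClientCount, 1));
}

// e.g. 20 MiB/s shared by 4 active clients gives 5 MiB/s each.
```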