Update 19th December, 2024

Big news this week, apart from the launch of the latest stable release of the network of course, is a new data type we’re working on to go with our Transaction type, as @dirvine explains.

This will be a Pointer type that always points to the current value. It will use the Transaction type to preserve history. In this way we can have infinitely growable, true transaction types that are CRDT compliant and whose HEAD is always easy to find. It’s a great first use of the Transaction type, and after listening to the community we feel this will offer the best of both worlds and preserve the perpetual data network we all work for.
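To make the idea concrete, here is a minimal, hypothetical Rust sketch of the Pointer/Transaction relationship described above. The names (`Transaction`, `Pointer`, `Addr`), the in-memory `HashMap` store and the integer addresses are all illustrative assumptions, not the real Autonomi API.

```rust
use std::collections::HashMap;

// Illustrative stand-ins only: the real types live in the Autonomi codebase.
type Addr = u64;

#[derive(Clone)]
struct Transaction {
    parent: Option<Addr>, // link to the previous transaction, preserving history
    value: String,        // the payload for this version
}

// A Pointer always resolves to the current HEAD transaction.
struct Pointer {
    head: Addr,
}

// Walk parent links from HEAD back to genesis: full history, newest first.
fn history(store: &HashMap<Addr, Transaction>, ptr: &Pointer) -> Vec<String> {
    let mut out = Vec::new();
    let mut cur = Some(ptr.head);
    while let Some(addr) = cur {
        let tx = &store[&addr];
        out.push(tx.value.clone());
        cur = tx.parent;
    }
    out
}

// Build a tiny two-version chain and return the store plus its pointer.
fn demo() -> (HashMap<Addr, Transaction>, Pointer) {
    let mut store = HashMap::new();
    store.insert(0, Transaction { parent: None, value: "v1".into() });
    let mut ptr = Pointer { head: 0 };
    // Appending: the new transaction links to the old HEAD, then the pointer moves.
    store.insert(1, Transaction { parent: Some(ptr.head), value: "v2".into() });
    ptr.head = 1;
    (store, ptr)
}

fn main() {
    let (store, ptr) = demo();
    // Readers resolve the pointer for the latest value; history stays intact.
    assert_eq!(store[&ptr.head].value, "v2");
    assert_eq!(history(&store, &ptr), vec!["v2".to_string(), "v1".to_string()]);
    println!("HEAD = {}", store[&ptr.head].value);
}
```

The key property is that updating the pointer never destroys anything: every previous version remains reachable by walking the parent links, which is what makes the type grow-only and CRDT-friendly.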

These types sit alongside the ScratchPad type and Chunk. The former is a lazy, unmanaged blob that lets devs store app config data, account types and more; the latter is our store of bytes of all kinds, but mostly files.
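A rough sketch of the distinction, with assumed names and a stand-in hasher (the real network uses cryptographic content addressing; `DefaultHasher` here is purely illustrative):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Chunk: immutable bytes, addressed by a hash of their own content.
// DefaultHasher stands in for the network's cryptographic hash.
fn chunk_addr(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

// ScratchPad: a mutable, unmanaged blob at a fixed address the owner controls.
struct ScratchPad {
    owner: u64,    // fixed address; does not change when content changes
    data: Vec<u8>, // app config, account state, etc.
}

impl ScratchPad {
    fn overwrite(&mut self, data: Vec<u8>) {
        self.data = data; // no history kept: lazy and unmanaged by design
    }
}

fn main() {
    // Identical bytes always land at the same chunk address...
    assert_eq!(chunk_addr(b"hello"), chunk_addr(b"hello"));
    // ...while a scratchpad keeps its address across rewrites.
    let mut pad = ScratchPad { owner: 42, data: b"v1".to_vec() };
    pad.overwrite(b"v2".to_vec());
    assert_eq!(pad.owner, 42);
    assert_eq!(pad.data, b"v2");
}
```

So a Chunk's identity is its content (change the bytes and you get a new chunk), while a ScratchPad's identity is its owner-controlled address, with the content free to change underneath it.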

In our Client API the team have been working on filesystem types that build on these, which should allow “whole computer in the network” approaches. This is where you can walk up to any computer running Autonomi, log in, see all your data and then log out again, without trace.
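One plausible shape for such filesystem types, sketched in Rust: directories map names to entries, and files point at chunk addresses. This is a hypothetical illustration of the layering, not the actual Client API.

```rust
use std::collections::BTreeMap;

// A filesystem layered on the data types: directories are maps of names to
// entries, and files reference the address of their chunk(s) on the network.
enum Entry {
    File(u64), // stand-in for a chunk address
    Dir(BTreeMap<String, Entry>),
}

// Walk a slash-separated path from the root, component by component.
fn resolve<'a>(root: &'a Entry, path: &str) -> Option<&'a Entry> {
    let mut cur = root;
    for part in path.split('/').filter(|p| !p.is_empty()) {
        match cur {
            Entry::Dir(children) => cur = children.get(part)?,
            Entry::File(_) => return None, // can't descend into a file
        }
    }
    Some(cur)
}

// Build a tiny example tree: /docs/notes.txt -> chunk 0xBEEF.
fn sample_root() -> Entry {
    let mut docs = BTreeMap::new();
    docs.insert("notes.txt".to_string(), Entry::File(0xBEEF));
    let mut root = BTreeMap::new();
    root.insert("docs".to_string(), Entry::Dir(docs));
    Entry::Dir(root)
}

fn main() {
    let root = sample_root();
    assert!(matches!(resolve(&root, "/docs/notes.txt"), Some(Entry::File(0xBEEF))));
    assert!(resolve(&root, "/docs/missing").is_none());
    println!("resolved /docs/notes.txt");
}
```

Because the tree itself can be stored in the network's data types, any machine that can resolve your root can reconstruct your whole filesystem, which is what makes the "log in anywhere, log out without trace" model possible.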

On this last stage we do need to look more at in-memory filesystems like FUSE, but we will get there. Given the rich variety of data types though, we feel that apps will not only be possible, but also truly decentralised and always available, with full history of all data. So apps, websites and more should now be a much more realistic and easy promise to deliver. Now we really need to (a) provide these data types and (b) let you folks build on them and feed back to us.

This is the moment we have all been waiting for: an API that hides all networking and makes the data types clean, clear and available to you. Thank you all again for your patience and perseverance in working with us on this one. As you know, the network has taken most of our time, but we are now focused on this API/SDK and on delivering a compelling, well-documented initial app-building experience. Again, thanks.

Latest stable release

Make sure you reset all your nodes to the latest stable release for the testnet (instructions here). And remember: if you run nodes consistently this week, next week, and the week after, you will have access to 100% of the 1 million token reward pool. We’ll explain more in the Stages session later.

Prizes and merch!

This week’s Discord Stages has been moved to today, Thursday 19th December, 9pm GMT, to follow the network release. This will give the community the opportunity to ask the team questions or raise any feedback after the release. And as this will be our last Stages before the end of the year, we thought it would be good fun to host a small quiz in the latter part of the event, for those who want to stay and take part, with some Autonomi and network tech questions and prizes and merch for our winners. Looking forward to seeing as many of you there as possible! :trophy:

This is our last update this year. Have fun everyone :christmas_tree:, thanks for all that you do, and see you on the other side!

General progress

@chriso has been the engine driving the creation and implementation of the latest version of the network, while @shu continues to improve our monitoring. This week he has been looking at a proof of concept to ensure a single Telegraf-Elasticsearch service can support multiple applications, e.g. nginx, ant, antnodes, frontend and backend. Also as a POC, he integrated the MaxMind GeoIP database to convert IP addresses into latitude/longitude locations for both ELK and Grafana.
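For readers curious what a single-agent, multi-application setup might look like, here is a minimal, hypothetical Telegraf config sketch fanning several inputs into one Elasticsearch output. The plugin names (`inputs.nginx`, `inputs.procstat`, `outputs.elasticsearch`) are standard Telegraf plugins, but the URLs, process pattern and index name are placeholders, not Autonomi’s actual configuration.

```toml
# One Telegraf agent collecting from multiple applications...
[[inputs.nginx]]
  urls = ["http://localhost/nginx_status"]

[[inputs.procstat]]
  # watch antnode processes (pattern is illustrative)
  pattern = "antnode"

# ...and shipping everything to a single Elasticsearch service.
[[outputs.elasticsearch]]
  urls = ["http://localhost:9200"]
  index_name = "telegraf-%Y.%m.%d"
```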

@rusty.spork has been updating the documentation and preparing for the latest net, and he’s always about here and on Discord to field your comments and queries.

On payments and transactions, @qi_ma dug deep into gas fee reduction by tweaking the client code and data payments, and resolved an issue where uploads were failing due to a lack of quotes from nodes. He also ensured the Launchpad works with the new stable release.

Lajos worked on profiling the gas usage of different versions of the payment function in the smart contract, and together with @qi_ma selected the optimal version. He also unit-tested the pricing formula in the smart contract.

As well as supporting the infrastructure upgrades, @roland tested the uploaders for upscaling, and with other team members investigated an issue where they were found to be stalling.

Ermine was on the stalling uploaders too. He also worked on refactoring some automation tasks for Client API to support Windows in CI/CD.

Dave’s nearly here

And we know you’re all waiting for Dave. Dave (GUI client app) has had a bit of work done, as we explained a couple of weeks ago and in his buffed up mark 2.0 form, he’s looking especially fine and is very nearly here, thanks to @mick.vandijke and @bzee.

Mick also raised a PR with a few CLI wallet command additions, changes and fixes, and Benno put in some changes for usability. Plus he removed registers from user data. Registers are now deprecated, but they will still be usable until the TGE.

52 Likes

I’m first! :slight_smile:

Wonderful update guys. Looking forward to finally seeing Dave.

I really do hope you guys will enjoy some well deserved time off and spend time to recharge and be with friends and family this holiday. <3

23 Likes

This sounds amazing! Thank you David. We are almost at the end, and at the beginning of something astonishing, cheers

18 Likes

Very happy this is still possible. Well done everyone :clap:t2:

17 Likes

Great update team. Good to see API getting the love it’s needed now that network is stable.

A blockchain is a ledger/DB that everyone can access, but is still held as a whole complete DB by distributed nodes around a network … whereas what we have with :ant: is an actual decentralized DB - not merely distributed; hence a truly revolutionary step forward in design. Having an API that can take advantage of this fully decentralized DB will change how apps work - dramatically increasing security and scalability.

Well done team :ant: and thanks David for all your efforts now bearing fruit.

Cheers :beers: :sparkler: :fireworks:

21 Likes

Well done guys. Am looking fwd to play with Dave. Will make it much easier to demonstrate what it’s all about to friends and family.

15 Likes

5 posts were split to a new topic: New data types - summary and discussion

Thanks to all at Autonomi Towers for this latest update.
There’s a lot to take in; it all looks good after waiting so long.

Also want to thank @happybeing for all the hard slog he has put in and his suggestions above will be scrutinised carefully by @dirvine and others. Very much hope his work will be merged in time or we get good reasons why not.

I’m hoping to learn much more at the Stages tonight, but for now, thanks to everyone for all you have done for the project in 2024.

20 Likes

Brilliant update!

Bring on the API, Dave, and everything needed for devs to start sinking their teeth into creating cool stuff for this network :slight_smile:

Keep up the excellent work team Autonomi… it’s very much starting to take shape.

16 Likes

WOW that was a dense one! So I asked Grok to explain:

Telegraf and Elasticsearch for Multiple Applications:

  • Telegraf: This is an agent that collects, processes, and sends data to various databases and monitoring systems. In this scenario, Telegraf is used to gather logs or metrics from different applications.

  • Elasticsearch: It’s a distributed, open-source database for storing, searching, and analyzing large volumes of data in real-time. Here, Elasticsearch is used as the database where all the data collected by Telegraf can be stored and analyzed.

  • Proof of Concept (POC) for a single Telegraf Elasticsearch service: The goal of this POC is to verify if a single instance of Telegraf can handle data collection from multiple applications (like nginx, ant, antnodes, frontend, and backend) and then send it to Elasticsearch. This means:

    • nginx: A web server that can log HTTP requests.

    • ant and antnodes: the Autonomi client and node binaries (not the Java build tool Grok may be thinking of), which generate logs related to client operations and network nodes.

    • Frontend and Backend: A way to divide an application where frontend is the client-side (user interface), and backend is the server-side (data handling, APIs). Logs from both can be collected for monitoring and analysis.

  • Benefits of a single service for multiple applications:

    • Cost-efficiency: Less overhead since you only need to maintain one instance of Telegraf and Elasticsearch.

    • Centralized data management: All data from different applications is stored in one place, simplifying analysis and monitoring.

    • Simplified infrastructure: Reduces the complexity of the system architecture.

Integration of MaxMind GeoIP Database:

  • MaxMind GeoIP Database: This database is used to convert IP addresses into geographic coordinates (latitude and longitude).

  • POC for IP to geographic coordinates:

    • For ELK (Elasticsearch, Logstash, Kibana): When IP addresses are logged, the GeoIP database is used to enrich these logs with geographic information. This allows for visualization of data on maps in Kibana, helping to understand where traffic or users are coming from.

    • For Grafana: Similar to ELK, Grafana can use this geographic data to create heatmap visualizations or other geographic analyses. Grafana can visualize this data using plugins like Geomap or other suitable panels.

  • Advantages of GeoIP integration:

    • Enhanced analysis: Enables geographic analysis of user behavior, traffic patterns, or security events.

    • Security: Can be used to detect suspicious activities based on geographic location, such as multiple login attempts from different parts of the world in a short time.

By integrating and testing these components in a POC, one ensures that the solution works as expected in a real-world scenario before moving to full implementation. This is crucial to understand performance, scalability, and integration between different system components.

14 Likes

Great to hear about progress on the API and the new types. This will certainly unblock my dev work on sn_httpd. I’m looking forward to playing with this!

21 Likes

Thanks so much to the entire Autonomi team for all of your hard work! :muscle: :muscle::muscle:

And good luck to everyone with your Beta Rewards! :four_leaf_clover:

13 Likes

This is great, first time I’ve participated in a testnet since 2017-ish!??!

I don’t do command line stuff so it’s great to be involved. I have 5 nodes running on my laptop, how many attos will I earn if I keep them online for a week?

13 Likes

Thank you for the update.

@Shu Are you able to shed light on what happened to the store cost value in the /metrics function? It’s not there any more, so my reporting script has no way to see what the node thinks it has earned. Is it coming back, was it a bug that it’s missing, or can the over-reporting bug not be fixed? Thanks

@Shu My 8 nodes have earned 3 attos, but the /metrics values for wallet balance and forward balance are both still showing zero for all 8 nodes. Has this been broken too?

@maidsafe can I please ask that in future any rewards program is launched with a description of what is actually being rewarded. Many have said privately that it is demotivating not to be told, and frustrating that no one will tell us when asked directly. If it’s a competition, as Jim puts it, then the goal or motivation should be stated.

Many are running nodes to test the network, yes, and many feel that any reward at all is reason enough to run nodes. But many are very frustrated as well.

13 Likes

Response from Qi:

On the quoting side, antnode no longer gives out store_cost directly. So, yes, it has disappeared.

Regarding the earnings, the metric will only be notified for 1 out of the 3 payees, hence there will be a difference between the metric and the chain.

6 Likes

Ok so even though I got a payment, the node balance may still show zero since it was not the one of the 3 that got notified.

Are you able to quickly explain how that works if the node is not giving the quote to the client, but only the metrics for the quote?

Does the client now work out the store cost for itself? If so, what is to stop a client from just saying the median (or average) of the 3 nodes was 1/10th of what it really was, claiming that’s what the other 2 nodes’ metrics showed?

EDIT: @qi_ma Shu suggested you might be able to shed light on these questions.

4 Likes

I think you should follow up on your questions with Qi here as he is more knowledgeable on this area of the code base. Thanks.

8 Likes

Thx 4 the update and all your hard work Maidsafe devs

Indeed have fun ants :beers: :beer: :partying_face: :sparkles: :tada: :vulcan_salute: :sparkler: :sparkle:

Keep coding/hacking/testing super ants

8 Likes

Great end of the year 2024! Wish the team and everyone else a great 2025, may we all achieve what we are seeking and with passion, love and all the support we deserve. :clinking_glasses:

13 Likes