Some observations on my use of ant 0.3.8 RC version

I have been doing test uploads of 3-chunk files (4 chunks total including the datamap), testing first with ant v0.3.7, then v0.3.8 RC, then v0.3.7 again.

Each test file is a specific type of random file, 10 MiB in size, and each one is different.

The test counts an upload as a success if I can get back the file address (the datamap address, I believe).

In all 3 tests I did between 70 and 120 uploads on each machine, taking many hours each.
I did the tests on 2 machines (4-core Intel CPU, 8 GB RAM, 2 TB SSD).
In both cases the success rate was about 1 in 10 (with roughly a 10% margin of error).

While not precise, there were at least 140 uploads for each of the 3 tests, 70 on each machine.

That is a total of 420 uploads attempted, and all 3 tests gave similar results.
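For anyone wanting to reproduce something similar, here is a minimal sketch of that kind of test loop. It is not my exact script: the way success is detected from the `ant file upload` output, the timeout, and the file handling are all assumptions.

```python
# Hypothetical sketch of the test loop described above.
# Assumptions: `ant` is on PATH, and `ant file upload <path>` prints the
# returned address on success; the "address" marker and the 1-hour timeout
# are guesses, not the real output format.
import os
import subprocess
import tempfile

UPLOADS = 70                   # attempts per machine, as in the test above
FILE_SIZE = 10 * 1024 * 1024   # 10 MiB of random data -> 3 chunks + datamap

successes = 0
for _ in range(UPLOADS):
    # Each test file is different: fresh random bytes every iteration.
    with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as f:
        f.write(os.urandom(FILE_SIZE))
        path = f.name
    try:
        result = subprocess.run(
            ["ant", "file", "upload", path],
            capture_output=True, text=True, timeout=3600,
        )
        # Count a success only if an address came back (assumed marker).
        if result.returncode == 0 and "address" in result.stdout.lower():
            successes += 1
    except subprocess.TimeoutExpired:
        pass  # uploads that hang past the timeout count as failures
    finally:
        os.remove(path)

print(f"{successes}/{UPLOADS} uploads returned an address")
```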

I post this here in the hope it will help inform the team with some external testing. While not as controlled as I would consider ideal, it does show that the upload success rate for my random test files is similar across versions.

10 Likes

We know of a bug right now in the “getting quote” step that causes failed uploads. A hot fix is in the pipeline right now. We should see that bug gone in a day or so, depending on how long the hot fix takes to ripple out.

14 Likes

This is why I posted here, in case the 0.3.8 version was supposed to fix that. Just so the team is informed.

I am finding that it gets past quoting and actually pays for the upload, then fails during the upload itself.

7 Likes

What error message do you get instead? Would be interested to know if I’ve seen it too.

2 Likes

Basically it gets to uploading the file and hangs for ages, then says it failed to upload the file.

It has passed the quoting phase, has paid for the records, and is in the upload phase.

4 Likes

Thanks for the testing.

As David alluded to though, there are fixes in the node that are necessary for the upload performance. I will be really interested to see all these tests running in a week or so, when hopefully people have upgraded.

We will really need people to be active in upgrading their nodes in these early days.

12 Likes

As an update, I am running the test script on a VPS (had to dust off my credentials LOL) and seeing uploads finish within 15-20 minutes, instead of waiting nigh on an hour only to be told it failed.

Testing V 0.3.8-RC.1

A lot more successes too in this testing. After just a few runs I am seeing 5 successes out of up to 8 attempts. There is some uncertainty about how many were attempted, since a couple may not have finished yet and could still succeed.

There is a difference between home (with good internet and a good router) and a VPS, which should have better networking.

4 Likes

Are there ways to give nodes that are not updating reduced earnings or something similar?

I hope that everyone updates, but I get the feeling that not all will upgrade.

2 Likes

I’d say that’s something definitely worth thinking about. I’ll take it to the team.

5 Likes

Yes, or maybe shunning for not upgrading.

1 Like

I like this idea: check the version before you confirm a connection to peers, and if it's old, don't connect.

1 Like

Just remember we cannot be too eager to reduce rewards or shun, since updates could come in batches: new features, then a couple of fixes soon after. Mandating that every update has to be applied or the node is shunned/penalised would be too much.

Many operators have a policy of not applying updates until they are proven valuable. So you would have to make it apply only to certain types of updates, or maybe only when a node is 2 major updates behind.

It's not as easy as saying you have to apply the latest update NOW or be shunned. That would segment the network immediately after only a few had updated.

You also don't want people thinking they have to upgrade within an hour of release so as not to be shunned. This would cause massive churning and maybe even some loss of data.

3 Likes

Fair point. You’d have to be careful as you could interrupt the churn of chunks.

1 Like

No emissions if not updated in 3 days.

An update: the gas fee jumped over 500 times and chewed through my $100 of gas super fast, at $10 and more per 4-chunk upload.

This is really bad @chriso @dirvine. Since we don't know in advance what the gas fee is going to be (we are not informed in the ant upload), we cannot tell the uploader not to proceed if gas will be above a certain amount, let alone if the amount of ANT is over a certain amount.

The gas fee was approximately 2 cents for the 4 chunks for a while, then rose quickly to well over 1000 cents. At 2 cents my $100 should have uploaded on the order of 5000 test 4-chunk files, but at that rate it was 10 files.

In my opinion it is not good that the gas can shoot up so much and we have no way of knowing. If ant file upload even reported what the gas was for an upload, it would be possible to stop uploading until it drops again.
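One stopgap, until ant reports the gas cost itself, would be for the test script to check the current Arbitrum gas price before each upload and skip while it is above a cap. A rough sketch, assuming the public Arbitrum One RPC endpoint below; the cap value is just an illustrative example, not a recommendation:

```python
# Pre-flight gas check via standard JSON-RPC eth_gasPrice.
# Assumptions: the public Arbitrum One endpoint below is reachable, and the
# cap (2x a ~0.01 gwei baseline) is an arbitrary example threshold.
import json
import urllib.request

ARBITRUM_RPC = "https://arb1.arbitrum.io/rpc"   # assumed public endpoint
MAX_GAS_PRICE_WEI = 2 * 10_000_000              # example cap: 0.02 gwei

def current_gas_price_wei() -> int:
    payload = json.dumps({
        "jsonrpc": "2.0", "method": "eth_gasPrice", "params": [], "id": 1,
    }).encode()
    req = urllib.request.Request(
        ARBITRUM_RPC, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        # Result is a hex string of the gas price in wei, e.g. "0x989680".
        return int(json.load(resp)["result"], 16)

price = current_gas_price_wei()
if price > MAX_GAS_PRICE_WEI:
    print(f"Gas at {price} wei is above the cap; skipping uploads for now")
else:
    print(f"Gas at {price} wei is within the cap; safe to upload")
```

That only covers the gas price, not the ANT cost, so it is a partial check at best.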

12 Likes

And didn’t we all know this would happen?

Autonomi is in a trap and I don't know if, one day, it will be able to escape.

3 Likes

N a t i v e t o k e n

2 Likes

It's not so much that it shot up, but that there was no way of knowing and no option not to proceed. Maybe allow a limit, above which the upload is cancelled or the user is given the option to proceed.

3 Likes

Eek… many questions were asked about this, and I was hoping the team had it figured out after considering the feasibility and scalability of Arbitrum.

If this isn’t a glitch & Arbitrum fees are looking like they’re enough to cripple the network, then work on Native Token solutions must begin ASAP, whether from the team, community, or both.

I'd be willing to contribute to a community fund to bring in native-token-focused devs to get a viable solution in place, in collaboration with the team, ASAP.

We have ANT on Arbitrum, which will still be useful for on/off ramps & smart contracts even with higher fees.

But it seems we need a Native token + 2-way Bridge ASAP so the network can function as intended for uploading & cheap/fast payments.

But, I kind of hope it’s a glitch and fees drop back quickly so this isn’t quite as urgent!

7 Likes

This is the tactical fix we need.

If gas prices fluctuate this wildly, we need caps/ranges and a retry window before giving up.

7 Likes