Concerns have been raised about testing, or rather the lack of it.
We all know the devs are busy busy busy ants these days and cannot do everything we would wish.
I think we now have a few days window in which to discuss if/how the community can step up and lend a hand.
I’d like to hear from the rest of you on the following points:
Is the concept of a community testing initiative feasible?
If so, how should we go about this?
Testing uploads will cost attos - should we appeal to Autonomi for some help here?
How many folk do you think we could get to commit to 10-15 mins’ work each day until TGE, with or without incentives - allowing for holidays, of course.
Obviously an awful lot more to discuss - I’d love to hear your thoughts.
While feasible, it takes organisation. It requires a number of people to commit to it, and that is realistically the hard part. If you thought business meetings were bad at getting anything done, then community organisation is certainly no better.
Well ideally we need to be testing against the latest stable release.
A testing plan needs to be drawn up. People need to make a list of the parts they can test, and we want many doing the same thing at the same time so the tests create real stress.
E.g.
uploading many small files at once - this tests the rate of chunk storage: not data volume but record volume
uploading large, multi-GB files at the same time
uploading a range of sizes at the same time
uploading a range of sizes while others are downloading many files (at the same time)
all tests need to be run with the unix “time” command (e.g. “> time autonomi files”) and it’s best to have prepared scripts to do this - a rough sketch follows after this list
and other tests people have, like site browsing
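To make that concrete, here is a rough sketch of what one of those prepared scripts could look like for the many-small-files case. The “autonomi files upload” invocation is only an assumption and would need adjusting to the real CLI syntax, and the file counts/sizes are placeholders:

#!/usr/bin/env bash
# Rough sketch only: create a batch of small test files and time uploading them.
# "autonomi files upload" is assumed CLI syntax - adjust to whatever the real command is.
mkdir -p smallfiles
for i in $(seq 1 100); do
    head -c 10k /dev/urandom > "smallfiles/file_$i.bin"
done
# The timing output (and any errors) from the upload land in test-report-file.
{ time autonomi files upload smallfiles ; } 2>> test-report-file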
It is likely we will have to limit it to Linux, since Windows has its known issues, making testing perhaps not so reliable except for Windows experts.
Definitely, and gas too, as many will not have any/enough.
I’d say we need to have coordinated times to start a script for each set of tasks, and if “time” is placed before the command we can get the time taken for each test.
So we need to decide what tests to run, then someone needs to build a set of scripts so that the person can start the script and walk away. At worst the user will have to note down the times taken. Of course, if running scripts, using $SECONDS lets the script calculate the seconds taken to do something and write that out to a report file.
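As a minimal sketch of the $SECONDS approach (the task and report file names are just placeholders):

# Minimal sketch: time a single task with bash's built-in SECONDS counter.
SECONDS=0
############# do the task here
echo "Task xyz took $SECONDS seconds" >> test-report-file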
upload to wellitwascheap.online and I’ll collate them from there
thanks - saves me writing that out
Absolutely, and this is my big fear - it will be too little, too late - but if we get enough interest then maybe it’s a worthwhile task.
Also
I’d suggest that we don’t restrict it to Linux only - we can only (easily) get times from Linux boxes, but having Windows users adding to the overall load should be OK.
I’d be happy to help. Maybe you could set up a poll asking people to answer yes if they can commit to helping with this. If it is a script that could be set to run on a given day and could be provided to everyone, then people with time commitments could just start the script running in the morning, and it would deploy the uploads etc. at pre-set times within the script. Just some thoughts.
Yep this is what I’m thinking.
How is your scripting? Are you any use with PowerShell? Cos I’m not.
At its simplest, we would need someone to look at the Linux scripts and redo them in PowerShell without the time reporting - unless of course that could be easily handled in PS.
I’ll put a poll up tomorrow if I see much more interest.
Doesn’t Windows 11 have a Linux sub-shell that can be run? I’ve seen people do stuff using Linux in that system on Windows 11, and it’s not a VirtualBox sort of thing but built into Windows.
To start a script at a certain time, to the second:
starttime="17........" # linux timestamp for the time you want to start
nowtime="$( date +%s )"
sleeptime="$(( starttime - nowtime - 30 ))" # allow 30 seconds as a buffer
if (( sleeptime > 0 )); then sleep $sleeptime; fi
# close the remaining gap one second at a time until the start time is reached
nowtime="$( date +%s )"
while (( nowtime < starttime )); do sleep 1; nowtime="$( date +%s )"; done
starttask="$( date +%s )"
############# do the task
endtask="$( date +%s )"
taskelapsedtime="$(( endtask - starttask ))"
echo "Task xyz start:$starttask end:$endtask elapsed:$taskelapsedtime" >> test-report-file
Then rinse and repeat for more tasks at their set time.
Suggest a large gap between tasks’ start times, since some computers/nodes will take a lot longer than others.
Yes, that is correct. I don’t usually use Windows, but I do have one running and just installed Linux from PowerShell. You do have to do that one-time install, at least on my version of Windows 11. It is called WSL: “Windows Subsystem for Linux.” What is Windows Subsystem for Linux | Microsoft Learn
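Once WSL is installed, the Linux test script can be started straight from Windows - a minimal sketch, assuming the script sits in the current folder (the script name is only an example):

# From PowerShell (where # is also a comment), run the bash test script inside WSL.
# run_tests.sh is a placeholder name for whatever script we settle on.
wsl bash ./run_tests.sh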
I haven’t put much time into thinking about testing performance because Verifi is primarily focused on integrity, but what if a run of it optionally saved download times to serve as a baseline so subsequent runs could report deltas?
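Purely as an illustration of that baseline/delta idea (nothing to do with Verifi’s actual code - the names and files below are made up):

# Illustration only: save the first measured download time as a baseline,
# then report the difference against it on later runs.
baseline_file="download-baseline.txt"
SECONDS=0
############# do the download here
elapsed=$SECONDS
if [[ -f "$baseline_file" ]]; then
    baseline="$( cat "$baseline_file" )"
    echo "download took ${elapsed}s, delta vs baseline: $(( elapsed - baseline ))s"
else
    echo "$elapsed" > "$baseline_file"
    echo "download took ${elapsed}s, baseline recorded"
fi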
tl;dr: I’m happy to help, if some of the others are ready to pitch in.
Obviously feasible, but 6 months too late, and we have to ask ourselves what we are looking to achieve - dragging some of the “issues” into the open might get “push back”. If it’s not clear by now, Bux will launch this at TGE and there is nothing we can say or do to change that; we will be labeled again as complaining, negative, [… long list of negatives… ]
GitHub would be my preference; code and process need to be out in the open for scrutiny, and trying to code via a forum would be a nightmare - good for coordination though.
I would like to see a blueprint on GitHub covering the objectives of this test, and ideas around testing, so people who agree to help know what the aim is.
We need to have a communicated kill switch, probably via GitHub, that can stop the test via code.
We need some test plans that can be coded into maybe something cron-style that we can use to coordinate the test agents.
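For instance, a single crontab entry per agent would do for coordination - the time and path here are just placeholders:

# Start the agreed test run at 18:00 every day (example schedule and path only).
0 18 * * * /home/tester/community-test/run_tests.sh >> /home/tester/community-test/cron.log 2>&1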
I can see an “agent” being put onto a tester’s machine that would download a test from GitHub and then run the test on schedule - the output from the test needs to go somewhere, so maybe upload that back onto Autonomi and then share the public link on the blockchain, or back onto GitHub.
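A very rough sketch of what such an agent could look like - the repo URL, kill-switch file and filenames are all made-up placeholders, not real endpoints:

#!/usr/bin/env bash
# Hypothetical agent: check a kill switch, fetch the current test from GitHub,
# run it, and leave a report behind to be handed back for collation.
repo_raw="https://raw.githubusercontent.com/example-org/community-tests/main"   # placeholder repo

# Kill switch: if this file exists in the repo, do nothing.
if curl -fsSL "$repo_raw/KILL_SWITCH" -o /dev/null; then
    echo "Kill switch is set - skipping this run."
    exit 0
fi

# Fetch and run today's test script, capturing all output in a report file.
curl -fsSL "$repo_raw/current-test.sh" -o current-test.sh
bash current-test.sh > test-report-file 2>&1

# The report then needs to go somewhere central - e.g. uploaded back onto Autonomi
# or attached to the repo (the exact upload command depends on the CLI in use).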
Someone with Grafana skills can then scrape the test results to give us some metrics.
We should target Linux first; other OSes can come later, but it makes the dev really complex trying to code for everything.
Bash and Python would get us something basic; other languages are available.
No, we’ve asked for the last 9 months for some developer attos; they aren’t interested. We should community-fund this - I would say a community ERC-20 address that people can donate their attos to.
The testing needs to supply agents with ERC-20 addresses, and load them for participants - to be clear, no one should be using their proper ERC-20 wallets or addresses for this, it’s just not “safe and secure” at the moment as private keys are exposed.
Long term, we could even think about a community reward scheme, where we provide attos back to agent runners, but that will have to wait for native…
Not many; the timing is so bad - holidays are coming, so resources are limited…
What I would say is that I see a future for community testing. I can see an agent being run on the machines of people who opt in, to allow distributed stats on the network to be collected and displayed - we can track upload times, download times, CRC checks for data corruption, and much more. This is good, as it will help grow confidence in the network; don’t trust, verify…