Pre-Dev-Update Thread! Yay! :D

David posted on Discord where he lays out just why he is interested in Iroh to help get more folks connected.

I'd advise folks to go read that, do a little thinking, and then perhaps reassess the mince that has been posted here recently.

I dunno why folk cannot see the drive behind recent developments:

  • a serious move to streamline and optimise client networking
  • a major change to actual network protocols to ensure folks behind “difficult” NATs are not left out

It makes sense to me - but maybe I’m not paranoid enough for the rest of you?

3 Likes

Indeed, and perhaps I should not have been doing my learning (slop? fair enough, but not to me and my development as an individual) out loud on the forum; it certainly appears to be annoying people. Maybe we need an AI thread to discuss things. If anyone opens one up, I'm happy to listen to both sides.

1 Like

I thought we had one…

https://forum.autonomi.community/t/using-ai-to-help-code/37761

2 Likes

Okay - I'll bite - in the AI summary it said it would be

Which it isn’t…

We currently have a network of somewhere around 5 million nodes; MaidSafe just runs some bootstrap nodes, and all network-contactable nodes can act as relays for others.

We can stream 4K movies without issues.

Node operators get ANT for their service; they can be expected to open a port in this phase of the network, IMHO.

I suspect the motivation is:

Trying to enable the devs to focus on the right stuff instead of doing connection debugging.

Which is a valid motivation, but IMHO we're seeing issues with overloaded machines running bazillions of nodes (disk-drive operations being the bottleneck) that introduce multi-second latencies as soon as data is not handled solely within RAM. So I doubt it will resolve the issues we're seeing, and it may just introduce additional bottlenecks due to the relay-node system used by Iroh. A relay server doesn't only need to open a port, as with p2p now; you also need an official domain name that you pay for and assign to your server, if I read that correctly. A way, way larger effort. And their system died when “close to a million” nodes joined within 12 hours with too few relay servers (which are heavily throttled; unthinkable to run 4K movies over those wires). We want billions of nodes.

3 Likes

The precedent has already been set that the domain name system is vulnerable to censorship, and not just in authoritarian countries: at the registrar level.

If Iroh requires a domain to function, that would introduce brittleness to the network and be counter to its goals.

1 Like

Huh - I don't read Iroh in that post - you mean that he's eager to get as many people as possible connected?

And is this really the issue at hand..?

2 Likes

Ooooh, or was dirvine's commit that large because he just included parts of the Iroh code :exploding_head: that do the nice NAT traversal stuff and maybe some other performance-based routing etc.?

That for sure wouldn’t hurt and would be pretty cool

2 Likes

Here's a very insightful post by dirvine about this matter :slight_smile:

4 Likes

Copy and paste for people:

@riddim Iroh, like libp2p, would give some centralisation, but we AVOID that in BOTH. Here's how: the centralised parts are relay servers and STUN servers. For us these can be part of a node, as we currently do with relay. So we avoid ANY centralisation, and that's not a worry with Iroh or libp2p for us.

In addition to this, Quinn needs an update, but the IETF have put forward a QUIC draft for NAT traversal without STUN, so there is a likelihood we can use QUIC extensions for NAT traversal and not even need Iroh or libp2p. These IETF agreements are extremely welcome and we have waited 5 years for this news. You can read about it here: Using QUIC to traverse NATs
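
The core idea behind such traversal is the classic simultaneous-send pattern: both peers transmit toward each other's public endpoint at roughly the same time, so each NAT observes an outbound packet before the inbound one arrives and accepts it. A minimal sketch of just that pattern, using plain UDP sockets on loopback (where no NAT actually exists, so this only illustrates the exchange, not real traversal):

```python
import socket

# Two UDP endpoints standing in for the two peers. In a real NAT
# traversal exchange each side first learns the other's public
# address out of band, then both transmit at once so each NAT sees
# an outbound packet before the inbound one arrives. On loopback
# there is no NAT, so this only demonstrates the exchange pattern.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
b.bind(("127.0.0.1", 0))
a.settimeout(2)
b.settimeout(2)

# "Simultaneous" sends: each socket fires toward the other's endpoint.
a.sendto(b"punch-from-a", b.getsockname())
b.sendto(b"punch-from-b", a.getsockname())

# Both sides now receive the other's packet directly, no relay involved.
msg_at_b, _ = b.recvfrom(1024)
msg_at_a, _ = a.recvfrom(1024)
a.close()
b.close()
```

The IETF draft moves this coordination into the QUIC handshake itself, which is why no separate STUN infrastructure would be needed.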

This also links to the other centralisation issue of “cold start”, i.e. you need servers to bootstrap from. But what if there was a new way to tell folk, over the phone or in a quick text message, who to connect to, and then they collect addresses (this collection is what our bootstrap_cache does, and that is important)? Networking endpoints are horrifically complex things, especially with IPv6, so getting over this cold start, and letting folk share endpoints, without any centralisation is vital.
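
For illustration only, a cache of this shape (a hypothetical API, not Autonomi's actual bootstrap_cache code) could record every peer successfully contacted and hand back the freshest ones on restart, so only the very first connection ever needs a shared entry point:

```python
import json
import os
import tempfile

class BootstrapCache:
    """Toy sketch of a bootstrap cache (hypothetical API, not
    Autonomi's actual bootstrap_cache): remember every peer we
    successfully contacted, so a later cold start needs only one
    shared entry point and then snowballs from the cached list."""

    def __init__(self, path):
        self.path = path
        try:
            with open(path) as f:
                self.peers = json.load(f)  # ordered oldest -> newest
        except (FileNotFoundError, json.JSONDecodeError):
            self.peers = []

    def record(self, addr):
        # Move/append addr to the end so the list stays ordered by recency.
        if addr in self.peers:
            self.peers.remove(addr)
        self.peers.append(addr)
        with open(self.path, "w") as f:
            json.dump(self.peers, f)

    def freshest(self, n=3):
        # Most recently contacted peers first: the best first guesses
        # for reconnecting after a restart.
        return self.peers[::-1][:n]

# Example: record two peers, then ask for the best reconnect candidates.
path = os.path.join(tempfile.mkdtemp(), "bootstrap_cache.json")
cache = BootstrapCache(path)
cache.record("/ip4/203.0.113.5/udp/9000/quic-v1")
cache.record("/ip4/198.51.100.9/udp/9000/quic-v1")
```

Persisting to disk is what makes this survive the restart; a second process opening the same file starts warm rather than cold.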

So what if we had a mechanism where 3 English words is all you need to connect to the network? 3 words that anyone can share over the phone and so on. Look here: GitHub - dirvine/three-word-networking: Human-friendly three-word addresses for network multiaddresses. Convert complex addresses like /ip6/2001:db8::1/udp/9000/quic to memorable combinations like ocean.thunder.falcon where we can do exactly that (6 words for IPv6).
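
As a round-trip illustration of the idea (not the linked project's actual wordlist or encoding): with a vocabulary of 65,536 words, each word carries 16 bits, so three words exactly cover an IPv4 address plus a port (32 + 16 = 48 bits):

```python
import ipaddress

# Placeholder vocabulary: a real scheme needs a curated list of
# 65,536 memorable words so each word carries 16 bits (3 x 16 = 48
# bits, enough for IPv4 + port). "word00000" etc. are stand-ins.
WORDS = [f"word{i:05d}" for i in range(2 ** 16)]
INDEX = {w: i for i, w in enumerate(WORDS)}

def encode(ip, port):
    # Pack IPv4 (32 bits) and port (16 bits) into one 48-bit integer,
    # then emit it as three 16-bit word indices.
    bits = (int(ipaddress.IPv4Address(ip)) << 16) | port
    return ".".join(WORDS[(bits >> shift) & 0xFFFF] for shift in (32, 16, 0))

def decode(words):
    # Reverse the packing: three words back to (ip, port).
    bits = 0
    for w in words.split("."):
        bits = (bits << 16) | INDEX[w]
    return str(ipaddress.IPv4Address(bits >> 16)), bits & 0xFFFF
```

The real repo linked above handles full multiaddresses and IPv6; this sketch covers only the IPv4-plus-port bit-packing to show why three words suffice.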

To get there, we need our internal KAD impl, thorough testing and a network abstraction layer. Then we can progress to massively capable NAT traversal, close to 100% connectivity, and in a way we have address hand-off and relay hand-off (never relaying actual data, only connections), all with way, way less code and way, way more capability, with data transfer at the maximum possible speed and the highest possible levels of security.

Then TLS 1.3 etc. with ed25519, or older TLS 1.2 with more round trips (4 instead of 3) and quantum-security issues, which are real; but with quantum-secured data underneath, we are quantum-protected in our lightweight, secure and minimal fully connected network.

It's a lot of moving parts and much of it will be lost on many folk, and almost all of it will be invisible if we do it right, but it gets us where we want to be.

Privacy Security and Freedom for every person on the planet in this digital world

8 Likes

Some really great stuff here.

2 Likes

Exactly - the human-readable approach to IP addresses is exciting.

Let's hear some ideas to leverage this to improve the “marketing”.

Which IMNSHO should only really start once we have a robust tested product.

That should not stop plans being made now though. Maybe there IS something useful TAB and b2b can do after all?

BTW - if going balls-out on making the network accessible to as many as possible via the pivot to Iroh and the new CLN isn't doing the groundwork for “marketing”, I don't know what is.

This network will be nothing without users. Users who can seamlessly connect and don't even need to know they are using Autonomi.

4 Likes

My apologies - I said that, but we were discussing AI-generated garbage, and I was referring to the description, which was not labelled as generated until the very last line. I take offence at that because it wasted my valuable time and attention.

Looking at the PR, it was reasonable to speculate that the code would also be AI-generated, and I'm not clear whether you are saying it didn't contain generated code. And we're not told why it was around for a day and then closed.

Communication and clarification help avoid these spiky interactions, so I'd rather we focus on that.

Thanks for engaging though Chris. I hear you and respect you for that. You are trying to straddle a wide gap that has been blamed on some of the biggest supporters in the community, but Autonomi is not interested in closing it, just blaming, and I’m disappointed about that.

1 Like

There you go again with the insults and, to be frank, arrogance.

You know fine well, after a decade on the forum, that I have zero aptitude (hence the reason for using AI) in the content posted, and you also know that I wasn't pretending or trying to make out that I did.

Turns out both of us were likely wrong, you more than me though.

As for “wasting my valuable time and attention”? You could have ignored it.

Interestingly, the MVP thread has 9 likes (thank you to those who did? Haven’t looked) and some chat going on.

Nine likes and some interactions from a “generated” idea, plus content that you call garbage. Well, I replaced the images and content (still editing) with 100% organic me. Happy now? There is another (concept) thread with “generated” content; will I be swapping that out too? Absolutely not!

1 Like

Mark, do you really think @dirvine would inflict code (and significant code at that) that he himself was not 100% happy with?
I get your ideological opposition to AI and I'm not 100% convinced you are entirely wrong.
However, as I said before, if anyone in the present company can make AI work as the user intends, it's @dirvine

Take a look at GitHub - dirvine/brain: NeuroEvolution Experiments

OK I am an unashamed David Irvine fan and have been for over a decade - that aside -
does that Git repo look like the work of someone who is being fooled by AI, or someone on top of their game making AI work for him - and us?

1 Like

If for nothing else, this needs repeated. Hourly.

Yes we were; some initial work was done, then it became clear that further meaningful progress was impossible until a more/better testing framework was in place.
So it's shelved until then.

I really don’t see what the problem is with that. Looks like 100% Common Sense to me.

Yes, the PR contained code that was AI generated. Everyone on our team is using AI as part of their development process, so every PR will have at least some AI-generated code.

It was closed because we decided the work needs to be split into smaller chunks.

Being nice and polite, not calling us unprofessional, or our work garbage, also helps.

David has pretty much left the forum now because of these kinds of things. Hopefully he might return at some point.

14 Likes

Based on his trust of AIs I can’t say. I do not understand his attitude to LLM use, and see problems with what he says and how he says it is useful. There was a time when I would have simply trusted he knew best, but on this I don’t because things he’s said were not convincing to me, and his predictions about the trajectory have not panned out - as I expected they would not.

It isn’t ideological at all but I’m not going to debate it. I’m busy and my priorities are elsewhere.

I’m also a David fan and it is his writings and his way of dealing with questions and criticism back in 2014 that got me to back this project and stick with it. I wish I could say that he’s remained true to that, but the project turned 18 months ago and since he’s been back I don’t recognise him in the project, or how he interacts with us who stuck by the project all these years (and still do).

3 Likes

I think nobody needs to be put in the corner - neither users of LLMs nor people who think they're useless.

I can only speak for myself, but I've learned quite a few nice tricks since I started using LLMs, and I've never navigated code as fast as with the help of AI.

It’s very true that it’s a difficult tool to use effectively and even when used ‘wrong’ it still may ‘feel good’…

But it's advancing at a fast pace. And I think it's safe to say the quality of the output (and how well it fits the task at hand) depends heavily on the provided context and the model itself. So results from claude-3-opus (or 3.7 thinking) or o3 integrated into an IDE are absolutely not comparable to tests with free tiers of some chat interfaces where you paste code and some instructions (especially if those tests were a few months back).

I think the world by far overestimates their capabilities… but I'm very sure you underestimate their usefulness, @happybeing.

3 Likes

Never called your work garbage; I called some things that you did unprofessional IMO, but never said you, anyone else or Autonomi were unprofessional. There's an important distinction there, but it is repeatedly ignored in responses - not just by you, but by David and others.

When David last commented he made similar inappropriate inflammatory, blaming accusations (saying that Autonomi had been called bad or evil for example - show me where) and when I said that was wrong and inflammatory he left.

In the past he was willing to listen and not blame. Things have changed for the worse and that is why many in this community feel disrespected and unheard, although only a very few state this openly.

My doing so is not always perfect, but is always an opportunity for Autonomi to listen, understand and make changes. But, as we’ve seen IMO, this has not been taken seriously and certainly has not helped bring us back together. I’ve given up on that, only responding because you do care and do engage sincerely, although I do not think you understand adequately, or are able to change this without a bigger change among the officers in charge.

My interactions with you and all the coders have always been positive. We all make missteps, I certainly do, but I see very little of that from developers - and I have received some explicit support from individuals in the past - so I believe that it is not only myself and other long termers here who see some of the issues I raise as valid, but includes some inside Autonomi. Obviously I will not say who, what or when, because this was all in confidence.

I’m a reasonable person with a limit to how much crap I’ll take, and that was exceeded almost a year ago. So now you get my unvarnished honest opinions. And if I say so myself, what better person to be listened to than someone who is dedicated to the fundamentals, has shown this for over a decade and continues despite the issues I’ve raised, to put many hours into building things to help the project reach those goals?

I understand I rub people up the wrong way by being frank and honest. But I’m afraid I did enough self censoring last year and it did not help me or the project because what I did say was ineffective.

It would help if this was stated on the PR when it was closed. Why make things public and then hide crucial useful information from those who take time out of their lives to follow at this level of detail? I won't characterise that, but please try to imagine how we experience it.

Again Chris, thanks for engaging but I am not sure there’s much point in you trying to carry all this when it clearly isn’t landing or able to compensate for the lack of communication, clarity and respect that I feel all the time from Autonomi in general. Not from the individuals who do engage, which is always better than I believe is adequate.

1 Like

I’m not sure. Because I do use them, though not the models you use so I realise those may be more effective in the respects you judge them by.

I take a wider view I think than most, certainly those who promote things like ‘vibe coding’ and so on. I’ve proven a pretty good judge of technology and the messaging around it over the decades - and always remain open minded about everything. What I’m not is naive, because of my experience of many new technologies over the years. I was involved in AI back in the eighties, many stages of language development, all in practical settings where we were responsible for evaluating and applying the most effective technology to deliver solutions that were novel in areas that needed to move to a new level.

I’m watching and waiting to see how LLMs develop and I listen to enthusiasts and critics, and I try things out to see. I’m not idealistic though I have ideals and values which go further than asking whether the tool can work on this problem at this time. There’s much more to this area than that, but to be frank, the tech has proven disappointing compared to the hype we heard here a couple of years ago.

I may appear more ‘idealistic’ and critical than I am because I post information that I think is not seen here to balance the ‘debate’ about these tools. Quite a few here use them and more believe they are much more than they are, and appear unaware of the downsides. So I point them out.

I hope you are as able to use them effectively for yourself as you believe. I don’t doubt that you are, I know you are smart and I expect you will learn to use them well as they develop, but I’m also aware there are risks to you and others from them, and these are mostly lost in the hype and misrepresentation we see.

Simply calling an algorithm “Claude” is a deceit, and not realising that, or ignoring the effect and intent behind it, is problematic. And humans posting its output without announcing it is utterly reprehensible.

3 Likes