Aye, chust sublime, but did she say it in the Gaelic?
I have a grandfather who is far into his 90s, so I’m hoping to get this tech done before he passes on… LOL.
I’m making a lot of headway in future proofing this, so no matter what planet you reincarnate on, you can re-authenticate yourself into the system using zero knowledge proofs.
The Discourse ‘like’ feature has become onerous so forgive me if I don’t dole many out from here on. Value them more highly!
It takes two clicks, with a wait of several seconds for the pop-up, and then another wait to see that I clicked it correctly.
On Firefox Android I used to be able to just click ‘like’ and forget, because the pop-up didn’t stay up and it took the like, but with Brave I have to wait and click a second time, then wait to see that it has happened. I can’t go back to Firefox though.
The state of media and contrived narratives…
Was it Jon Stewart who called out Tucker previously?… Perhaps a spark, or perhaps the media is just that bad that it’s impossible to deny there is something broken.
Nice. This isn’t just the US; it’s the entire world’s oligarch-owned media & governments that Tucker is talking about here:
"Our current orthodoxies won’t last. They’re brain-dead. Nobody actually believes them. Hardly anyone’s life is improved by them. This moment is too inherently ridiculous to continue, and so it won’t.
The people in charge know this, that’s why they’re hysterical and aggressive. They’re afraid. They’ve given up persuasion - they’re resorting to force. But it won’t work. When honest people say what’s true, calmly and without embarrassment, they become powerful. At the same time, the liars who’ve been trying to silence them shrink - and they become weaker. That’s the iron-law of the universe; true things prevail."
You can see this in how governments around the globe are getting much more aggressive in their words, legislation, and actions the past decade.
Through some magic, or more likely admin tweaking, normal ‘like’ service is resumed. Hoorah!
It is back to broken; maybe some JS didn’t load on my last visit.
One-click like on the above
RIP
A few interesting bits in here.
What’s up today is:
Markdown Tutorial - Introduction 1
If x = 4, and two ducks walk out onto the road, splat.
Reasons to look at this tutorial
It’s:
- interactive
- fun
- for all ages
- full of indented sublists
What better would you have to be doing?
- Haven’t you dreamed of escaping from ‘+’, ‘*’, and ‘-’?
- All those years of sub-par forum writing can be washed away; it’s never too late
A six-hash heading. That’s a lot of hash2,3
The Clincher
Don’t forget @neo’s great recent guide on how to use the “hide details” function, too. A great way to let out your verbosity without feeling guilty (you can also just not feel guilty).
1 Private tutoring available on request, only accepts payment in MAID
2 So academic these footnotes
3 Hash sold separately
EDIT: Something I just discovered - if you see a post with some fancy formatting and think “Wait, what? How’d they do that?”, and the post has been edited, you can see the actual pre-markup text by doing this:
- Click on the little editing pencil
- Click on “Raw” in the top right
- Enjoy peeking under the hood
This post has now been edited, so you can test that here
EDIT 2: I nested my sublists, and added a code block. It’s all here, practically.
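And if you don’t feel like digging through the raw view, here’s roughly the kind of markup we’re talking about - a nested sublist, a six-hash heading, and the “hide details” block (the summary text is just a placeholder):

```
- a list item
  - a nested sub-item
    - and one more level down

###### a six-hash heading

[details="Click for the hidden bit"]
All the verbosity you like, tucked away until the reader asks for it.
[/details]
```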
Jerry! Jerry! Jerry!
rip
Go on, Guix hackers, get that bootstrappability
Do you use Guix? I tried Nix a while back but got stuck.
Yes, a couple of years now; it’s very stable, and the documentation is great. When they say “advanced GNU distribution” they make it sound daunting, but actually I only knew how to do extremely basic software installs, removals, and system updates for well over a year, and I’d no issues. The graphical installer is as easy as any other I’ve used, too, and you’ve a choice of 8 or 9 DEs/WMs from the off.
I tried it out initially for the unadulterated commitment to software freedom back when I didn’t really understand what that meant, and I stayed for the documentation culture and the general excellence of the experience.
I recently wanted a package they didn’t have in the repositories, so I went and packaged it myself. It was accepted a few weeks back, so I can say from personal experience that they do a serious job of bridging the gap between new user and active contributor. It happened inexorably, with no big “push” on my side; I just kept poking around the documentation when I had questions, reading a blog here and there.
I never tried Nix, but Guix is fond of and thankful to them for paving the way. The difference between the two, from reading around, is the commitment to software freedom, the commitment to extensive documentation (there’s the manual, but also the Guix Cookbook, which is fabulous and translated into lots of languages, etc.), and the fact that you tie everything together with the Guile implementation of Scheme, a full programming language.
Last (practical) point: if you are running something pacman-based or apt-based, you can have guix (the package manager) on top of that. Here’s a quick description of that; I haven’t done it myself, but it could be only an install away.
Then, to test it out, one of the immediately useful tools you’d have is `guix shell`, which can be used for spinning up lightweight isolated containers. Here’s a little article on that:
https://gexp.no/blog/hacking-anything-with-gnu-guix.html
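To give a flavour of it (just a sketch from my own use; the package names are only examples, not anything from the article), a `guix shell` session goes something like this:

```sh
# One-off environment with Python and NumPy available, without
# touching the host system's own packages:
guix shell python python-numpy -- python3 -c 'import numpy; print(numpy.__version__)'

# Same idea, but fully isolated in a container: only the declared
# packages are visible inside; `exit` and it's all gone.
guix shell --container python python-numpy

# Add --network if the containerised tool needs internet access:
guix shell --container --network curl -- curl -sI https://gnu.org
```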
Edit: the link to the “Installation” page above is a link to the online version of the Guix manual with a description of how installation on a “foreign distro” works, and not a scary link that randomly downloads anything. Just to be clear!
Just kidding: we all know size matters. It’s definitely true for AI models, especially for those trained on text data, i.e., language models (LMs). If there’s one trend that has, above all others, dominated AI in the last five or six years, it is the steady increase in parameter count of the best models, which I’ve seen referred to as Moore’s law for large LMs. The GPT family is the clearest — albeit not the only — embodiment of this fact: GPT-2 was 1.5 billion parameters, GPT-3 was 175 billion, ~100x its predecessor, and rumors have it that GPT-4’s size, although officially undisclosed, has reached the 1 trillion mark. Not an exponential curve but definitely a growing one.
OpenAI was faithfully following the godsend guidance of the scaling laws they themselves discovered in 2020 (which DeepMind later refined in 2022). The main takeaway is that size matters a lot. DeepMind revealed that other variables, like the amount of training data or its quality, also influence performance. But a truth we can’t deny is that we love nothing more than a bigger thing: model size has been the gold standard for heuristically measuring how good an AI system would be.
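For reference, and quoting the rough shapes from memory rather than the exact papers: the 2020 OpenAI result models test loss as a power law in parameter count alone (data assumed not to be the bottleneck), while the 2022 DeepMind refinement (“Chinchilla”) makes loss depend on both parameters and training tokens, with compute-optimal training landing at roughly 20 tokens per parameter:

```latex
% OpenAI (Kaplan et al., 2020): loss as a power law in parameter count N
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}

% DeepMind (Hoffmann et al., 2022, "Chinchilla"): loss in terms of
% parameters N and training tokens D
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad D_{\text{opt}} \approx 20\,N
```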
OpenAI and DeepMind have been making their models bigger over the years in search of hints from the performance graphs, signs in the benchmark results, or whispers from the models themselves, of an otherwise merely hypothetical path toward AGI, the field’s holy grail. They didn’t find what they were looking for. Instead, they got predictable — although, if you ask me, impressive — improvements in language mastery, that sadly don’t reveal any clear direction toward the next stage.
Size has proven, as they predicted, critical, but it seems companies have practically exhausted the “scale is all you need” doctrine. What’s most striking is that the acknowledgment of this new reality doesn’t come from a classical AI proponent or a deep learning critic, but from OpenAI’s CEO, Sam Altman himself.
That era went by fast.