Add the OOMs and they can outpace humans - see here.
Take OpenBSD, one of the most secure (if not THE most secure) operating systems written by humans.
The latest Claude model found critical vulnerabilities in it - some dating back 20 YEARS.
People on their MacBook Pros will have this capability within a year.
tech-insider.org/anthropic-claude-mythos-zero-day-project-glasswing-2026/
Or more specifically, Gemma 4.
I read this when it came out - 2023/24? - and I thought it was nonsense.
And yet we are in 2026/27.
The OOMs he described have been met (later than thought, but only barely). And we have Microsoft, Google, and OpenAI not only buying gigawatts of power but building their own power stations for their datacentres.
Now, at this point, I must provide a contrary voice: The Enshittifinancial Crisis.
Again, a very long read, which I believe to be entirely true.
But even based on the above, I think the forward motion genie cannot be put back in the bottle.
I think the next 10 years will be the most turbulent in history.
using Claude
I recall @Bozza talking about Claude in the tribe for this year's April Fools' joke, and @Vermillion-Rx talking about not trusting it in the same tribe.
I don't think I'll be trusting any of those things with anything important, ever. Maybe for speeding up research, but I still would want to verify results myself.
Fuck, I remember reading about some law firm using AI (forget which one) to research a case, and it just made shit up. Cost them millions to fix things after they trusted it and ran with its results.
I think there are legitimate complaints about how he conducts his image and some of the things he says, but I do follow him on X.
I would say the overwhelming majority of the videos he puts out (minus him gloating about his wealth and calling everyone else a loser) are actually about men improving their lives and how to do it.
He objectively has more to offer than a lot of red pill authors out there, in my opinion. I haven't tried his Real World community yet, but I plan to check it out at some point for a month or so, just for curiosity's sake.
Most of the content I see from him is objectively inspirational or useful
Look, I'm somewhat open to using Claude maybe at some point but I think them having the AI code itself is eventually going to be a serious issue.
All I am saying is I already see the writing on the wall with Claude.
I honestly barely trust it now and I don't trust it long term. The CEO appears to be much more concerned with progress than ethics.
Even though Grok isn't as advanced or as good as Claude atm, I trust it a lot more, and I expect it to get much better in the AI race, especially given how young it is compared to its longer-established competitors.
I'd highly recommend reading through Situational Awareness
It's a very long read. And while he may have been a tad optimistic, his predictions aren't really that far out from what has been achieved.
I sit somewhere in the middle on this. I see a strong use case for LLMs but I don't fully trust them. Both in terms of output quality, and more critically, who controls the input.
The input side worries me more. LLMs have a potential for data farming on a scale beyond anything we've previously seen. Everything you share is, in most cases, logged, retained, and used. The T&Cs are long; almost no one reads them.
Several years ago I wrote a fairly detailed analysis predicting that LLMs would eventually "break free" of the monopoly held by a small number of well-capitalised organisations. In the early days, the compute and CapEx required to train and serve these models was enormous. Only the likes of OpenAI, Microsoft, and Google backed by billions in VC could afford it. [1]
But hardware improves. Algorithmic efficiency improves. What required a data centre in 2021 requires an enthusiast's desk in 2025.
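The scale of that shift is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, estimating only the memory needed to hold a model's weights (parameter count × bits per weight); it deliberately ignores KV-cache and activation overhead, so real usage runs somewhat higher:

```python
# Rough weights-only memory estimate for running an LLM locally.
# Ignores KV cache and activations, so real-world usage is higher.

def weights_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate RAM/VRAM (decimal GB) needed just to hold the weights."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 70B-parameter model in full fp16: out of reach for consumer GPUs.
print(round(weights_gb(70, 16)))  # 140 GB
# The same model quantized to 4-bit: fits in 64 GB of unified memory.
print(round(weights_gb(70, 4)))   # 35 GB
# An 8B model at 4-bit runs on almost any recent laptop.
print(round(weights_gb(8, 4)))    # 4 GB
```

The same arithmetic that demanded a multi-GPU server at 16-bit precision lands on a single enthusiast machine once quantization cuts the footprint by 4x.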
The only guardrails that currently exist are those imposed by gatekeepers who have the capital. As open-source compute catches up, those guardrails become increasingly meaningless.
This matters for two reasons. The first is access: ordinary people can now use genuinely powerful AI without surrendering their data to a corporation. Tools like LM Studio let you run models comparable to GPT-4 locally, on consumer hardware, with no data leaving your machine and no corporate filters applied.
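For context, LM Studio (like several local runners) serves an OpenAI-compatible chat API on localhost, by default at port 1234. A minimal stdlib-only sketch, assuming that default endpoint with a model already loaded; the `"local-model"` name is a placeholder, as local servers typically ignore or remap it:

```python
import json
import urllib.request

LOCAL_URL = "http://localhost:1234/v1/chat/completions"  # LM Studio default

def build_chat_request(prompt: str, model: str = "local-model") -> bytes:
    """Build an OpenAI-style chat-completion request body."""
    return json.dumps({
        "model": model,  # placeholder; local servers often ignore this
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }).encode("utf-8")

def ask_local(prompt: str) -> str:
    """Send the prompt to the local server; nothing leaves the machine."""
    req = urllib.request.Request(
        LOCAL_URL,
        data=build_chat_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_local("Summarise the trade-offs of local inference."))
```

The point of the sketch is the privacy property: the request never touches a third-party server, so there is nothing for anyone upstream to log or retain.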
The second is more unsettling. Those guardrails only exist because the companies control more compute than anyone else. Once equivalent capability is available open-source - and we're clearly there now, with models like Llama and Mistral reaching near-parity with first-gen frontier models [2] [3] - they simply don't apply. Anyone can remove or ignore them.
The time delay between a frontier model dropping and an open-source equivalent reaching normies has gone from years to months, and it's still shrinking. [4]
I strongly suspect that within the next year or two, open-source compute capacity may even exceed what the likes of OpenAI or Anthropic can attain, even with hundreds of billions in CapEx.
Anthropic's decision not to release the weights of their most capable model, reportedly because it's considered too powerful to release openly [5] shows the bind perfectly. Release it and lose control entirely. Withhold it and the open-source community replicates it within a year regardless.
Neither path leads somewhere obviously safe. The genie is out of the bottle.
@Vermillion-Rx You need to be worried about whatever the government have going on that escapes the sandbox.

