• 19 Posts
  • 3.48K Comments
Joined 2 years ago
Cake day: March 22nd, 2024

  • Try limiting your PCIe speed to 3.0 (just for now) in the BIOS.

    It’s a long shot, but I had similar issues when I tried to run PCIe too fast over a riser. It’s theoretically possible your motherboard is borked and can’t support the higher PCIe speed of the 9070 (there’s a quick sysfs check at the end of this comment).


    ALSO, update your mobo BIOS!

    Can’t emphasize this enough. If you’re having some weird hardware issue, try updating it.
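
    On the PCIe theory, you can check what speed the link actually negotiated from Linux via the kernel’s standard sysfs attributes. A minimal Python sketch; the device address is a made-up example, so substitute your GPU’s address from lspci:

    ```python
    # Read the PCIe link speed/width a device actually trained at.
    # The address below is hypothetical; find yours with `lspci | grep VGA`.
    from pathlib import Path

    dev = Path("/sys/bus/pci/devices/0000:03:00.0")  # hypothetical GPU address

    for attr in ("current_link_speed", "max_link_speed",
                 "current_link_width", "max_link_width"):
        path = dev / attr
        if path.exists():
            print(f"{attr}: {path.read_text().strip()}")
    ```

    If current_link_speed sits below max_link_speed, or changes between reads, the link isn’t training reliably, which makes capping the speed in the BIOS a reasonable workaround.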


  • If you’re wondering about Fedora vs CachyOS, it comes down to what you do on your PC. And what you’re used to.

    If you want better “preconfiguration” for graphics stuff, CachyOS is the way to go. With Fedora you’ll end up researching and maintaining a whole lot more yourself, while the CachyOS maintainers basically do all that maintenance and config optimization for you.

    But Fedora might be better for a less GPU-focused “workstation” type system.

    Generally, I’d look at the “style” and interests of the distro maintainers. CachyOS is built by a collective of Linux gaming/compute enthusiasts that snowballed into popularity, though it does inherit all the work from Arch. Fedora is a long-standing workstation/server workhorse, effectively a “pre-release” for Red Hat Enterprise Linux.


  • I mean, even as-is, it’s a very useful tool. Especially as the capabilities we have get exponentially cheaper.

    What people don’t get is that AI is about to become a race to the bottom, not to the top. It’s a utility to sift through millions of documents, or run simple bots, or operate work assistants, or makeshift translators or whatever; you know, old-school language modeling. And that’s really neat as the cost approaches “basically free.”

    Basically, imagine running Claude Code on your iPhone, and Claude Code itself not really changing all that much. Imagine the economic implications for the big AI houses.

    As for the marketing, I want some of what those tech execs are smoking.


  • On the contrary, to quote Wikipedia’s article on him:

    https://en.wikipedia.org/wiki/Larry_Sanger

    Since Sanger’s departure from Wikipedia, he has been critical of the project, describing it in 2007 as being “broken beyond repair”.[8] He has argued that, despite its merits, Wikipedia lacks credibility and accuracy due to a lack of respect for expertise. Since 2020, he has also accused Wikipedia of having a left-wing and liberal ideological bias in its articles.[9][10] Sanger’s effort to change Wikipedia was seen by some as part of a right-wing attack on Wikipedia.[11][12]

    Which lines up with external articles written by him that I’m reading, including his take on Grokipedia: https://larrysanger.org/2025/10/grokipedia-a-first-look/

    Of course Wikipedia is biased (probably western/liberal biased), astroturfed, and such. I was taught that in middle school. But calling it something like a CIA misinformation op falls into “perfect is the enemy of good” at best, and sounds more like the efforts of bad actors trying to tear down information sources they can’t control.

    I really don’t like that.

    For all its flaws, Wikipedia is one of the last “free” information oases on Earth. It’s critical to humanity’s survival. It should improve, yes, but we don’t need any more conspiracies undermining it.


  • > So the next consoles would be cloud/streaming consoles only.

    They very well could be.

    The hardware is near-identical, though, or at least it was for PS Now. So the barrier to reusing game-streaming hardware in a physical console is fairly low.

    > I think you’re being quite a bit disingenuous here. AMD hasn’t made a “highest end GPU variant” in a literal decade. They’ve never had a competitor to the Titan cards nor the *90 variants, and with the *80 variant slowly taking over the top-end consumer spec (because the *90 took over the Titan classification), all of this isn’t because of AI. It’s just AMD lagging behind the entire time. And I love AMD, but they’ve never been known for highest end. And Intel has NEVER made a highest end GPU variant. So not sure where that claim is coming from.

    It’s about silicon size to me. Even if a bit behind Nvidia’s mega dies, AMD made “big die” cards consistently: the 6970, 7970, 290, Fiji, Vega 64, the 6900 XT, the 7900 XTX. But the 9000 series is different. The top-end 9070 XT is “only” 356.5 mm² with a 256-bit bus; a mid-range size. The only recent precedent for that is the RX 480, but those were cheaper and sold alongside higher-end GPUs.

    And with Arc Battlemage, Intel allegedly had a bigger die in the works, but canceled it. Presumably because they didn’t think it was financially viable.


    You make fair points. I’m probably panicking and being a little dramatic here… If regular consumer graphics are financially questionable, custom SoCs probably would be too.

    But I still don’t like the trajectory. It feels like AMD/Intel are struggling to even stay alive in the space, while Nvidia seems to think it’s not so important, and I don’t like where that goes.


  • Others have explained it well: splitting calls up into parallel subjobs, and programmatic prompt engineering.
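
    As a rough illustration of both ideas (a minimal sketch; call_llm, the prompt wording, and the worker count are placeholder assumptions, not any particular vendor’s API):

    ```python
    # Sketch of "parallel subjobs" plus programmatic prompt engineering:
    # fan a big task out into many small independent prompts, run them
    # concurrently, then merge the partial results with one final call.
    from concurrent.futures import ThreadPoolExecutor

    def call_llm(prompt: str) -> str:
        # Placeholder: swap in whatever client you actually use
        # (an OpenAI-style endpoint, llama.cpp, etc.).
        raise NotImplementedError

    def summarize_chunk(chunk: str) -> str:
        # The prompt is assembled by code, not typed by a human; that is
        # all "programmatic prompt engineering" really means here.
        return call_llm(f"Summarize this chunk in two sentences:\n\n{chunk}")

    def summarize_corpus(chunks: list[str]) -> str:
        # Each chunk is an independent subjob, so the calls parallelize.
        with ThreadPoolExecutor(max_workers=8) as pool:
            partials = list(pool.map(summarize_chunk, chunks))
        return call_llm("Combine these summaries into one:\n\n" + "\n".join(partials))
    ```

    None of that makes the model itself smarter; it just orchestrates many cheap calls.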

    > And what is the non-theoretical limit of AI?

    Shrug.

    But practically, transformer models are kinda hitting an “innovation” wall. Big companies aren’t taking risks to try and fix (say) the necessity of temperature to literally randomize outputs, or splitting instructions/context/output, or self-correction (like an undo token), or on-the-fly adaptation; anything.
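
    For reference, temperature really is injected randomness at the output layer. A toy sketch of the standard mechanism (the vocabulary and logit values here are made up for illustration):

    ```python
    # Toy temperature sampling: the variety in an LLM's output comes from
    # this random draw over softmax(logits / T). T near 0 approaches greedy
    # argmax; higher T flattens the distribution toward uniform noise.
    import math, random

    def sample_token(logits: dict[str, float], temperature: float = 1.0) -> str:
        scaled = {tok: l / temperature for tok, l in logits.items()}
        m = max(scaled.values())  # subtract the max for numeric stability
        exps = {tok: math.exp(v - m) for tok, v in scaled.items()}
        total = sum(exps.values())
        weights = [exps[tok] / total for tok in exps]
        return random.choices(list(exps), weights=weights)[0]

    print(sample_token({"cat": 2.0, "dog": 1.5, "fish": 0.1}, temperature=0.7))
    ```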

    All this has been explored in research papers, yet they aren’t even trying it at larger scales. They’re simply scaling up what they have, or (in the case of the Chinese labs) focusing on lowering resource usage.

    Basically, corporate LLM development is far more conservative than you’ve been led to believe, and that’s the wall LLMs are smacking into.