• 0 Posts
  • 36 Comments
Joined 1 month ago
Cake day: September 27th, 2025

  • I don’t share your concerns about the profession. Even supposing for a moment that LLMs did deliver on the promise of making 1 human as productive as 5 humans were previously, that isn’t how for-profit industry has traditionally incorporated productivity gains. Instead, you’ll just have 5 humans producing 25x the output. If code generation becomes less of a bottleneck (which it has been doing for decades as frameworks and tooling have matured), there will simply be more code in the world for the code wranglers to wrangle.

    Maybe, if LLMs get good enough at generating usable code (still a big if for most non-trivial jobs), some people who previously focused on low-level coding concerns will be able to specialize in higher-level concerns like directing an LLM, while others will still be writing the low-level inputs for the LLMs, sort of like how you can write applications today without needing to know the specific ins and outs of your CPU’s instruction set. I’m doubtful that’s around the corner, but who knows.

    But whatever the tools we have are capable of, the output will be bounded by the abilities of the people who operate them, and if you have good tools that are easily replicated, as software tools are, there’s no reason not to try to maximize your output by hiring as many people as you can afford and cranking out as much product as you can.





  • I think if we’re ever going to find an answer to “Why does the universe exist?”, one of the steps along the way will be providing a concrete answer to the simulation hypothesis. Obviously if the answer is “yes, it’s a simulation and we can demonstrate as much,” then the next question becomes “OK, so who or what is running the simulation, and why does that exist?” which, great, now we know a little bit more about the multiverse and can keep on learning new stuff about it.

    Alternatively, if the answer is “no, this universe and the rules that govern it are the foundational elements of reality,” then… well, why this? Why did the big bang happen? Why does it keep expanding like that? Maybe we will find explanations for all of that that preclude a higher-level simulation, and if we do, great, now we know a little bit more about the universe and can keep on learning new stuff about it.


  • Yes, kind of, but I don’t think that’s necessarily a point against it. “Why are we here? / Why is the universe here?” is one of the big interesting questions that still doesn’t have a good answer, and thinking about possible answers to the big questions is one of the ways we push the envelope of what we do know. This particular paper seems like a not-that-interesting result using our current known-to-be-incomplete understanding of quantum gravity, and the claim that it somehow “disproves” the simulation hypothesis is some rank unscientific nonsense that IMO really shouldn’t have been accepted by a scientific journal, but I think the question it poorly attempts to answer is an interesting one.







  • That’s exactly the sentence that made me pause. I could hook up an implementation of Conway’s Game of Life to a Geiger counter near a radioisotope that randomly flipped squares based on detection events, and I think I’d have a non-algorithmic simulated universe. And I doubt any observer in that universe could construct a coherent theory of why some squares seemingly randomly flip using only their own observations; you’d need to understand the underlying mechanics of the universe’s implementation (how radioactive decay works, for one), and those just wouldn’t be available in-universe. The concept itself is inaccessible.
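    That thought experiment is easy to sketch. Here’s a minimal Life step where an external entropy source (a stand-in for the Geiger counter; here just a caller-supplied set of cells) toggles squares outside the rules:

    ```python
    def life_step(live, flips=frozenset()):
        """One Conway's Game of Life generation, then toggle externally-chosen cells.

        `live` is a set of (x, y) live cells. `flips` stands in for Geiger-counter
        detection events: cells toggled by something outside the rules, which is
        what makes the combined system non-algorithmic from the inside.
        """
        counts = {}
        for (x, y) in live:
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if dx or dy:
                        n = (x + dx, y + dy)
                        counts[n] = counts.get(n, 0) + 1
        # standard B3/S23 rule: born with 3 neighbors, survive with 2 or 3
        nxt = {c for c, k in counts.items() if k == 3 or (k == 2 and c in live)}
        return nxt ^ set(flips)  # decay events toggle cells regardless of the rules
    ```

    With an empty `flips` set this is ordinary, fully deterministic Life; feed it real detection events and no amount of in-universe observation explains the extra toggles.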

    If the abstract can get away with that kind of claim, it makes me question the editors. I’ve never heard of the Journal of Holography Applications in Physics; maybe they’re just eager for splashy papers.




  • A poor architect blames their tools. Serverless is an option among many, and it’s good for occasional atomic workloads. And, like many hot new things, it’s built with huge customers in mind and sold to everyone else who wants to be the next huge customer. It’s the architect’s job to determine whether functions are fit for their purposes. Also,

    Here’s the fundamental problem with serverless: it forces you into a request-response model that most real applications outgrew years ago.

    IDK what they consider a “real” application, but plenty of software still operates this way and it works just fine. If you need a lot of background work, or low-latency responses, or scheduled tasks, or whatever, then use something else that suits your needs; it doesn’t all have to be functions all the time.

    And if you have a higher-up that got stars in their eyes and mandated a switch to serverless, you have my pity. But if you run a dairy and you switch from cows to horses, don’t blame the horses when you can’t get milk.






  • Sure have. LLMs aren’t intrinsically bad; they’re just overhyped and used to scam people who don’t understand the technology. Not unlike blockchains. But they are quite useful for natural language querying of large bodies of text. I’ve been playing around with RAG, trying to get a model tuned to a specific corpus (e.g. the complete works of William Shakespeare, or the US Code of Laws) to see if it can answer conceptual questions like “where are all the instances where a character dies offstage?” or “can you list all the times where someone is implicitly or explicitly called a cuckold?” And sure, they get stuff wrong, but it’s pretty cool that they work as well as they do.
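    The retrieval half of that setup can be sketched without any ML libraries at all: chunk the corpus, score chunks against the query (here a toy term-overlap score standing in for a real embedding model), and stuff the winners into the prompt. The `ask_llm` call at the end is hypothetical; swap in whatever model API you’re actually using.

    ```python
    import re

    def chunk_corpus(text, size=40):
        """Split a corpus into fixed-size word windows for retrieval."""
        words = text.split()
        return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

    def retrieve(query, chunks, k=3):
        """Rank chunks by how many query terms they share (a toy stand-in
        for cosine similarity over real embeddings)."""
        terms = set(re.findall(r"\w+", query.lower()))
        overlap = lambda c: len(terms & set(re.findall(r"\w+", c.lower())))
        return sorted(chunks, key=overlap, reverse=True)[:k]

    def build_prompt(query, chunks):
        """Assemble retrieved context plus the question, RAG-style."""
        context = "\n---\n".join(retrieve(query, chunks))
        return f"Context:\n{context}\n\nQuestion: {query}"

    # answer = ask_llm(build_prompt(question, chunks))  # hypothetical model call
    ```

    A real deployment swaps the overlap score for embeddings and a vector index, but the shape is the same: the model only ever sees the query plus whatever the retriever dug up, which is also where the wrong answers tend to come from.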