• 0 Posts
  • 26 Comments
Joined 4 months ago
Cake day: May 29th, 2024




  • I agree to some extent, as there are plenty of distros that don’t do anything significantly different from each other and don’t need to exist. I also see what you mean about desktop environments. While I think there’s space for all the small exotic window managers that exist, I would say we probably don’t need as many big fully integrated desktop environments as there are now. (Maybe we should have only one aimed at modern hardware and one designed to be lightweight.)

    That being said, there is plenty of duplication of effort within commercial software too. I would argue that if commercial desktop GUIs currently offer a better user experience than Linux desktop environments, it’s more in spite of their development model than because of it, and their advantage mostly comes down to companies being able to pay developers to work full time (instead of relying on donations and volunteers).

    There are a couple reasons I think this:

    • In a “healthy” market economy there need to be many firms offering the same product / service. If only a small number (or, worse, only one) perform the same function, those firms can begin to develop monopolistic power. For closed source software development this necessitates a great deal of duplicated effort.
    • The above point is not hypothetical. Before the rise of libre software there were a ton of commercial unices and mainframe operating systems that were mostly developed independently of each other. Now, at least when it comes to running servers and supercomputers, almost everyone is running the same kernel (or very nearly the same) and some combination of the same handful of userspace services and utilities.
    • Even as there is duplication of effort between commercial firms, there is also duplicated and wasted effort within them. For an extreme example, look at how many chat applications Google has produced; the same sort of duplication happens any time a UI or whole application is remade for no reason other than that the people employed somewhere will be fired if they don’t look like they’re working on something new.
    • Speaking of changing applications, how many times has a commercial closed source application gone to shit, been abandoned by the company that maintains it, or had its owning company shut down, necessitating a new version of the software be built from scratch by a different firm? This wastes not only the time of the developers but also the users who have to migrate.

    Generally I think open source software has a really nice combination of cooperation and competition. The competition encourages experimentation and innovation while the cooperation eliminates duplicated effort (by letting competitors copy each other if they so choose).


  • I vibe with this a lot. I don’t think the movie needed to exist in the first place, and if it did it would probably be better if it were fully animated, but nothing about the trailer provoked any strong emotions in me.

    I’m not going to watch it but I also didn’t go “wow this is an insult and a tragedy”.

    I guess I’m happy for all the tiny children that are gonna watch it and probably love it though.



  • This model isn’t “learning” anything in any way that is even remotely like how humans learn. You are deliberately simplifying the complexity of the human brain to make that comparison.

    I do think the complexity of artificial neural networks is overstated. A real neuron is a lot more complex than an artificial one, and real neurons are not simply feed-forward like ANNs (which have to be, because they are trained using back-propagation), but instead have their own spontaneous activity (which kinda implies that real neural networks don’t learn using stochastic gradient descent with back-propagation). But to say that there’s nothing at all comparable between the way humans learn and the way ANNs learn is wrong IMO.
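
    For reference, here is a minimal sketch of what I mean by “feed-forward and trained with back-propagation”, in plain numpy on the toy XOR problem (the architecture, seed, and numbers are just illustrative):

    ```python
    import numpy as np

    # Tiny feed-forward network: 2 inputs -> 8 hidden units -> 1 output,
    # trained with gradient descent + back-propagation on XOR.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
    W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for step in range(10000):
        # Forward pass: activity only flows input -> hidden -> output.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: the error signal flows the other way, which is the
        # part that has no obvious analogue in biological neurons.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

    print(out.round(2))  # should approach [0, 1, 1, 0]
    ```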

    If you read books such as V.S. Ramachandran and Sandra Blakeslee’s Phantoms in the Brain or Oliver Sacks’ The Man Who Mistook His Wife for a Hat you will see lots of descriptions of patients with anosognosia brought on by brain injury. These are people who, for example, are unable to see but also incapable of recognizing that inability. If you ask them to describe what they see in front of them they will make something up on the spot (in a process called confabulation) and not realize they’ve done it. They’ll tell you what they’ve made up while believing that they’re telling the truth. (Vision is just one example; anosognosia can manifest in many different cognitive domains.)

    It is V.S. Ramachandran’s belief that there are two processes at work in the brain: a confabulator (or “yes-man”, so to speak) and an anomaly detector (or “critic”). The yes-man’s job is to offer up explanations for sensory input that fit within the existing mental model of the world, whereas the critic’s job is to advocate for changing the world-model to fit the sensory input. In patients with anosognosia something has gone wrong in the connection between the critic and the yes-man in a particular cognitive domain, and as a result the yes-man is the only one doing any work. Even in a healthy brain you can see the effects of the interplay between these two processes, such as with the placebo effect and in hallucinations brought on by sensory deprivation.

    I think ANNs in general and LLMs in particular are similar to the yes-man process, but lack a critic to go along with it.

    What implications does that have on copyright law? I don’t know. Real neurons in a petri dish have already been trained to play games like DOOM and control the yoke of a simulated airplane. If they were trained instead to somehow draw pictures what would the legal implications of that be?

    There’s a belief that laws and political systems are derived from some sort of deep philosophical insight, but I think most of the time they’re really just whatever works in practice. So, what I’m trying to say is that we can just agree that what OpenAI does is bad and should be illegal without having to come up with a moral imperative that forces us to ban it.




  • While I agree that it’s somewhat bad that there is no distinction between lossless and lossy jxl in the file extension, I think it’s really not a big deal compared to the present situation with jpg/png.

    The reason is that if you download a png file you have no idea if it’s been converted from jpg, if it’s a screenshot of a jpg, or if it’s been subjected to lossy re-encoding by a tool or a website upload process.

    The only thing you can really do to try and see if the file you’ve downloaded has suffered encoding loss is to do an image search on it and see if there are any better quality versions out there. You’d do the exact same thing with a jxl file.





  • But the fact that even just a single rail car holds 360 commuters, equivalent to 180 cars or more on the highway, changes the math completely.

    Absolutely. The fact that 3 million people pass through Shinjuku station every day is a testament to that.

    If all of those people lived in a city in the US it would be the country’s third largest, behind NY and LA. (If we’re going by the entire urban area instead of just within city limits it would be the 20th, just ahead of the Baltimore-Columbia-Towson metropolitan statistical area.)

    All in a space that’s smaller than most highway interchanges.

    And that’s not even using two-level train cars (which is where your figure for 360 people per train car comes from I think?).


  • While things like merging movements and so on are part of the story, they’re not the whole story.

    You see, saying “traffic jams are caused by merging mistakes and so on” kinda implies that if everyone drove perfectly, a highway lane could carry infinitely many cars. In actuality, a highway lane has a finite capacity determined by the length of the vehicles traveling on it, the length of the gap between them (indirectly determined by how fast they can start and stop), and the speed they’re moving.

    There are finite limits on gap widths and speed determined by physics and geometry. As the system approaches these limits it becomes less and less able to deal with small disruptions. In other words, as more cars move onto a freeway, a traffic jam becomes more and more likely. The small disruption that is perceived as the cause was really just the nucleation point for a phase change that the system was already poised to transition through. If it weren’t that event, something else would trigger it.
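
    As a back-of-the-envelope sketch of that capacity limit (the constants below are made-up but plausible assumptions, and this is the simplest possible model: every driver keeps a fixed reaction-time gap behind the car ahead):

    ```python
    # Rough lane-capacity model: vehicles per hour that can pass a point at a
    # given speed, assuming each driver keeps a fixed reaction-time gap.
    # The constants are illustrative assumptions, not measured values.
    CAR_LENGTH_M = 4.5   # metres of road taken up by the vehicle itself
    HEADWAY_S = 1.5      # seconds of following gap each driver maintains

    def lane_capacity(speed_kmh: float) -> float:
        v = speed_kmh / 3.6                      # m/s
        spacing = CAR_LENGTH_M + HEADWAY_S * v   # metres consumed per vehicle
        return 3600 * v / spacing                # vehicles per hour

    for kmh in (20, 40, 60, 80, 100, 120):
        print(f"{kmh:>3} km/h -> {lane_capacity(kmh):4.0f} vehicles/hour")
    # The curve flattens out: no matter how fast everyone drives, a single lane
    # can never carry more than 3600 / HEADWAY_S = 2400 vehicles per hour here.
    ```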

    It is interesting to note that once a highway has transitioned from smooth flow to traffic jam its capacity is massively reduced, which you can see in the graphs in the above link. Another thing to note is that the speed vs. volume graph, if you flip it upside down, resembles a cost / demand curve from economics, where volume is the demand and time spent commuting (the inverse of speed) is the cost. If you do this you see something quite odd: the curve curls up around itself and goes backwards.

    This is less like a normal economic situation (the more people use a resource, the more they have to pay; the fewer people use it, the less they have to pay) and more like a massively multiplayer version of the prisoner’s dilemma. For a while the cost increases only slightly with growing demand, until a certain threshold where each additional actor making a transaction has a chance to massively increase the cost for everyone, even if consumption is reduced. Actors can choose to voluntarily pay a higher time cost (wait before getting on the freeway) to avoid this, but again, it’s the prisoner’s dilemma. People can just go, trigger a traffic jam anyway, and you’ll still have to sit through it plus all the time you waited trying to prevent it.

    Self driving cars are often described as a way to eliminate traffic jams, but they don’t change this fundamental property of how roadways work. It’s true that capacity could potentially be increased somewhat by decreasing the gap between cars, since machines have faster reflexes than humans (though I’m skeptical of how much the gap can really be decreased; is every car going to weigh the same at all times? Is every car going to have tires and brakes in identical condition? Is the condition of the asphalt going to be identical at all times and across every part of the roadway? All of these things imply a great deal of variability in stopping distance, which implies a wide safety gap), but the prisoner’s dilemma problem remains. The biggest thing that self driving cars could actually do to alleviate traffic jams would be to not enter a highway until traffic volumes were at a safe level. This can also be accomplished with a traffic volume sensor and a stop light on highway on-ramps.
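
    A toy sketch of that ramp-metering idea (the threshold and queue limit are made-up numbers, purely for illustration):

    ```python
    # Toy ramp-metering logic: hold cars at the on-ramp light while the mainline
    # is near the flow level where jams become likely. Numbers are illustrative.
    CRITICAL_FLOW = 1800   # vehicles/hour/lane, assumed jam-formation threshold
    MAX_RAMP_QUEUE = 30    # don't let the ramp back up onto surface streets

    def ramp_signal(mainline_flow: float, ramp_queue: int) -> str:
        if mainline_flow < CRITICAL_FLOW or ramp_queue > MAX_RAMP_QUEUE:
            return "green"   # release a car onto the freeway
        return "red"         # make drivers pay the small time cost up front

    print(ramp_signal(mainline_flow=1500, ramp_queue=3))   # green
    print(ramp_signal(mainline_flow=1950, ramp_queue=3))   # red
    ```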

    Of course trains, on top of having a way higher capacity than a highway lane, don’t suffer from any of this prisoner’s dilemma stuff. If a train car is full and you have to wait for the next one, that’s equivalent to being stopped at a highway on-ramp. People can’t force their way into a train and make it run slower for everyone (well, unless they do something really crazy like stand in the door and stop the train from leaving).



  • CRI is defined by how closely a light source matches the spectral emission of an object glowing at a specific temperature. So, for a light source with a 4000 K color temperature, its CRI describes how closely its emission matches that of an object that’s been heated to 4000 K.

    Because incandescent bulbs emit light by heating a filament, by definition they have a CRI of 100, and it’s impossible to do any better than that. But the emission curve of incandescent lights doesn’t actually resemble that of sunlight at all (sorry for the reddit link). The sun is much hotter than any incandescent bulb and its light is filtered by our atmosphere, resulting in a much flatter, more gently sloping emission curve, versus the incandescent curve, which is extremely lopsided towards the red.
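
    You can see why by comparing blackbody emission (Planck’s law) for a ~2700 K filament against a ~5800 K blackbody, which is a rough stand-in for sunlight if you ignore atmospheric filtering (a simplified sketch, not a real spectral measurement):

    ```python
    import numpy as np

    # Planck's law: relative spectral radiance of a blackbody at temperature T.
    # ~2700 K approximates an incandescent filament, ~5800 K the sun's surface.
    h = 6.626e-34   # Planck constant, J*s
    c = 2.998e8     # speed of light, m/s
    k = 1.381e-23   # Boltzmann constant, J/K

    def planck(wavelength_nm, T):
        lam = wavelength_nm * 1e-9
        return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

    visible = np.array([450, 550, 650])   # blue, green, red (nm)
    for T in (2700, 5800):
        radiance = planck(visible, T)
        print(T, "K  blue : green : red =", np.round(radiance / radiance.max(), 2))
    # The 2700 K curve is heavily tilted towards red; the 5800 K curve is nearly flat
    # across the visible range.
    ```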

    As you can see in the above link, there are certain high end LED bulbs that do a much better job replicating noonday sunlight than incandescents. And that flatter emission profile probably provides better color rendering (in terms of being able to distinguish one color from another) than the incandescent ramp.

    Now, whether or not you want your bulbs to look like the noonday sun is another matter. Maybe you don’t want to disrupt your sleep schedule and you’d much rather their emissions resemble a sunset or a campfire (though in that case many halogen and high output incandescent lamps don’t do a great job either). Or maybe you’re trying to treat seasonal depression and extra sunlight is exactly what you want. But in any case I think CRI isn’t a very useful metric (another reddit link).




  • There is already a Chinese EV that uses a sodium-ion battery, the JMEV EV3.

    It’s a tradeoff of range vs. price. The EV3 only has 155 miles of range, but thanks in part to its sodium-ion battery it costs only $9220 new, a price that will probably drop even more as more sodium-ion battery plants come online and economies of scale kick in.

    EDIT: Even if your commute is 40 minutes long, driving 60 MPH the entire way, that range is enough to get you to work and back using a little more than half your charge. Given that it’s also generally cheaper to charge an EV than to pump gas, and there are lower maintenance costs, I think there’s absolutely a market for such a car.
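
    Quick sanity check on that, assuming the rated 155-mile range holds up at highway speed:

    ```python
    # 40-minute commute at 60 MPH, each way, against the EV3's rated range.
    range_miles = 155
    one_way = 60 * (40 / 60)        # 40 miles each way
    round_trip = 2 * one_way        # 80 miles per day
    print(f"{round_trip / range_miles:.0%} of a full charge")  # ~52%
    ```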