• 0 Posts
  • 37 Comments
Joined 4 months ago
Cake day: May 29th, 2024




  • I agree to some extent, as there are plenty of distros that don’t do anything significantly different from each other and don’t need to exist. I also see what you mean about desktop environments. While I think there’s space for all the small exotic window managers that exist, I would say we probably don’t need as many big fully integrated desktop environments as there are now. (Maybe we should have only one aimed at modern hardware and one designed to be lightweight.)

    That being said, there is plenty of duplication of effort within commercial software too. I would argue that if commercial desktop GUIs currently offer a better user experience than Linux desktop environments, it’s more in spite of their development model than because of it; their advantage mostly comes down to companies being able to pay developers to work full time (instead of relying on donations and volunteers).

    There are a couple of reasons I think this:

    • In a “healthy” market economy there need to be many firms offering the same product or service. If only a small number (or, worse, only one) perform the same function, those firms can begin to develop monopolistic power. For closed source software development this necessitates a great deal of duplicated effort.
    • The above point is not hypothetical. Before the rise of libre software there were a ton of commercial unices and mainframe operating systems, all developed mostly independently of each other. Now, at least when it comes to running servers and supercomputers, almost everyone is running the same kernel (or very nearly the same) and some combination of the same handful of userspace services and utilities.
    • Even as there is duplication of effort between commercial firms, there is duplication and wasted effort within them. For an extreme example, look at how many chat applications Google has produced. The same sort of duplication happens any time a UI or a whole application is remade for no reason other than that the people employed somewhere will be fired if they don’t look like they’re working on something new.
    • Speaking of changing applications, how many times has a commercial closed source application gone to shit, been abandoned by the company that maintains it, or had its owning company shut down, necessitating that a new version of the software be built from scratch by a different firm? This wastes not only the developers’ time but also the users’, since they have to migrate.

    Generally I think open source software has a really nice combination of cooperation and competition. The competition encourages experimentation and innovation while the cooperation eliminates duplicated effort (by letting competitors copy each other if they so choose).


  • I vibe with this a lot. I don’t think the movie needed to exist in the first place, and if it did it would probably be better if it were fully animated, but nothing about the trailer provoked any strong emotions in me.

    I’m not going to watch it, but I also didn’t go “wow, this is an insult and a tragedy”.

    I guess I’m happy for all the tiny children that are gonna watch it and probably love it though.



  • This model isn’t “learning” anything in any way that is even remotely like how humans learn. You are deliberately simplifying the complexity of the human brain to make that comparison.

    I do think the complexity of artificial neural networks is overstated. A real neuron is a lot more complex than an artificial one, and real neurons are not simply feed-forward like ANNs (which have to be, because they are trained using back-propagation), but instead have their own spontaneous activity (which kinda implies that real neural networks don’t learn using stochastic gradient descent with back-propagation). But to say that there’s nothing at all comparable between the way humans learn and the way ANNs learn is wrong IMO.
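    To make that contrast concrete, here’s a minimal feed-forward network trained with stochastic gradient descent and back-propagation (a rough sketch in NumPy; the toy XOR task, layer sizes, and learning rate are just my own illustration). Notice that activity only ever flows forward through the layers and the error signal only ever flows backward:

    ```python
    import numpy as np

    # Tiny feed-forward network: 2 inputs -> 8 tanh units -> 1 sigmoid output.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

    W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
    W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
    lr = 0.5

    for _ in range(5000):
        # Forward pass: activity flows strictly input -> hidden -> output.
        h = np.tanh(X @ W1 + b1)
        out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

        # Backward pass: the error signal flows strictly output -> input.
        d_out = out - y                      # gradient for sigmoid + cross-entropy
        d_h = (d_out @ W2.T) * (1.0 - h**2)  # back-propagate through tanh

        # Gradient descent step (full batch here for simplicity).
        W2 -= lr * (h.T @ d_out) / len(X); b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * (X.T @ d_h) / len(X);   b1 -= lr * d_h.mean(axis=0)

    print(out.round(2))  # approaches [[0], [1], [1], [0]]
    ```

    Real neural tissue, with its recurrence and spontaneous activity, doesn’t fit this strictly one-directional picture at all.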

    If you read books such as V.S. Ramachandran and Sandra Blakeslee’s Phantoms in the Brain or Oliver Sacks’ The Man Who Mistook His Wife For a Hat you will see lots of descriptions of patients with anosognosia brought on by brain injury. These are people who, for example, are unable to see but also incapable of recognizing this inability. If you ask them to describe what they see in front of them they will make something up on the spot (in a process called confabulation) and not realize they’ve done it. They’ll tell you what they’ve made up while believing that they’re telling the truth. (Vision is just one example; anosognosia can manifest in many different cognitive domains.)

    It is V.S. Ramachandran’s belief that there are two processes at work in the brain: a confabulator (a “yes-man”, so to speak) and an anomaly detector (a “critic”). The yes-man’s job is to offer up explanations for sensory input that fit within the existing mental model of the world, whereas the critic’s job is to advocate for changing the world-model to fit the sensory input. In patients with anosognosia something has gone wrong in the connection between the critic and the yes-man in a particular cognitive domain, and as a result the yes-man is the only one doing any work. Even in a healthy brain you can see the effects of the interplay between these two processes, such as in the placebo effect and in hallucinations brought on by sensory deprivation.

    I think ANNs in general and LLMs in particular are similar to the yes-man process, but lack a critic to go along with it.

    What implications does that have for copyright law? I don’t know. Real neurons in a petri dish have already been trained to play games like DOOM and control the yoke of a simulated airplane. If they were instead somehow trained to draw pictures, what would the legal implications of that be?

    There’s a belief that laws and political systems are derived from some sort of deep philosophical insight, but I think most of the time they’re really just whatever works in practice. So, what I’m trying to say is that we can just agree that what OpenAI does is bad and should be illegal without having to come up with a moral imperative that forces us to ban it.


  • It’s going to get harder and harder to do that as cellphones get better, though.

    iPhones already have a satellite SOS feature which works worldwide, and are starting to roll out satellite texting for non-emergency use. There are a few Android models slated to do the same, and it’s only a matter of time before most phones can do this.

    There are plenty of phones that are waterproof (or rated for submersion in 5 meters of water for 30 minutes or whatever) and that’s only going to become more common too.

    My phone lasts for about 2 days on a charge with how much I use it, and I charge it every night. That’s only going to get better as battery technology improves (the trend of spending every battery improvement on making phones thinner, rather than longer-lasting, has actually somewhat reversed in recent years).

    So, in a classic horror movie scenario with 5 or so people, they’d need a reason why every single person’s phone is out of charge or broken. And even if the protagonists can’t use their phones to get themselves out of the situation (because the phones are broken or whatever), you still need to explain how they got into that situation in the first place when they have offline maps and GPS navigation. That’s not as big of a problem, but it eliminates “they got lost” as a premise for why they’re in some spooky woods or wherever.

    It seems to me that you’d either need to set the story in an abandoned mine or make the antagonist explicitly supernatural.




  • CRI is defined by how closely a light source’s color rendering matches that of an ideal black body glowing at a specific temperature. So, for a light source with a 4000 K color temperature, its CRI describes how closely its rendering matches that of an object that’s been heated to 4000 K.

    Because incandescent bulbs emit light by heating a filament, by definition they have a CRI of 100, and it’s impossible to do any better. But the emission curve of incandescent lights doesn’t actually resemble that of sunlight at all (sorry for the reddit link). The sun is much hotter than any incandescent filament, and its light is filtered by our atmosphere, resulting in a much flatter, more gently sloping emission curve versus the incandescent curve, which is extremely lopsided towards the red.
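    If you want to see where that lopsidedness comes from, here’s a quick sketch using Planck’s black-body law (NumPy; the 2700 K filament and 5778 K solar-surface temperatures are my round numbers, and this ignores the atmospheric filtering mentioned above):

    ```python
    import numpy as np

    # Planck's law: spectral radiance of an ideal black body at temperature T.
    h = 6.626e-34   # Planck constant, J*s
    c = 2.998e8     # speed of light, m/s
    k = 1.381e-23   # Boltzmann constant, J/K

    def planck(wavelength_nm, T):
        lam = wavelength_nm * 1e-9
        return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

    visible = np.linspace(400, 700, 7)  # violet to red, in nm
    for T in (2700, 5778):  # roughly: incandescent filament vs. solar surface
        B = planck(visible, T)
        print(T, np.round(B / B.max(), 2))  # normalized so the shapes compare
    # 2700 K climbs steeply toward red (its true peak is out in the infrared),
    # while 5778 K is comparatively flat across the whole visible range.
    ```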

    As you can see in the reddit link above, there are certain high end LED bulbs that do a much better job replicating noonday sunlight than incandescents do. And that flatter emission profile probably provides better color rendering (in terms of being able to distinguish one color from another) than the incandescent ramp.

    Now, whether or not you want your bulbs to look like the noonday sun is another matter. Maybe you don’t want to disrupt your sleep schedule and you’d much rather their emission resemble the sunset or a campfire (though in that case many halogen and high output incandescent lamps don’t do a great job either). Or maybe you’re trying to treat seasonal depression and extra sunlight is exactly what you want. But in any case I think CRI isn’t a very useful metric (another reddit link).



  • There is already a Chinese EV that uses a sodium-ion battery, the JMEV EV3.

    It’s a tradeoff of range vs price. The EV3 only has 155 miles of range, but thanks in part to its sodium-ion battery it costs only $9220 new, a price that will probably drop even more as more sodium-ion plants come online and economies of scale kick in.

    EDIT: even if your commute is 40 minutes long, driving 60 MPH the entire way, that range is enough to get you to work and back using a little more than half your charge (see the math below). Given that it’s also generally cheaper to charge an EV than to pump gas, and maintenance costs are lower, I think there’s absolutely a market for such a car.
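    To spell out that math (a trivial check; the numbers are the ones from above):

    ```python
    range_miles = 155           # JMEV EV3 rated range
    one_way = (40 / 60) * 60    # 40 minutes at 60 MPH = 40 miles each way
    round_trip = 2 * one_way    # 80 miles
    print(round_trip / range_miles)  # ~0.52, a little more than half a charge
    ```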