🍜

  • 1 Post
  • 38 Comments
Joined 1 year ago
Cake day: July 6th, 2023

  • Much has been said about this already, but I’m really annoyed by how they repeatedly try to twist this into a purely technical question, along the lines of:

    “This is better for privacy than how it used to be. Here are 20 reasons why, and we have good scientists who say it offers good privacy. Do you have any technical arguments against these privacy claims? We welcome a discussion about possible flaws in the reasoning of the scientists/engineers in terms of assuring privacy.”

    To me, that is a secondary question. More important:

    • Don’t introduce tracking features against my will, with only an opt-out (ironically, while explaining in the same post why opt-outs suck)
    • Make room for a discussion about tracking-based advertising: whether we want it on the internet at all (IMHO no) and whether Firefox, of all browsers, should support it (IMHO no)
    • If they go this way, who is supposed to keep using their shit browser after this? The only reason left is that it’s “the reliable other/good browser”. People who don’t care about these questions are using Chrome anyway.

    This is such a self-destructive move, it’s painful to watch.


  • TLDR: Just using an app on your laptop with good filters (newsbeuter!) might be all you need.

    IMHO, RSS readers without decent filters are useless. If you subscribe to even 10 or 20 feeds, you will be flooded with articles and have no chance of getting through them all. Unfortunately, that already rules out 95% of the readers out there.

    A long time ago, I had a TinyTinyRSS setup running. TTRSS offers amazing filters and sorting mechanisms, which made it stand out. For example, I subscribed to several dozen job-recruiting feeds and filtered out everything that didn’t match. Adding new filters was easy, too: if you see a lot of job posts for “Twist dancer” and that is not your thing, you just filter them out, and the feed gets better over time.

    At some point, though, TTRSS changed its deployment setup (I think to Docker at the time), and I couldn’t be bothered to read up on the new installation. I also heard that the developer is a Nazi, though this may well be wrong. Together, that was enough for me to drop it, and I left the RSS game for a while.

    A few months ago I started again, but this time just on my laptop. It turns out the main advantage of a server-based setup is being able to read on mobile, which I don’t do much anyway. First I tried Liferea, which kind of worked, but I couldn’t wrap my head around its filter mechanism: it’s supposed to work, but I couldn’t figure out which snippet, in which exact format, goes where. The documentation and error logs are poor, and after suffering for 2–3 hours I gave up. It turns out, though, that Liferea is mostly just a GUI for newsbeuter, and that is where I am now. The filter language is awkward, especially on an older version that doesn’t support the nicer syntax yet (I use Debian, btw). But it works, and I’m happy with it now!
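    To give an idea of that filter language: the “Twist dancer” kill-filter described above looks roughly like this in a newsboat/newsbeuter config (written from memory of newsboat’s documentation, so treat it as a sketch; the wildcard and pattern are placeholders):

    ```
    # Drop articles whose title matches, across every feed ("*");
    # put a feed URL in place of "*" to scope the rule to one feed.
    ignore-article "*" "title =~ \"Twist dancer\""
    # "display" hides matches but keeps them in the database;
    # "download" discards them outright.
    ignore-mode "display"
    ```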

    Somewhat beside your question: many websites don’t bother offering RSS feeds anymore these days, or have even removed them to push people onto their ad-infested pages. Whichever reader you pick, make sure it easily supports custom, home-made RSS feeds. I wrote a little Python script using BeautifulSoup and FeedGenerator to build my own feeds in such cases, and newsbeuter includes them easily. There is also this project for that job:

    https://git.sr.ht/~ghost08/ratt

    but I didn’t look into it in detail.
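    The scrape-a-page-into-a-feed idea above can be sketched in a few lines. My actual script uses BeautifulSoup and FeedGenerator; this version swaps them for the standard library’s html.parser and xml.etree so it runs with no dependencies. The sample HTML and all names are made up for illustration — in practice you would fetch the page with urllib or requests:

    ```python
    import xml.etree.ElementTree as ET
    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        """Collect (title, url) pairs from every <a href="..."> on the page."""
        def __init__(self):
            super().__init__()
            self.items = []
            self._href = None
            self._text = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self._href = dict(attrs).get("href")
                self._text = []

        def handle_data(self, data):
            if self._href is not None:
                self._text.append(data)

        def handle_endtag(self, tag):
            if tag == "a" and self._href:
                title = "".join(self._text).strip()
                if title:
                    self.items.append((title, self._href))
                self._href = None

    def build_rss(title, link, items):
        """Wrap (title, url) pairs in a bare-bones RSS 2.0 document."""
        rss = ET.Element("rss", version="2.0")
        channel = ET.SubElement(rss, "channel")
        ET.SubElement(channel, "title").text = title
        ET.SubElement(channel, "link").text = link
        ET.SubElement(channel, "description").text = "Home-made feed for " + title
        for item_title, item_url in items:
            item = ET.SubElement(channel, "item")
            ET.SubElement(item, "title").text = item_title
            ET.SubElement(item, "link").text = item_url
        return ET.tostring(rss, encoding="unicode")

    # A literal string keeps the sketch self-contained.
    sample_html = """<html><body>
    <h1>Posts</h1>
    <a href="https://example.com/post-1">First post</a>
    <a href="https://example.com/post-2">Second post</a>
    </body></html>"""

    parser = LinkExtractor()
    parser.feed(sample_html)
    feed_xml = build_rss("Example site", "https://example.com", parser.items)
    ```

    Write `feed_xml` to a local file and point the reader at it (newsbeuter accepts `file://` URLs in its urls list), and the site behaves like any other subscription.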


  • “Does it?”

    Yes, in the sense that “thing moves around and does stuff” becomes more predictable if you assume a certain degree of consciousness. That is simpler than “thing is at this position now, was at a different position before, was at yet another position before that”: you reduce some of the complexity and unpredictability by introducing an explanation for these changes in world state. So far, the assumption that other humans and animals have some consciousness has worked well for me, and at least I’m not aware of any striking evidence that would cast doubt on it.

    “The problem with this isn’t that it’s literally unprovable”

    Yes, that’s a problem, but it’s closely related to the first one. It’s actually quite hard to “prove” anything about the real world. In the case of other humans’ and animals’ consciousness, however, the evidence points that way (at least for me). The evidence in the case of “AI” is different: for example, they seem to have no awareness of time and no awareness of the world beyond the limited context of a conversation. Aside from a fancy marketing term suggesting something akin to living beings is involved, what we currently see are admittedly impressive programs that run on statistics, and I don’t need to assume any “consciousness” to explain what they do.


  • Welcome to radical constructivism :) The question of whether other people or cats can experience emotions is in fact a problem people have thought about quite a lot. The answers are not very satisfactory, but one way to approach it (as some constructivists would) is that assuming they do have consciousness simplifies your world model. In the case of “AI”, though, we have good alternative explanations for its behavior and don’t need to assume it can experience anything.

    The other important bit is that not assuming some phenomenon exists (e.g., “AI” can experience emotions) unless proven otherwise is the basis of modern (positivistic) science.



  • EFF does some good stuff elsewhere, but I don’t buy this. You can’t break this problem down into small steps and then show for each step that it is fine in isolation, while ignoring the overall effect. A simple example from a different area to make the case (I came up with this in two minutes, so it’s not perfect, but you could flesh it out):

    Step 1: Writing an exploit is not a problem, because it’s necessary that e.g., security researchers can do that.

    Step 2: Sending a database request is not a problem, because if we forbid it, the whole internet will break.

    Step 3: Receiving freely available data from a database is not a problem, because otherwise the internet will break.

    Conclusion: We can’t say that hacking into someone else’s database is a problem.

    What is especially telling about the “AI” “art” case: The major companies in the field are massively restrictive about copyright elsewhere, as long as it’s the product of their own valuable time (or stuff they bought). But if it’s someone else’s work, apparently it’s not so important to consider their take on copyright, because it’s freely available online so “it’s their own fault to upload it lol”.

    Another issue is the chilling effect: I, for one, have become more cautious about sharing some of my work on the internet, specifically because I don’t want it fed into “AI”s. I want to share it with other humans, but not with exploitative corporations. Do you know a way to achieve this (sharing with humans but not with “AI”) on today’s internet? I don’t see a solution currently. So the EFF’s take prevents people (me) from freely sharing their work with everyone, which is otherwise something they would encourage and I would like to do.