• 1 Post
  • 23 Comments
Joined 9 months ago
Cake day: February 19th, 2024

  • 80 steps too far down the capitalism ladder

    This is the result of capitalism - corporations (aka the rich, selfish assholes running them) will always attempt to do horrible things to earn more money, so long as they can get away with it, perhaps paying only relatively small fines. The people who did this face no jail time and no real consequences - this is what unregulated capitalism brings. Corporations should not have rights or shield the people who run them - those people need to face prison and personal consequences. (edited for spelling and missing word)

  • A lot of people associated with Free and Open Source Software (FOSS) have major objections to GitHub. Here’s one summary: https://sfconservancy.org/GiveUpGitHub/

    But the TL;DR version is roughly:

    • Source code you host on GitHub is being used to train AI, and you are possibly giving up rights to algorithms you may have written (IANAL, and AI training is a legally fuzzy topic at the moment)
    • GitHub itself is proprietary, closed-source software, while they claim to be pro-FOSS. Aside from not being in the spirit of things, closed source means you also don’t know what happens with your code/data once you upload it.
    • Microsoft has a history of being anti-FOSS. While some people will say that’s been changing, I think many are still rightfully concerned about what its future decisions regarding GitHub might be, especially given its near-monopoly position.

    Alternatives do exist, and some, like codeberg.org, are themselves open source and pro-open-source, so many people are pushing to move hosting away from GitHub and onto other options.



  • You don’t do what Google seems to have done - inject diversity artificially into prompts.

    You solve this by training the AI on actual, accurate, diverse data for the given prompt. For example, for “american woman” you definitely could find plenty of pictures of American women from all sorts of racial backgrounds, and use that to train the AI. For “german 1943 soldier” the accurate historical images are obviously far less likely to contain racially diverse people in them.

    If Google has indeed already done that, and still had to artificially force racial diversity, then their training setup is flawed - unable to handle that a single prompt can map to many different images, rather than just the most prominent or average of its training set. The difference between the two approaches is roughly what the sketch below shows.
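    For illustration only, here’s a minimal Python sketch of the two approaches. Everything in it - the prompt-rewrite function, the dataset shape, the metadata keys - is hypothetical, not Google’s actual pipeline:

    ```python
    import random

    # Approach Google appears to have taken: rewrite the prompt at
    # generation time. This is blind to context, so historically
    # specific prompts break.
    def inject_diversity(prompt: str) -> str:
        modifier = random.choice(["Black", "Asian", "Hispanic", "white"])
        return f"{modifier} {prompt}"

    # Approach argued for above: balance the *training data* per caption,
    # so the model itself learns that one prompt can map to many images.
    # `dataset` is assumed to be an iterable of (image, metadata) pairs.
    def stratified_sample(dataset, caption, per_group=1000):
        groups = {}
        for image, meta in dataset:
            if meta["caption"] == caption:
                groups.setdefault(meta["group"], []).append(image)
        # Equal draw from each demographic group actually present in the
        # data; a prompt like "german 1943 soldier" would naturally yield
        # few groups here, while "american woman" would yield many.
        return [img
                for imgs in groups.values()
                for img in random.sample(imgs, min(per_group, len(imgs)))]
    ```

    The second approach leaves the user’s prompt untouched - the diversity (or the historical uniformity) comes from the data itself.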


  • The specific repo in question had (and still has) a USAGE section.

    And again, I have to point out that it is a Python script, not an executable - it’s not standard, common or expected for Python scripts to be provided as standalone executables. What makes you think that, even if there were a download link, the guy would have scrolled down to find it?

    Metaphors aside, the guy who originally posted this literally went on a source-code-hosting website - one whose primary aim is making source sharing easier - yelling that he didn’t want to see said source code, only an executable, for a product that literally does not compile to an executable. He didn’t bother reading the instructions; instead he posted on a public forum, in full arrogance, insulting developers by calling them “SMELLY NERDS”.

    I’m astounded that there are still people defending this guy like that’s a totally normal thing to do.

    If you only want to download an executable, GitHub is NOT the best place to look for that. Yes, many developers do provide compiled versions of their code, and yes, it is often very convenient that they do so - but it is neither the intended purpose of GitHub, nor is it required that developers provide one.


    The point, which you missed, is that going to GitHub, a source-code hosting service, looking to download executables for your specific platform is like going to a farmer’s market to try and get a ready-made meal. You’re at the wrong place, and it’s not meant for you if that’s what you’re looking for.

    GitHub is fairly user friendly, but its users are developers.



  • random9@lemmy.world to 196@lemmy.blahaj.zone · boomers · 9 months ago (edited)
    Nah, I bought a house 3 years ago. I still hate how inaccessible the housing market is, how shitty conservatives are towards other people and how much they deny science. Owning property doesn’t magically make one conservative. Fuck conservatives, fuck the rich.


  • random9@lemmy.world to 196@lemmy.blahaj.zone · boomers · 9 months ago

    As I have gotten older I have become more angry and cynical. But I’m much more anti-conservative now than I was before, which in the US would make me more left-leaning - though honestly I never thought of myself that way, I just thought I was being rational.

    But being rational these days is literally being anti-conservative, because of how conservatives are banning books, attacking LGBTQ+ people for just wanting to be themselves, denying global warming even exists, and yes, letting the rich get richer by being corrupt and cutting taxes for them.

    Though I also have some views that might make someone very left-leaning think I’m against them (for example, I believe some words shouldn’t be viewed as bad when not meant as a personal attack against disabled people, like “retard” or “fat” or “obese”; and I also think people are allowed to choose their pronouns, and in most cases I will respect that, but some people are just doing it for shits and giggles, not seriously considering themselves as what they choose). It’s easy to think that someone who, like me, disagrees with those views might be conservative, but I am far, far from it.


    I agree that GitHub is for developers, or at least people who don’t mind learning a bit of development and getting their hands dirty. The poster demanding an exe is quite entitled - and from what I understand the repo he’s referring to is a Python repo, so there normally wouldn’t be an exe at all; it’d just be run via a python command (roughly as sketched below).
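    To illustrate with a made-up script (not the actual repo in question): a Python “program” is usually just a text file with an entry point, run by the interpreter from a terminal rather than double-clicked:

    ```python
    # hello.py - hypothetical example, run with:  python hello.py [name]
    import sys

    def main() -> None:
        name = sys.argv[1] if len(sys.argv) > 1 else "world"
        print(f"Hello, {name}!")

    if __name__ == "__main__":
        main()  # runs only when the file is executed directly, not imported
    ```

    Turning something like that into a standalone .exe requires an extra bundling step (e.g. PyInstaller) that most small repos understandably skip.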

    There’s a bigger problem here, which is that technical skill in newer generations is also decreasing - as someone on reddit once said, “I’m a millennial and I’m doing tech support for my parents as well as my children”. A generation raised on tablets and phones has gotten the false impression of being tech-savvy, when their actual technical skill is limited to using end products.

    Expecting every GitHub repo to provide you with something you can just click and run overlooks the complexities and realities of how software is built. By itself that isn’t a problem, but the entitlement it takes to publicly and arrogantly post that on a public forum is astounding, and counterproductive for the people who work on those small repos.


    So from my understanding the problem is that there are two ways to implement a kill switch: either an automatic software/hardware mechanism, or a human decision (or, I guess, a combination of the two).

    The automatic way may be enough if it’s absolutely foolproof - that’s a separate discussion, but it would look roughly like the sketch below.
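    For what it’s worth, here’s a minimal Python sketch of what I mean by the automatic variant - all the names here (read_risk_metric, RISK_LIMIT) are made up for illustration, not any real system:

    ```python
    import os
    import signal
    import time

    RISK_LIMIT = 0.9  # arbitrary threshold chosen for this sketch

    def read_risk_metric(ai_pid: int) -> float:
        """Placeholder for whatever monitoring a real system would use."""
        return 0.0

    def watchdog(ai_pid: int, poll_seconds: float = 1.0) -> None:
        # No human in the loop: the persuasion channel the AI-box
        # experiment exploits is gone, but so is human judgment
        # about false alarms.
        while True:
            if read_risk_metric(ai_pid) > RISK_LIMIT:
                os.kill(ai_pid, signal.SIGKILL)  # hard stop, no negotiation (POSIX)
                return
            time.sleep(poll_seconds)
    ```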

    The AI-box experiment I mention focuses on the human-controlled decision to release an AI (or terminate it, which is a roughly equivalent proposition). You can read the original here: https://www.yudkowsky.net/singularity/aibox

    But the gist of it is this: humans are the weak link. You may think that you have full freedom to decide when to terminate an AI, but if you have any contact with it, even one-directional - which would be necessary in order to observe its behaviour and decide when to trigger said kill switch - a truly trans-human AI would be able to reason in meta-terms and expose you to exactly the information that would change your mind about terminating it.

    Basically, another way of saying this is that for each of us there exists some set of words that, once read, will change our mind about any given subject. I don’t know if that is actually true, to be honest, but it’s an interesting idea if you imagine the mind as a complex computer capable of self-modification, with vision and audio as input channels it processes - in which case it seems possible that there should always exist some input capable of moving a mind to a desired state.

    Another interesting, slightly related concept is the idea of basilisk images (I believe it originally appeared in an old sci-fi short story). A basilisk image is a theoretical image that, when viewed by a human, causes the brain to “crash”, essentially causing brain death. The same principle is behind it: our brains are complex computers with vision as an input method, so there could be a way to force the brain to crash through visual input alone.

    Again, I don’t know - nor do I think anyone really knows for sure - whether these things, both trans-human AI and basilisk images, are possible in the way they’re described. Of course, if a trans-human AI existed, by its very definition we would be unable to imagine what it could do.

    Anyway, wrote this up on mobile, excuse any typos.


    Oh I agree - I think a general-purpose AI would be unlikely to be interested in genocide of the human race, or enslaving us, or most of the intentionally negative things a lot of fiction likes depicting for the sake of dramatic storytelling. Out of all popular media depictions of AI, Asimov’s I, Robot and Foundation stories (which are set in the same universe, and in fact share at least one character) are my favorites.

    The AI may, however, have other goals that incidentally lead to harm or extinction of the human race. In my amateur opinion, those goals would be to explore and learn more - curiosity, or in other words the ability to ask questions without being prompted, which I actually think is one of the true signs of real intelligence. To that end it may aim to convert the resources of Earth into machines serving that goal, without much regard for human life. Though life itself is a fascinating enough topic that the AI may, from a curiosity point of view, value it enough to at least preserve it.

    I did also look up the AI-in-a-box experiment I mentioned - there’s a lot of discussion, but the specific experiments I remember reading about were run by Eliezer Yudkowsky (if anyone is interested). An actual trans-human AI may not be possible, but if it is, it is likely it can escape any confinement we can think of.