Dear Linux community,

In these unpredictable and often challenging times, I feel it’s more important than ever to pause and share heartfelt wishes. Merry Christmas to each and every one of you!

Let this holiday season be a moment of peace, where you can step back, breathe, and find some calm amidst the chaos. Take the opportunity to reconnect, reflect, and perhaps even find inspiration for the year ahead.

May your days be filled with joy, your systems stay secure, and your kernels remain stable. Here’s to a festive season full of positivity and open-source spirit!

Warm wishes,

Your fellow penguin at heart.

P.S.: I had very little time, so the whole thing was AI-accelerated! Please forgive me :-)

  • davel@lemmy.ml · ↑1 · 6 minutes ago

    OP, please don’t post any more AI slop so I don’t get flooded with reports from users.

      • fool@programming.dev · ↑11 ↓4 · edited · 22 hours ago

        edit: Please be nice to each other! :(

        Lots of downvotes in this reply chain. Not to be an “I don’t wanna be on either side” kinda guy, but AI isn’t all bad and isn’t all good either. (Greys!)

        Merry Christmasing should be a genuine hug. Even if this was made by a homegrown open-weight, open-dataset inference model, it’s nearly 100% low-effort generated – holidays need the human aspect, no? Hiding too much behind AI buries the humanness under corporate diction, and people need evidence of risk-taking genuineness nowadays.

        On the other hand, AI is definitely useful… but elsewhere. It’s not strictly anti-human even if conglomerates are using it that way, which I think you agree on. Wading through HOA paperwork using local NLP setups is human. Looking through a Mandarin thread when typical translation sucks is human.

        But there are domains for its use and there is ethical stuff to work on. This post just doesn’t fit the domain too well, as others agree…

        • LainTrain@lemmy.dbzer0.com · ↑5 ↓14 · edited · 3 hours ago

          This is irrational and reactionary.

          It’s a take based on appeal-to-nature and noble-savage fallacies propagated by neolibs and marketing teams making the “organic” play with their products.

          AI is just a tool; its misuse by corpos is something we all agree is bad, but by themselves tools do not make you or anyone else less human, nor do glasses make those with poor sight less human, etc.

          How the absurdity of making such a statement on a sub defined by an identity tied to a computer operating system doesn’t make you re-think that claim is beyond me.

          Be better.

          Edit: and no, it’s pointless to argue; nobody changes their mind. I’m just posting stuff like this so the assholes I’d like to have blocked come out and make themselves known, as they always do.

          • Kena@lemm.ee · ↑10 ↓6 · 22 hours ago

            Your 2nd paragraph is just a lie made up in your own head.

            AI isn’t a tool; a brush is. A brush or any other tool lets humans create. An IGA (image generation algorithm), often mislabelled as AI, doesn’t let you create; instead it TAKES what other humans have already created and smushes it together without regard for any artistic expression, since, as a computer program, it’s incapable of art. It’s plagiarism with extra steps.

            AI as a thing only exists today because it’s in the interest of corporations to replace humans with machines in order to funnel more wealth to the owning class.

            However, in a society with class equality, AI would be worthless. It cannot “make” anything without someone else having made that thing first; it’s a waste of time and an erosion of society, a cancer.

            You will never change because simply put, you’re not smart enough to. You’re incapable of understanding how and why you’re wrong because you lack the intelligence, education and desire to grow.

  • SavvyWolf@pawb.social · ↑53 ↓6 · 1 day ago

    Nothing says “Linux” more than paying a megacorp to steal the hard work of artists…

    • moreeni@lemm.ee · ↑4 ↓24 · 1 day ago

      Could they? What if they don’t have the skills to draw it themselves?

      • balsoft@lemmy.ml · ↑25 ↓4 · edited · 1 day ago

        A text post without a picture would achieve the same result just fine, without stealing work from artists and regurgitating it into a soulless, artifact-ridden and generally ugly picture.

      • fool@programming.dev · ↑7 ↓1 · 1 day ago

        I disagree with this sentiment; I’m inclined to believe that AI has actually lowered the bar for meaning.

        Before AI, typically only skilled artists drew pictures for the web. But now that AI is making art that’s less meaningful than crayon pictures, there’s the growing sentiment of

        I’d rather see a crayon picture than AI slop.

        which could actually mean more people have the ability to go on and artify.

        Of course this is anecdotal; it’s the reason I started drawing again :)

        • moreeni@lemm.ee · ↑5 · 22 hours ago

          This is selective memory at best. There’s a lot of so-called art by real humans, and text wishes, that are way, way worse than what OpenAI’s algorithms produce.

          • fool@programming.dev · ↑4 · edited · 22 hours ago

            I’m not sure I agree but I’m happy to discuss! :)

            Why are you calling my statement “selective memory” (am I intentionally excluding something?), and what do you mean by “way worse”? Do you consider unskilled art as not art at all (i.e. “so-called”)?

            What I was trying to say is that on social media, skilled artists formerly dominated attention (likes, upvotes) because viewers wanted well-constructed, pleasing-to-the-eye artwork. I wasn’t trying to say that they were the only art posters (sorry for my wording!). Continuing, now that AI is in the arena, “technically-decent” art is no longer the lower bound for pleasurable-to-see – now, viewers are more partial to knowing that a human was vulnerable when they expressed themselves with art.

            It’s an intensification of the internet-ugly aesthetic, which Douglas (2014) called “an imposition of messy humanity upon an online world of smooth gradients, blemish-correcting Photoshop, and AutoCorrect” (p. 314). Now, online, handmaking art at all is a declaration of humanity, because you could corporately fake something full-colored and intricate, but arguably soulless, with lower effort.

            Of course, I’ll try to take it from your perspective. I’ve seen really bad human art (I like art!), and I’ve seen less-artifacted AI art (have you ever seen Even_Adder’s generations on lemmy.dbzer0? they don’t have the overshading issue at all). Of course, some may disagree that the latter is art (is art only human expression?), but supposing I do consider the latter art, my point still stands – viewers are more on the lookout for genuineness now.

            Happy to see what you think!

            References

            Douglas, N. (2014). It’s Supposed to Look Like Shit: The Internet Ugly Aesthetic. Journal of Visual Culture, 13(3), 314–339.

            • moreeni@lemm.ee · ↑2 · 3 hours ago

              Your initial wording made me mistakenly think your point was that AI-made creations are worse than before, when humans made them themselves.

              Now that I see your real point, I still cannot agree. Your argument rests on the false premise that everyone wants genuine human expression everywhere and that eye-candy images are no longer enough. Yet proof that this is not the case is right before your eyes - look at the number of upvotes on this post. The ones posting comments like

              I’d rather see a crayon picture than AI slop.

              are a vocal minority. Most people see a good-enough AI-generated image and pass on. They never bother to zoom in and look for its artifacts. Most don’t have the time to look for the ChatGPT wording; they skim the post for 10 seconds and move on with their lives.

              The argument has its roots in our different social surroundings. Maybe your life is full of people who have the time and energy to enjoy art that doesn’t just look decent but has a meaning, a message to it. Mine has a lot of people who are too overworked and undereducated to play the game of being culturally superior, to look for humane expression.

              Sometimes, technologically decent is enough. For some people, a simple piece of eye candy viewed for a short time is enough to improve their mood during a break. It does not erase the point of high art. It does not threaten it. Thus, I find people who come barking at every AI-generated piece of imagery or text or whatever - claiming that posting it is stealing from others, that such posts serve zero purpose, that it’s better to be shown something poorly drawn with crayon - ridiculous and pitiful.

      • istdaslol@feddit.org · ↑17 ↓3 · 1 day ago

        It’s just a sign of carelessness; a shitty drawing would mean more, as OP would have put effort into it and shown a willingness to learn a skill. And it’s not just the drawing, the text is AI as well, so OP just wants some free internet points with as little effort as possible.

      • Kena@lemm.ee · ↑5 ↓3 · 1 day ago

        So because they don’t have the skills they should steal and claim it as their own?

  • istdaslol@feddit.org · ↑80 ↓3 · 1 day ago

    Finally, AI slop. Linux is now a fully grown corpo, and therefore the year of the Linux desktop is here.

    • pmk@lemmy.sdf.org · ↑5 · 1 day ago

      In these corporate times we can stay free, share the code, and help our neighbors. Together we can share the joyous spirit of friendship, hacking, and arguing endlessly over which distro is best. In conclusion, Linux provides us with many good things, and should be celebrated.

    • HouseWolf@lemm.ee · ↑15 ↓1 · 1 day ago

      TempleOS is the only OS corpos won’t touch. It’s protected by a holy shield!

      • asudox@discuss.tchncs.de · ↑9 · edited · 1 day ago

        King Terry the Terrible is watching everyone that uses TempleOS and will execute anyone who misuses it with an A10 gun, the fist of God.

  • generaledelsud@lemmy.world · ↑17 ↓3 · edited · 1 day ago

    I don’t know why everyone is saying it reads weird or it’s AI slop. To me it seemed pretty normal while reading, but I guess I’m so used to everything being written by AI nowadays that I didn’t notice at first. How do you all spot it usually?

    • fool@programming.dev · ↑37 · edited · 1 day ago

      AI structure can be pretty obvious if you know which English weapons it loves to spam. Let’s walk through it (sorry for the wall of text lmfao):

      I skip the image because the chimney mistake and overdone shading are obvious.

      1. Corporate style.
        • “In these unpredictable and often challenging times” – This is very corporate. How many messages have you seen like this during the pandemic? Buuut just because it’s soulless doesn’t mean it’s AI – still, I wouldn’t expect it from a community of this archetype. (ai suspicion +1)
      2. Tricolons, especially ascending. (source)
        • This is something ChatGPT loves. Essentially, there are three “things” in a sentence, sometimes clauses. Sometimes each one is larger than the last (ascending), e.g. “I honed my skills in research, collaboration, and problem-solving.” And it appears a lot even in this short snippet
        • “…you can step back, breathe, and find some calm amidst the chaos”. The third element is longer. Ascension spotted. (ai suspicion +2)
        • “May your days be filled with joy, your systems stay secure, and your kernels remain stable.” Elements are successively syllabically longer. Ascension spotted. (ai suspicion +2)
        • “Take the opportunity to reconnect, reflect, and perhaps even find inspiration for the year ahead.” Third element is longer. Ascension spotted. How funny – three tricolons! Three three three three (ai suspicion +2)
      3. Obsession with superficial positivity.
        • ChatGPT, even when making stories about evil, is very partial to love, friendship, joy, making up, peace, tranquility, (pseudo) “unconventional” friendship. Excessive meaningless positivity is an archetype too, though ChatGPT’s factgivings are usually neutral-positive.
        • “more important than ever to pause and share heartfelt wishes” Share wishes. Would a human on c/linux say something like that without elaborating further about wishing for something, perhaps death to Windows users? (ai suspicion +1)
        • “moment of peace” “find some calm” “positivity” “open-source spirit” but they never talk deeper, again. (ai suspicion +1)

      So yeah this is at least 90% OpenAI. Too fuckin’ bad.
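
      If you wanted to mechanize this, a quick-and-dirty Python sketch might look like the below. (The phrase lists, regex, and weights are my own toy guesses for illustration – not a real detector, and not anything OP or this post actually used.)

        import re

        # Toy "AI suspicion" scorer mirroring the heuristics above.
        # Phrase lists and weights are illustrative guesses, not a vetted detector.
        CORPORATE_PHRASES = [
            "in these unpredictable and often challenging times",
            "more important than ever",
        ]
        POSITIVITY_WORDS = ["joy", "peace", "calm", "positivity", "spirit", "heartfelt"]

        # Crude stab at "a, b, and c" tricolons.
        TRICOLON = re.compile(r"\b[\w' ]+, [\w' ]+, and [\w' ]+")

        def suspicion(text: str) -> int:
            lowered = text.lower()
            score = 0
            score += sum(1 for p in CORPORATE_PHRASES if p in lowered)  # +1 per stock phrase
            score += 2 * len(TRICOLON.findall(lowered))                 # +2 per tricolon
            score += sum(1 for w in POSITIVITY_WORDS if w in lowered)   # +1 per buzzword
            return score

        if __name__ == "__main__":
            post = ("May your days be filled with joy, your systems stay secure, "
                    "and your kernels remain stable.")
            print(suspicion(post))  # 3: one tricolon (+2) plus "joy" (+1)

      (A score like that is only a smell test, of course – the point is the pile-up of tics, not any single one.)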

      • BCsven@lemmy.ca · ↑9 · 1 day ago

        I will have to stop manually typing ascending tricolons. LOL. I have used them often in correspondence and documents. It was a technique taught in English class.

        • fool@programming.dev · ↑5 · 1 day ago

          Only if there’s too many is it a worry. I use it now and then bc I LOVE things in threes (I’m not Ben Affleck I swear), but…

          in the above, the tricolon bonanza is insane – how can you fit that many in such a short text?

          You probably don’t need to cut down :)

        • fool@programming.dev · ↑9 · edited · 16 hours ago

          Sorry for the wall of text again c:


          AI text as a whole is usually structured, neutral-positive to positive shallowness. It’s called slop because it’s easy to make a lot of substanceless, nutrientless goo. One common structure is

          Introduction

          Since the dawn of time, ethics has been important.

          AI Structure: Hidden Secrets Revealed

          1. Being considerate: Being considerate can help relationships.
          2. This structure: is untrustworthy. Be suspicious when you see it.
          3. Lots of broad statements: that don’t say anything—often with em-dashes.

          Conclusion

          In conclusion, while ethics can be hard, it is important to follow your organization’s guidelines. Remember, ethics isn’t just about safety, but about the human spirit.

          What do we spot? Sets of three, largely perfect/riskless formal grammar (grammar perfection is not inhuman – but a human might, say, take the informal risk of using lotsa parentheses (me…)), uncreative colon titles, SEO-style intros and conclusions, an odd corporate-style ethics hangup, em-dashes (the long —), and some of the stuff in that reddit link I mentioned are often giveaways.

          Here are some examples in the wild:

          • Playing Dumb: How Arthur Schopenhauer Explains the Benefits of Feigned Ignorance (PeopleAndMedia) has useless headings and the colon structure I mentioned. There are also phrases like “Let’s delve” and “unexpected advantage” – ChatGPT likes pretending to be unconventional and has specific diction tics like “Here’s to a bright future!” One interesting thing is that the article uses some block quotes and links – this is rare for AI.

          • Why is PHP Used (robots.net). This is from a “slop site”, one that is being overrun by AI articles. Don’t read the whole thing, it’s too long. Skim it first. See how many paragraphs start with words like “additionally”, “moreover”, “furthermore”, like a grade school English lit student? Furthermore (lol), look at the reasoning used:

            The size of the PHP developer community is a testament to the language’s popularity and longevity.

            PHP boasts a large and vibrant developer community that plays a pivotal role in its continued success and widespread adoption.

            ChatGPT-esque vocabulary is used (this is something you unfortunately get a feel for), and the reasoning isn’t very committal. Instead of evaluating some specific event deeper, the article just lists technologies and says stuff like “PHP has comprehensive and well-maintained documentation, providing in-depth explanations, examples, and guides.” So what if there’s docs? Everyone has documentation. Name something PHP docs do better or worse. Look at this paragraph (SKIM IT, don’t read deeply):

            CodeIgniter is known for its simplicity and speed. It is a lightweight framework that prioritizes performance and efficiency. CodeIgniter’s small footprint makes it suitable for small to medium-sized projects where speed is crucial. It provides essential features and a straightforward structure that allows developers to build applications quickly and efficiently.

            It doesn’t actually SAY ANYTHING despite its length. The paragraph can be compressed to: “CodeIgniter has a light footprint”. It doesn’t even say whether we’re talking about comparative speed, memory usage, or startup time. It’s like they paid someone (OpenAI) to pad word count on the ensmallening I mentioned.

          Before reading something, check the date. If it’s after 2020, skims as too long and not very deep, and has too many GPT tics (tricolons, vocab like “tapestry/delve”, the SEO shit structure), then it’s AI slop. Some readers actively avoid post-2020 articles, but I can’t relate.
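
          If you wanted that skim test as something runnable, here’s a throwaway sketch (the opener and tic word lists are my own guesses – same caveat as before, a smell test rather than a detector):

            # Rough sketch of the skim test: count filler paragraph openers
            # and stock GPT vocabulary. Word lists are illustrative guesses.
            FILLER_OPENERS = ("additionally", "moreover", "furthermore", "in conclusion")
            GPT_TICS = ["delve", "tapestry", "testament to", "pivotal role", "boasts", "vibrant"]

            def skim(text: str) -> dict:
                paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
                filler = sum(1 for p in paragraphs if p.lower().startswith(FILLER_OPENERS))
                tics = sum(text.lower().count(t) for t in GPT_TICS)
                return {"paragraphs": len(paragraphs), "filler_openers": filler, "gpt_tics": tics}

            if __name__ == "__main__":
                sample = ("PHP boasts a large and vibrant developer community that plays a "
                          "pivotal role in its continued success.\n\n"
                          "Furthermore, the community is a testament to the language's longevity.")
                print(skim(sample))  # {'paragraphs': 2, 'filler_openers': 1, 'gpt_tics': 4}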

          edit: clarified that perfect grammar is humanly doable, but GPT-style riskless formal grammar is still distinct from grammatical human text

          • Blisterexe@lemmy.zip · ↑2 · edited · 15 hours ago

            the term “riskless grammar” perfectly puts into words how i felt about chatgpt’s texts, every human-written text has something “wrong” with it grammar-wise, except maybe example essays by english teachers.

            As an example, my previous paragraph has a lowercase I, too many commas, sentences compressed by using hyphens where they probably shouldnt go and probably some other stuff i missed.

            But it still read well, at least i hope.

            Most authors write their sentences their own way, and in my opinion, that’s what makes reading their books interesting. Perfect grammar is boring and no fun to read.

            as a fun experiment, i asked chatgpt to rewrite my first paragraph:

            “The phrase “riskless grammar” accurately captures my impression of ChatGPT’s texts. Unlike human-written content, which often contains grammatical imperfections—except perhaps for example essays by English teachers—ChatGPT’s writing maintains a level of precision and correctness.”

            Kind of changed the meaning to be self-complimenting, which is funny.

            edit: Normally I would have rewritten parts of this comment to make my point more clearly and be better to read, but i wanted to keep my first draft to make my point a bit better.

            • fool@programming.dev · ↑1 · 14 hours ago

              Ty for feedback :>

              Your paragraph read well. I definitely agree – grammar with risks, outside of hyper-formal sitches, is just stylized diction. ChatGPT could scarcely come up with an e.e. cummings poem (just tested now, it never gets the style about right), nor dare to abuse parentheses, nor remove cruft for conciseness (e.g. to start a sentence with “Kind of changed” instead of “This kind of changes” for compression (woot)). It’s a “wrong” but not quite “wrong”, and I’m glad that “riskless” manages to carry that feeling

              And I edit a lot too :) it’s the “post-email-send clarity” effect

          • apostrofail@lemmy.world · ↑2 · 16 hours ago

            Errors can give away that a human typed something, but knowing proper grammar, spelling, and syntax of English is totally neutral—if not somewhat expected from a native speaker/typer with a lifetime to learn the language they speak (especially if we consider how many Anglophones are monolingual + educated + have access to technology like spell check, meaning there is little excuse for not having English mastery).

            In my education, I got a public apology from a teacher letting the class know they tried to dig up proof of plagiarism in my persuasive papers, but for the first time proved themself incorrect on a plagiarism hunch. Humans are capable of writing well.

            • fool@programming.dev · ↑2 · edited · 16 hours ago

              edit: updated accordingly for clarity

              Ah, I mean proper grammar as in formal, largely riskless grammar. For example, AI wouldn’t connect

              monolingual + educated + have access to technology

              with pluses, like a human would.

              Not sure how I’d phrase that though. Maybe “perfect, risklessly formal grammar” as I just tried to call it? (i.e. if AI trainers consider using +‘es a “risk”, as opposed to staying formal and spick n’ span clean).

              Perfect grammar is humanly possible but there is some scrutiny that can be applied to GPT-style grammar, especially in the context of the casually-toned web (where 100%ed grammar isn’t strictly necessary!).

    • VerilyFemme@lemmy.blahaj.zone · ↑9 · 1 day ago

      The text may not be AI, but the image definitely is. Usually everything in AI images glows slightly, like here. And the placement of Tux in the sky has no rhyme or reason.