That’s not universal. For instance, last week I got help writing a bash script. But I hope they’re helping lots of you in lots of ways.

  • tee9000@lemmy.world · 2 days ago

    How do you know if it doesn't benefit a student? If their work is exceptional, do you assume they didn't use an LLM? Or do you not see any good code anymore?

    • hemko@lemmy.dbzer0.com · 2 days ago

      It replaces the work required to research and think about the problem. You know, the part where you'd normally learn and understand the issue at hand.

      • tee9000@lemmy.world · 2 days ago

        I'm asking about this individual's experience as a TA, not for an opinion on LLMs.

    • HStone32@lemmy.world · 2 days ago

      I mean, they don't generally keep their use of chatgpt a secret. Not yet, anyway. Meanwhile, the people who do well in the class write their code in a way that clearly shows they read the documentation and made use of the headers we've written for them.

      In the end, does it matter? This isn't a CS major, where you can just BS your way through all your classes and get a well-paying career doing nothing but writing endpoints for some JS framework. We're trying to prepare them for when they're writing their own architecture, their own compilers, their own OSes; things that have zero docs for chatgpt to chew up and spit out, because they literally don't exist yet.

      • tee9000@lemmy.world · 2 days ago

        Oh, interesting that they wouldn't need or want to hide that. When I use it, I interpret every line of code and decide if it's appropriate. If that would be too time consuming, I wouldn't use an LLM. I would never deviate from the assignment criteria or the material covered by deferring to some obscure methodology an LLM picked.

        So I personally don't think it's been bad for my education, but I did complete a lot of my education before LLMs were a thing.

        Don't you guys test the students in ways that punish laziness? I know you're just a TA, but do you think the class could be better about that? Some classes I've taken were terrible quality and all but encouraged laziness, and other classes were perfectly capable of cutting through the bullshit.

        • HStone32@lemmy.world · 2 days ago

          Electrical Engineering really is a no-frills field; you either can do it, or you can’t. Our only testing methodology is this: if they know what they’re doing, they’ll pass and do well in the major. If they don’t know what they’re doing, they’ll fail and rethink their major.

          Knowing what they're doing is the important part. If genAI chatbots helped in that regard, we'd allow them, but none of us have observed any improvement. Rather, the students waste time they could be using to progress in the assignment struggling to incorporate poorly optimized nonsense code they don't understand. I can't tell you how many times I've had conversations like:

          “Why doesn’t this work?”

          “Well I see you’re trying to do X, but as you know, you can’t do X so long as Y is true, and it is.”

          “Oh, I didn’t know that. I’ll rewrite my prompt.”

          “Actually, there’s a neat little trick you can do in situations like these. I strongly suggest you look up the documentation for function Z. It’s an example of a useful approach you can take for problems like these in the future.”

          But then instead of looking it up, they just open their chatgpt tab and type “How to use function Z to do X when Y is true.”
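
          To make that exchange concrete, here's a made-up miniature of the pattern in C. The scenario and names are my own illustration, not an actual assignment: X is modifying a string in place, Y is that the string is a literal in read-only storage, and Z is strdup(), whose man page tells you how to get a writable copy.

          ```c
          /* Hypothetical illustration of the X/Y/Z exchange above;
           * not from any real assignment. */
          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>

          int main(void) {
              /* The broken attempt ("why doesn't this work?"):
               *   char *s = "hello";
               *   s[0] = 'H';   // X fails because Y is true: the literal is read-only
               */

              /* Function Z: strdup() returns a writable heap copy (see its man page). */
              char *s = strdup("hello");
              if (s == NULL)
                  return 1;
              s[0] = 'H';   /* now X is legal, because Y is no longer true */
              printf("%s\n", s);
              free(s);
              return 0;
          }
          ```

          The specific functions aren't the point; the point is that the fix comes from reading the documentation, not from rewording a prompt.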

          I suppose after enough trial and error, they might get the program to work. But what then? Nothing is learned. The computer is as much a mystery to them after as it was before. They don’t know how to recognize when Y is true. They don’t know why Y prevents X. They don’t understand why function Z is the best approach to solving the problem, nor could they implement it again in a different situation. Those are the things they need to know in order to be engineers. Those are the things we test for. The why. The why is what matters. Without the why, there can be no engineering. From all that we’ve seen thus far, genAI chatbots take that why away from them.

          If they manage to pass the class without learning those things, they will have a much, much harder time with the more advanced classes, and all the more so when they get to the classes where chatgpt is just plain incapable of helping them. And if, even then, by some astronomical miracle, they manage to graduate, what then? What will they have learned? What good is an engineer who can only follow pre-digested instructions instead of making something nobody else has?

          • tee9000@lemmy.world · 2 days ago

            If you're making them aware they'll fail by not reading the documentation, it's surprising they keep putting it off. Using chatgpt is different from only being able to use chatgpt. Then again, I was a kid once and kind of get it. Maybe banning it is the better option, as you say.

            I thought it was scary enough when instructors would do "locked down" timed tests with short/essay answers. I can't imagine students thinking they'd be fine using chatgpt for stuff they'll need to demonstrate in practice.

            I wonder if the dropout rate at colleges will increase because of stuff like this, or if LLM overconfidence is pushing students into more technical majors.

            Thanks for your responses!