autumnstwilight 6 hours ago

>>> Here's my question: why did the files that you submitted name Mark Shinwell as the author?

>>> Beats me. AI decided to do so and I didn't question it.

Really sums the whole thing up...

  • lambda_foo 2 hours ago

    Pretty much. I guess it’s open source but it’s not in the spirit of open source contribution.

    Plus it puts the burden of reviewing the AI slop onto the project maintainers, and the future maintenance is not the submitter's problem. So you've generated lots of code using AI; nice work that's faster for you but slower for everyone else around you.

    • skeledrew an hour ago

      Another consideration here that hits both sides at once is that the maintainers on the project are few. So while it could be a great burden pushing generated code on them for review, it also seems a great burden to get new features done in the first place. So it boils down to the choice of dealing with generated code for X feature, or not having X feature for a long time, if ever.

wilg 6 hours ago

Incredibly, everyone in this situation seems to have acted reasonably and normally and the situation was handled.

bravetraveler 8 hours ago

"Challenge me on this" while meaning "endure the machine, actually"

I guess the proponents are right. We'll use LLMs one way or another, after all. They'll become one.

bsder 6 hours ago

Can we please go back to "You have to make an account on our server to contribute or pull from the git?"

One of the biggest problems is that the public nature of GitHub means fixes are worth "Faux Internet Points", and a bunch of doofuses at companies like Google made "social contribution" part of the dumbass employee evaluation process.

Forcing a person to sign up would at least stop people who need "Faux Internet Points" from doing a drive-by.

djoldman 7 hours ago

Maintainers and repo owners will get where they want to go the fastest by not referring to what/who "generated" code in a PR.

Discussing AI/LLM code as a problem solely because it is AI/LLM code is not generally productive.

It's better to critique the actual PR itself: for example, it needs more tests, needs to be broken up, doesn't follow the project's protocols for merging/docs, etc.

Additionally, if a project lacks a code of conduct, an AI policy, or, perhaps most importantly, a policy on how to submit PRs and which are acceptable, that's a huge weakness.

In this case, clearly some feathers were ruffled, but cool heads prevailed. Well done in the end.

  • rogerrogerr 6 hours ago

    AI/LLMs are a problem because they create plausible looking code that can pass any review I have time to do, but doesn’t have a brain behind it that can be accountable for the code later.

    As a maintainer, it used to be I could merge code that “looked good”, and if it did something subtly goofy later I could look in the blame, ping the guy who wrote it, and get a “oh yeah, I did that to flobberate the bazzle. Didn’t think about when the bazzle comes from the shintlerator and is already flobbed” response.

    People who wrote plausible looking code were usually decent software people.

    Now, I would get “You’re absolutely right! I implemented this incorrectly. Here’s a completely different set of changes I should have sent instead. Hope this helps!”

    • chii 3 hours ago

      > doesn’t have a brain behind it that can be accountable for the code later.

      The submitter could also bail just as easily. Having an AI make the PR or not makes zero difference to this accountability. Ultimately, the maintainer pressing the merge button is accountable.

      What else would your value be as a maintainer, if all you did was a surface look, press merge, then find blame later when shit hits the fan?

      • ares623 30 minutes ago

        If I had a magic wand I would wish for 2 parallel open source communities diverging from today.

        One path continues on the track it has always been on, human written and maintained.

        The other is fully on the AI track. Massive PRs with reviewers rubber stamping them.

        I’d love to see which track comes out ahead.

        Edit: in fact, perhaps there are open source projects already fully embracing AI authored contributions?

      • rogerrogerr 2 hours ago

        I don’t accept giant contributions from people who don’t have track records of sticking around. It’s faster for me to write something myself than review huge quantities of outsider code as a zero-trust artifact.

  • snickerbockers 6 hours ago

    I don't suppose you saw the post where OP asked claude to explain why this patch was not plagiarized? It's pretty damning.

    • lambda_foo 2 hours ago

      Why have the OP in the loop at all if he’s just sending prompts to AI? Surely it’s a wonderful piece of performance art.