• alcasa@lemmy.sdf.org · 24 points · 1 year ago

    All this talk of elite makes the article so annoying to read and makes it difficult to take seriously…

      • aport@programming.dev · 7 points · 1 year ago

        Just whipped up a slack bot to override the pipelines and automatically merge your pull requests

      • fubo@lemmy.world · 1 point · 1 year ago

        my code is so 31337 u cant even code review it, it’s classified “secret high intellectual technology” code.

    • Prefix@lemm.ee · 5 points · 1 year ago

      I kind of agree. Hard to take too seriously with that kind of verbiage.

  • Corbin@programming.dev · 19 points · 1 year ago

    Hi! Please don’t link anything from this subdomain again. It was considered a plague back on Reddit, and this sort of content-free post shouldn’t be encouraged here either.

  • muhanga@programming.dev · 16 points · 2 downvotes · 1 year ago

    This really devolves into “good teams can deploy daily, raise small PRs, and do little rework.” And this is like… thank you, but that’s obvious. If a team can do these things consistently, it’s probably a good team.

    DORA says that if your team follows the same pattern (as they show), it will be an “elite/good” team. This really smells like a cargo cult. And managers are already using DORA metrics to label teams good or bad.

    This is a clear case of Goodhart’s Law: “When a measure becomes a target, it ceases to be a good measure.” So either DORA knowingly did nothing to protect against metric gaming, or they didn’t consider the impact they would have. Neither is good in my opinion.

    So yeah, I don’t like DORA in its current iteration.
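    For context, the metrics the thread is arguing about are simple aggregates over deploy records. A minimal sketch of three of them (deployment frequency, lead time for changes, change failure rate; time to restore is omitted), using made-up record fields purely for illustration:

    ```python
    # Hypothetical deploy records; timestamps are hours, fields are invented
    # for this sketch and are not any official DORA schema.
    deploys = [
        {"merged_at": 0, "deployed_at": 4, "failed": False},
        {"merged_at": 2, "deployed_at": 30, "failed": True},
        {"merged_at": 5, "deployed_at": 6, "failed": False},
    ]
    window_days = 7

    # Deployment frequency: deploys per day over the observation window.
    frequency = len(deploys) / window_days

    # Lead time for changes: median hours from merge to deploy.
    lead_times = sorted(d["deployed_at"] - d["merged_at"] for d in deploys)
    lead_time = lead_times[len(lead_times) // 2]

    # Change failure rate: share of deploys that caused a failure.
    failure_rate = sum(d["failed"] for d in deploys) / len(deploys)

    print(frequency, lead_time, failure_rate)
    ```

    Nothing in these formulas inspects *what* was deployed, which is exactly why counting-based metrics are easy to game (e.g. splitting one deploy into ten).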

    • dandi8@kbin.social · 5 points · edited · 1 year ago

      IIRC from the original report, the claim here is that even “gaming” these metrics leads to the desired result, as you can’t game these metrics without actually improving your processes. I tend to agree.

  • koreth@lemm.ee · 14 points · 1 year ago

    What metric did they use to determine what “top 10%” means? Because that’s the part of this that seems most ridiculous to me given how situation-dependent most engineering decisions are. To illustrate with an extreme example: is “daily+ deployment frequency” a sign of an amazing engineering org if the thing being deployed is updates to your heart monitor firmware?

    • muhanga@programming.dev · 5 points · 1 year ago

      Same problem with “top 10%”.

      The “DORA guys” came to our org in the past and sang a song of “all successful teams do this, so you should too.” One of my questions, which was left unanswered, was whether they had analyzed negative scenarios to check that their suggestions actually work and contribute to reducing cycle times and so on.

      And most of the time my cycle time depends more on the number of meetings I have to attend during the day than on anything even remotely related to coding.

      I understand what DORA tries to do, but what they achieve is just another cargo cult.

  • glad_cat@lemmy.sdf.org · 8 points · 2 downvotes · 1 year ago

    I don’t know what DORA metrics are, and it seems to be yet another bullshit discipline that gives a name to “common sense.” Yes, small commits are easier to review and test; there is no need to create yet another “Agile” framework out of this, with its blogs and certifications.