Using supervised fine-tuning (SFT) to introduce even a small amount of relevant data to the training set can often lead to strong improvements in this kind of “out of domain” model performance. But the researchers say that this kind of “patch” for various logical tasks “should not be mistaken for achieving true generalization. … Relying on SFT to fix every [out of domain] failure is an unsustainable and reactive strategy that fails to address the core issue: the model’s lack of abstract reasoning capability.”

Rather than showing the capability for generalized logical inference, these chain-of-thought models are “a sophisticated form of structured pattern matching” that “degrades significantly” when pushed even slightly outside of their training distribution, the researchers write. Further, the ability of these models to generate “fluent nonsense” creates “a false aura of dependability” that does not stand up to a careful audit.

As such, the researchers warn heavily against “equating [chain-of-thought]-style output with human thinking” especially in “high-stakes domains like medicine, finance, or legal analysis.” Current tests and benchmarks should prioritize tasks that fall outside of any training set to probe for these kinds of errors, while future models will need to move beyond “surface-level pattern recognition to exhibit deeper inferential competence,” they write.

  • interdimensionalmeme@lemmy.ml · 4 days ago

    I think of chain of thought as a self-prompting model.
    I suspect that in the future, chain-of-thought models will run
    a smaller, dedicated tuned submodel just for the chain-of-thought tokens.
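    The two-stage setup the comment describes could be sketched roughly like this (a minimal illustration only; `generate_cot` and `answer` are hypothetical stand-ins for real model calls, not any actual API):

    ```python
    # Sketch of the commenter's idea: a small, dedicated "reasoner"
    # submodel emits the chain-of-thought tokens, and the main model
    # answers conditioned on that reasoning. Both functions below are
    # placeholder stubs standing in for real model inference.

    def generate_cot(question: str) -> str:
        """Small tuned submodel: produce intermediate reasoning tokens."""
        return f"Step 1: restate the problem ({question}). Step 2: work it out."

    def answer(question: str, cot: str) -> str:
        """Main model: produce the final reply, conditioned on the CoT."""
        return f"Final answer to {question!r}, based on: {cot}"

    def two_stage_cot(question: str) -> str:
        cot = generate_cot(question)   # submodel handles the CoT tokens
        return answer(question, cot)   # main model sees question + reasoning
    ```

    The split would let the CoT tokens come from a model tuned specifically for reasoning traces, instead of burning the main model's capacity on them.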

    The point of this is that most users aren’t very good at
    prompting; they just don’t have the feel for it.

    Personally I get worse results, way less of what I wanted,
    when CoT is enabled. I’m very annoyed that now
    the “chatgpt classic” model selector just decides to use CoT
    whenever it wants. I should be the one to decide that,
    and I want it off almost all of the time!