• tal

    Apple’s study proves that LLM-based AI models are flawed because they cannot reason

    This really isn’t a good title, I think. It was already understood that LLM-based models don’t reason, at least not on their own.

    A better one would be that researchers at Apple proposed a metric that better accounts for reasoning capability, a more meaningful sort of “score” for an AI’s abilities.