Researchers found some LLMs create four times the CO2 emissions of other models with comparable accuracy. Their findings allow users to make informed choices.
“For example, having DeepSeek R1 (70 billion parameters) answer 600,000 questions would create CO2 emissions equal to a round-trip flight from London to New York. Meanwhile, Qwen 2.5 (72 billion parameters) can answer more than three times as many questions (about 1.9 million) with similar accuracy rates while generating the same emissions.”
i’d say that makes the average usage for personal reasons a non-issue.
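To put the quoted comparison in per-question terms: a rough sketch, assuming ~1,000 kg of CO2 for the round-trip London–New York flight (per-passenger estimates vary widely; that figure is an assumption, not from the study), with the question counts taken from the quote above:

```python
# Back-of-envelope per-question CO2 estimate.
# FLIGHT_CO2_G is an assumed round-trip figure; question counts are from the quote.
FLIGHT_CO2_G = 1_000 * 1_000  # ~1,000 kg round trip, in grams (assumption)

deepseek_questions = 600_000    # DeepSeek R1 (70B), per the quote
qwen_questions = 1_900_000      # Qwen 2.5 (72B), per the quote

deepseek_g_per_q = FLIGHT_CO2_G / deepseek_questions  # grams per question
qwen_g_per_q = FLIGHT_CO2_G / qwen_questions

print(f"DeepSeek R1: {deepseek_g_per_q:.2f} g/question")  # ~1.67 g
print(f"Qwen 2.5:    {qwen_g_per_q:.2f} g/question")      # ~0.53 g
print(f"ratio:       {deepseek_g_per_q / qwen_g_per_q:.1f}x")
```

Under that assumption a single question costs on the order of a gram or two of CO2, which is the arithmetic behind calling casual personal use a non-issue.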