If students can cheat on writing papers, why don't we stop using them as a learning metric? Why not use in-person, timed tests instead?
In-person timed tests suffer from people underperforming because of external circumstances. Also, we should consider ChatGPT as a tool that can be used like a calculator. If the answer to a test can be easily retrieved from a widely available tool, the test is only measuring performance that is no longer required. Where possible, measuring performance should ideally be based on skill demonstrated over a longer period of time, regardless of the tools people might use. For example, repeated in-person peer review sessions (without a specific time slot) could both improve one's performance and generate evidence of performance over time, while reducing effort from the staff.
I would only say that a calculator always gives people the right answer. ChatGPT does not. People should not be using any of these current LLM tools to seek answers to things they don’t plan on verifying through some other source.
I fully agree!