As schools grapple with how to spot AI in students' work, the FTC cracks down on an AI content detector that promised 98% accuracy but was only right 53% of the time.
Yeah for sure, although you do see some common patterns in AI-generated text because it tends to reuse the same structures a lot. You often notice constructions like “A is not only B, but it’s actually C”, etc. Another tell comes from the small effective context, so you end up with a bunch of independent statements that don’t really connect to each other. With human writing you often have an idea introduced and then developed gradually toward some conclusion. But on the whole, I agree that these are just common tropes from regular human writing too, and it’s pretty much impossible to definitively say whether something was written by a human or not.
Amusingly, image detection is kind of turning into an arms race now, with people figuring out techniques like adding noise perturbations that throw off the detectors.
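For what it’s worth, the simplest version of that trick is just adding low-amplitude noise to the pixels; here’s a minimal sketch assuming plain Gaussian noise (real evasion attacks are usually gradient-based and targeted, this is just the naive idea):

```python
import numpy as np

def perturb(image, eps=4.0, seed=0):
    """Add small Gaussian noise to an 8-bit image array.

    eps is the noise std-dev in pixel units; at low values the change is
    barely visible to people but can shift a detector's features.
    (Naive sketch, not an actual attack from any paper.)
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, eps, size=image.shape)
    # Clip back into the valid 0..255 range and keep the original dtype.
    return np.clip(image + noise, 0, 255).astype(np.uint8)

# Dummy 4x4 RGB image, uniform gray.
img = np.full((4, 4, 3), 128, dtype=np.uint8)
out = perturb(img)
```

The output has the same shape and dtype as the input, so it drops straight back into whatever pipeline consumed the original image.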