They’re closed-source black boxes; not even the people who built them really know what’s happening under the hood.
That said, one can reasonably infer that an LLM-based system isn’t doing any form of visual processing at all… it’s just looking at your HTML and CSS and flagging where it diverges from the statistical mean of all such structures in the training data (modulo some stochastic wandering, and the possibility that it has somehow mixed some measure of Rick Astley or Goatse into its multidimensional lookup table).
> They’re closed-source black boxes; not even the people who built them really know what’s happening under the hood.
Please explain
Have you tried asking one?
Claude gives what seems to be a reasonable answer.
https://www.perplexity.ai/search/how-do-ai-coding-assistants...
That’s not Claude…