I don't understand why the height and weight errors aren't 0 when they are known inputs? If I say how tall I am, why is the model estimating something else?
That's a common phenomenon in model fitting, depending on the type of model. In both old-school regression and neural networks, the fitted model doesn't distinguish between specific training examples and other inputs, so specific input-output pairs from the training data don't get special privilege. In fact, it's often a good thing that models don't just memorize input-output pairs from training, because that lets them smooth over uncaptured sources of variation, such as people all being slightly different, as well as measurement error.
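A toy illustration of that point (the data here is synthetic and the numbers are my own, not from the paper): fit ordinary least squares on noisy height-to-weight pairs and look at the residuals at the training points themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: height (cm) -> weight (kg), with measurement noise
heights = np.linspace(150, 200, 50)
weights = 0.9 * heights - 90 + rng.normal(0, 3, size=heights.shape)

# Ordinary least-squares fit; nothing memorizes individual pairs
slope, intercept = np.polyfit(heights, weights, 1)
pred = slope * heights + intercept

# The fitted line smooths over the noise instead of reproducing each
# label, so residuals at the training points are generally nonzero.
print(np.abs(weights - pred).max() > 0)  # True
```

The same logic applies to an MLP: unless you force it, nothing makes the network's output on a known input pass through exactly.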
In this case they had to customize the model fitting to try to get the error closer to zero specifically on those attributes.
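One common way to do that kind of customization (a sketch under assumptions; the attribute names and weights here are mine, not the authors' actual setup): scale the per-attribute squared errors so that height and weight errors dominate the loss.

```python
import numpy as np

# Attributes and weights are assumptions for illustration only.
attr_names = ["height", "weight", "bust", "waist", "hips"]
loss_weights = np.array([10.0, 10.0, 1.0, 1.0, 1.0])

def weighted_mse(pred, target):
    # Penalize height/weight errors 10x harder than the tape measurements,
    # pushing the model to (nearly) pass those inputs through.
    return np.mean(loss_weights * (pred - target) ** 2, axis=-1)

pred = np.array([170.2, 70.1, 90.0, 75.0, 95.0])
target = np.array([170.0, 70.0, 92.0, 74.0, 96.0])
print(weighted_mse(pred, target))  # ~1.3
```

With a large enough weight on those two terms, the trained model's height and weight errors shrink toward zero without changing the architecture.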
It takes more like 10 seconds. For a large range of height and weight inputs crossed with all option combinations, you could precompute ~10M measurements and return results basically instantly.
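A back-of-the-envelope sketch of that precomputation (grid resolutions and option counts are assumptions, and the model is replaced by a stand-in function):

```python
import itertools

# Counting: a 1 cm height grid x 1 kg weight grid x 10 binary options
n_heights = len(range(140, 211))          # 71 values
n_weights = len(range(40, 151))           # 111 values
n_options = 2 ** 10                       # 1024 combinations
print(n_heights * n_weights * n_options)  # 8070144, on the order of ~10M

def predict(h, w, opts):
    # Stand-in for the trained model, for illustration only
    return round(0.5 * h + 0.1 * w + sum(opts), 1)

# Precompute a tiny slice; the full table is the same idea, just bigger.
table = {(h, w, o): predict(h, w, o)
         for h in (170, 171)
         for w in (70, 71)
         for o in itertools.product((0, 1), repeat=2)}
print(table[(170, 70, (0, 1))])  # answered by dict lookup, no inference
```

Serving then becomes a key lookup (or nearest-grid-point lookup) instead of running the model per request.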
Interesting idea. Using a questionnaire as input for an MLP makes sense but the real challenge is designing questions that capture useful signal instead of noise. If that part is done well, the approach has a lot of potential.
I'm guessing the writing is AI-assisted (it lacks fluidity and has some weirdly placed phrases), but I see they're in Poland and likely aren't English-first speakers?
ai;dr
MLP trained on 8 questions achieves ~0.3cm height error, ~0.3kg weight error, and ~3-4cm for bust/waist/hips measurements.
https://www.mdpi.com/1424-8220/22/5/1885 + some hacking => "we want to productize this"
> ai;dr
Haven't seen that one yet. I like it.
From the title, I thought this would be an Akinator that produces some images of you via image-v2.
Tangential, but does anyone else keep reading "MLP" as "My Little Pony"?
AI or not, I liked this bit:
> Averages lie about the tails, and a person who gets a 15 cm bust error doesn’t care that the mean is 4 cm.
A variation of that sentence should be mandatory in every scientific paper.
It has the kind of feel of something made in Codex.
Well, sorry, no: the torso-to-leg-length ratio isn't covered by any of their questions. (And yes, they list it as a limitation.)
How big are the pockets and is it sex determined?
This is the best UI/UX article I've read this year. If the authors are around, I extend them my dearest congratulations ^^.
Like ... who/why would downvote this?
This is definitely manipulated.