AFAIK, GPU cores are very stripped-down CPU cores that do basic math, but what sort of math can CPUs do that GPUs can't?

I'm talking CPU cores vs. Tensor/CUDA cores.

  • klauskinski79@alien.topB · 1 year ago

    GPUs may be Turing complete (yes, I just repeated that to annoy the correcting dude 😂). But the majority of normal workloads, the ones that don't involve an insane amount of relatively simple parallel work, would be excruciatingly slow on a GPU. The processing cores are small and slow (but there are thousands of them).

    • it does way fewer computations per clock
    • it doesn't handle branching very well, which is a core strength of any CPU (if/then/else in hard-to-predict ways; see the sketch after this list)
    • it has a slower clock speed
    • its cache behaviour is really bad unless you do simple data-in, compute, data-out processing
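
    To make the branching point concrete, here's a minimal CUDA sketch (the kernel name and values are made up for illustration): when threads in the same 32-thread warp take different sides of a data-dependent branch, the warp executes both paths one after the other, whereas a CPU core's branch predictor handles this kind of thing well.

    ```cuda
    #include <cuda_runtime.h>

    // Hypothetical kernel: threads in one 32-thread warp that take different
    // branches force the warp to run BOTH paths back to back (divergence),
    // so the "parallel" hardware effectively serializes on this if/else.
    __global__ void divergent_branch(const float* in, float* out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        // Data-dependent, hard-to-predict branch: some lanes go one way,
        // some the other. A CPU pays almost nothing here; a warp pays for
        // both sides.
        if (in[i] > 0.0f)
            out[i] = in[i] * in[i];      // path A
        else
            out[i] = -in[i] * 0.5f;      // path B
    }
    ```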

    Basically anything that is single-threaded and moderately heavy in logic (most OS core functionality and most application code) would be absolutely atrocious on a CUDA core. Also, splitting out all the parts that could run on the CUDA cores would be a huge amount of work if that code is interspersed with the heavy stuff. That's why most computation stays on CPUs: they are just much better at pretty much anything without much tuning, unless you have huge number crunching like matrix computations. But if you have a couple million data points and you want to do some simple mathematics on them, oooh, the CUDA cores murder any CPU.
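
    And for that "couple million data points, simple math" case, here's a rough sketch of what it looks like on the GPU side, assuming a standard CUDA toolchain (the kernel name, sizes, and constants are arbitrary): every element is an independent, trivial operation, which is exactly the shape of work thousands of small cores are built for.

    ```cuda
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <vector>

    // Illustrative example: y = a*x + y over a few million floats.
    // Each element is an independent, trivial operation.
    __global__ void saxpy(int n, float a, const float* x, float* y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main()
    {
        const int n = 1 << 22;                 // ~4 million elements
        std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

        float *dx, *dy;
        cudaMalloc(&dx, n * sizeof(float));
        cudaMalloc(&dy, n * sizeof(float));
        cudaMemcpy(dx, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy.data(), n * sizeof(float), cudaMemcpyHostToDevice);

        // 256 threads per block; enough blocks to cover all n elements.
        int block = 256;
        int grid = (n + block - 1) / block;
        saxpy<<<grid, block>>>(n, 3.0f, dx, dy);
        cudaDeviceSynchronize();

        cudaMemcpy(hy.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);
        printf("y[0] = %f\n", hy[0]);          // expect 3*1 + 2 = 5

        cudaFree(dx);
        cudaFree(dy);
        return 0;
    }
    ```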