There’s going to be a crossover point where cut-down gaming chips used as AI accelerators are worse than what China can build domestically.
Given their track record and recent trajectory, I’d place that point at 15 years after the death of Xi at the earliest.
Their economy is beyond FUBAR, the brain drain is extreme, and the culture is counterproductive for actual advances (996 is lunacy).
They can’t make a working chip that’s 1/100th as advanced as a modern GPU, using equipment that’s no longer available to them, after spending that went 95% into a corruption black hole.
You made the pinks mad!
Of course, but they’ve been working on computer components for years now and are still way behind, so I don’t think it will happen very soon.
Yes, that point is potentially 15-20 years from now, which is an absolute eternity in terms of AI and most other technologies.
These gaming cards are often crippled at both the software level (fixable) and the hardware level when it comes to compute capabilities. If the gaming card is 1/4th or 1/32nd or 1/256th as good as an actual compute card, how long before it’s better to use something home built? At a certain point, even normal AMD and Intel CPUs will be faster.
This applies more to double-precision floating point (useful for nuclear simulations), but the ML hardware on gaming cards is crippled compared to the compute cards regardless.
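A minimal sketch of that crossover logic. The throughput figures and the hypothetical domestic chip below are illustrative assumptions for the sake of the example, not benchmarks:

```python
# Sketch of the crossover argument: a gaming card crippled to a fraction of
# its compute-card sibling vs. a hypothetical domestic accelerator.
# All numbers here are illustrative assumptions, not measured values.

COMPUTE_CARD_FP64_TFLOPS = 9.7        # roughly an A100-class FP64 figure
CRIPPLE_FACTORS = [1/4, 1/32, 1/256]  # the ratios mentioned above
DOMESTIC_FP64_TFLOPS = 0.5            # hypothetical home-built chip

for factor in CRIPPLE_FACTORS:
    gaming = COMPUTE_CARD_FP64_TFLOPS * factor
    verdict = ("gaming card still wins" if gaming > DOMESTIC_FP64_TFLOPS
               else "home-built wins")
    print(f"cripple factor {factor:>8.4f}: {gaming:5.2f} TFLOPS -> {verdict}")
```

The point being that the crossover depends entirely on how hard the cripple factor bites relative to what the domestic chip can actually deliver.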
Just take a look at how bad their domestically produced CPUs are and you might be less concerned.
Saying CPUs will eventually be faster makes it sound like you might not understand why they want GPUs in the first place. If the CPUs get that good, don’t you think they will also be banned?
If you cut down a GPU enough, the CPU doesn’t become good, the GPU becomes bad.
Double-precision FP (yes, a different metric, but the main one supercomputers are concerned about) performance of an Nvidia 4080 is 761.52 GFLOPS. A Ryzen 9 7950X is about 256 GFLOPS, so on that one metric it’s not like the GPUs are orders of magnitude ahead. At a certain point, using a cut-down gaming card won’t make sense.
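Taking the quoted figures at face value, the FP64 gap works out to about 3x:

```python
# Ratio implied by the figures quoted above (taken at face value).
gpu_fp64_gflops = 761.52   # Nvidia RTX 4080, FP64
cpu_fp64_gflops = 256.0    # Ryzen 9 7950X, FP64 (approximate)

print(f"GPU/CPU ratio: {gpu_fp64_gflops / cpu_fp64_gflops:.2f}x")  # ~2.97x
```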
I don’t think you know what you’re talking about. 4090s are just as fast at training as an A100, and faster at inference in some cases. They just don’t have as much VRAM (24 GB for the 4090 vs 40/80 GB for the A100).
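To make the VRAM point concrete, here is a rough rule-of-thumb estimate. The ~16 bytes/parameter budget for mixed-precision Adam training is a common heuristic, not a definitive formula, and it ignores activations:

```python
# Rough training-memory estimate: mixed-precision Adam is often budgeted at
# ~16 bytes/parameter (fp16 weights + fp16 grads + fp32 master weights +
# two fp32 Adam moments), ignoring activations. A heuristic, not exact.
BYTES_PER_PARAM = 2 + 2 + 4 + 4 + 4  # = 16

def training_footprint_gb(n_params: float) -> float:
    """Approximate training memory in GB for n_params parameters."""
    return n_params * BYTES_PER_PARAM / 1e9

for n_params in (1.3e9, 7e9):
    gb = training_footprint_gb(n_params)
    print(f"{n_params / 1e9:.1f}B params: ~{gb:.0f} GB "
          f"(4090 24GB: {'fits' if gb <= 24 else 'no'}, "
          f"A100 80GB: {'fits' if gb <= 80 else 'no'})")
```

By this accounting a ~1.3B-parameter model squeezes onto a 24 GB 4090 while a 7B model needs the 80 GB A100 (or sharding across cards), which is exactly why VRAM, not raw speed, is the dividing line.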
“Home built” meaning domestically in China, not in a home.