What about the 5500X3D? There was a lot of chatter about that, but it's all gone quiet.
The allowed limit is 4,800, so the RTX 4090 is about 10% “too powerful.”
…
but Nvidia will likely build in some wiggle room, to ensure that overclocking, for example, doesn't become a problem. Assume a clock speed of 2.7 GHz and we get a maximum of 108 SMs.
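For what it's worth, the 108-SM figure checks out with some napkin math. This is just a sketch assuming the export metric (TPP) scales linearly with SM count and clock speed, using the stock 4090's 128 SMs at ~2.52 GHz boost together with the article's "about 10% over the 4,800 limit" figure:

```python
# Back-of-envelope check, assuming TPP scales linearly with SM count
# and clock speed. Stock 4090: 128 SMs at ~2.52 GHz boost; the article
# puts it roughly 10% over the 4,800 limit.
TPP_LIMIT = 4800
RTX4090_SMS = 128
RTX4090_BOOST_GHZ = 2.52

# Implied TPP of the stock 4090 (~10% over the limit, per the article)
tpp_4090 = TPP_LIMIT * 1.10                                    # ~5,280

# TPP contributed per SM per GHz under the linear-scaling assumption
tpp_per_sm_ghz = tpp_4090 / (RTX4090_SMS * RTX4090_BOOST_GHZ)  # ~16.4

# Max SMs that stay under the limit at a 2.7 GHz clock
max_sms = TPP_LIMIT / (tpp_per_sm_ghz * 2.7)
print(f"Max SMs at 2.7 GHz: {max_sms:.1f}")                    # ~108.6
```

Rounding down to a whole SM count gives 108, which lines up with the number above.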
Like what is preventing them from just shipping 4090 cards downclocked to 2.4 GHz, and then letting China figure out how to flash a 2.7 GHz BIOS onto the cards? I guess Nvidia just doesn't want to get in trouble with the US government?
The codename of Nvidia’s post-Blackwell GPU architecture could be Vera Rubin
So not next gen, but “next-next-gen”.
I'm sure a huge number of people are. Had I bought an 8700k instead of an 8600k, I'd still be on 8th gen. That CPU is still on par with the consoles.
You can replace it with a 10-year-old CPU from eBay. Just don't, unless it's like $20. We're at DDR5 and you're on DDR3 RAM.
So Smooth Sync is just not working in most games? Like what if you tried it in something totally unexpected, like The Witcher 1 or 2? Something old, or something brand new? Is it a whitelist where they select which games to enable it for, or a blacklist where they disable it for certain games exhibiting problems?
RDNA1 and 2 were pretty successful. Vega was very successful in APUs; it just didn't scale well for gaming, but was still successful in the data center. You can't hit them all, especially when you have a fraction of the budget your competition has.
Also, he ran graphics divisions, not a Walmart. People don't fail upwards in these industries at these levels. When people fail upwards in other industries, they fail into middle management: somewhere you're out of the spotlight and out of the public eye, but don't get to make final decisions. Somewhere to push you out of the way. Leading one of fewer than a handful of graphics divisions in the world is not where you land.
It’s about 10% slower than a Ryzen 5600x in games.
“Why? I am still learning, but my observations so far: the ‘purpose’ of purpose-built silicon is not stable. AI is not as static as some people imagined and trivialize [like] ‘it is just a bunch of matrix multiplies’.”
But it is stable in a lot of cases, is it not? I mean if you're training a system for autonomous driving, or training a system for image generation, it seems pretty stable. But for gaming it certainly needs flexibility. If we want to add half a dozen features to games that rely on ML, it seems you need a flexible system.
That does remind me of how Nvidia abandoned Ampere and Turing when it comes to frame generation, because they claim the optical flow hardware is not strong enough. What exactly is "optical flow"? Is it a separate type of machine learning hardware, or is it not related to ML at all?
It mentions Olive. I don’t know what that is, but it’s suggesting it could cause AMD to catch back up. Is that true? Or is it more likely going to get them an extra 10% performance instead of the extra 110% they need to catch up?
Look at Hardware Unboxed on YouTube. They just made a video about these boards.
So is that Battlemage or not? Are they just sticking with Arc for laptops and APUs because it's more suited in some way?
I just don’t see how any 12% RAM speed increase can cause more than a 12% gain. Bottlenecks shouldn’t have this kind of effect.
A 33% speed increase from a 12% memory bump makes no sense at all. Typically a 12% OC like you're describing would not result in more than a 6% increase, and on average we're likely talking less than 4%. Hell, even going to DDR5-6000 should not get you a 33% performance bump. Typically 4000 MT/s DDR4 has pretty loose timings, which makes it not much better than a 3600 kit with tight timings.
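To put numbers on why that doesn't add up: even in the most extreme case, where a game spends 100% of its frame time waiting on memory, a 12% memory overclock can only buy you 12%. A quick Amdahl-style sketch (the memory-bound fractions here are made-up illustrations, not measurements):

```python
# Toy Amdahl-style model: the speedup from faster RAM is capped by the
# fraction of frame time that is actually memory-bound.
def speedup(mem_bound_fraction: float, mem_speedup: float = 1.12) -> float:
    # Non-memory-bound time is unchanged; memory-bound time shrinks.
    return 1.0 / ((1.0 - mem_bound_fraction) + mem_bound_fraction / mem_speedup)

for f in (0.25, 0.5, 1.0):  # hypothetical memory-bound fractions
    print(f"{f:.0%} memory-bound -> {speedup(f) - 1:+.1%} FPS")
# 25% -> ~+2.8%, 50% -> ~+5.7%, 100% (absolute worst case) -> +12.0%
```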
First of all, those are rumors, and given that the leaker doesn't even know if it's 128-bit or 192-bit this late in the game, when it's been in development for 3 years and is 10 months from release, the leaks are pretty much completely made up. RedGamingTech has a pretty bad leak accuracy record.
That being said, if it's targeting 7900xt to 7900xtx performance, and it's using GDDR7, then 192-bit makes sense. It works out to about the 7900xt's total memory bandwidth if you do the math at 34 Gbps. Currently GDDR7 is aiming for 32 to 36 Gbps.
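Quick sanity check on that math: a 192-bit bus at 34 Gbps lands at roughly 816 GB/s, right next to the 7900xt's 800 GB/s (320-bit GDDR6 at 20 Gbps):

```python
# Memory bandwidth in GB/s = bus width (bits) / 8 * per-pin data rate (Gbps)
def bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    return bus_width_bits / 8 * pin_rate_gbps

print(bandwidth_gb_s(192, 34))  # 816 GB/s, rumored part on GDDR7
print(bandwidth_gb_s(320, 20))  # 800 GB/s, 7900xt on GDDR6
```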
If you know how to read between the lines, it hints at something a lot of people already suspected and Digital Foundry talked about. Path Tracing will likely get added to UE5 with the help of Nvidia.
FSR3 isn't performing like they want, and still seems like it's months off. We know from the already existing FSR3 implementations how badly it interacts with vsync. And maybe there are Anti-Lag+ issues as well.
I feel like Intel GPUs have such a bad reputation for drivers now that even Battlemage is going to fumble out of the gate, even if it's good value, and even if the drivers by some miracle are better than what AMD has. On one hand they seem to be working hard on Arc to fix things, but I'm just super sceptical they can keep this up with less than 5% market share. They really needed Arc to be a success and to give a good first impression.
What are Nvidia's plans with their ARM CPUs? I can't imagine people will be using them for desktops anytime soon. Are they trying to get into tablets and Chromebooks?
Was the reason for the firing ever given? I've only heard speculation. Sounds like it'll just stay secret.
Processors do wear out over time, but usually not this fast. It might be that the undervolt was just barely stable at one point, and maybe even unstable in some conditions you never tested. Now even the tiniest amount of wear has dropped it below the line.
Could also be defective RAM, but that's usually defective from the factory rather than from wear.