I feel like this is based on donations to JD Power.
It is. Pure marketing.
PP100 is a really broad metric: it covers everything from defects and failures to ignorance and misunderstandings.
For example, Teslas often rank low in these studies because new owners sometimes have to make trips to the dealership just to learn how the car operates.
Still holds water to me, though: problems with UX and things not being intuitive can absolutely rise to the level of a very serious problem, and one thing this study does is provide a barometer of how brands are actually doing on production quality in a given year.
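For anyone unfamiliar with the metric: PP100 is reported problems per 100 vehicles, with every report counted equally. A minimal sketch with made-up numbers shows why a "didn't know how it works" complaint moves the score exactly as much as a drivetrain failure:

```python
# Hypothetical reports; PP100 = (total reported problems / vehicles surveyed) * 100.
# Every report counts as 1, regardless of severity.
reports = [
    {"issue": "transmission slips under load", "severity": "major"},
    {"issue": "engine stalls at idle", "severity": "major"},
    {"issue": "infotainment pairing is confusing", "severity": "minor"},
    {"issue": "owner didn't know how the frunk opens", "severity": "minor"},
]
vehicles_surveyed = 8  # made-up sample size

pp100 = len(reports) / vehicles_surveyed * 100
print(f"PP100 = {pp100:.1f}")  # PP100 = 50.0
```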
Tesla has dealerships now?
Technically yes, they are dealerships.
So Tesla owners are returning to the dealer in the third YEAR of ownership to find out how things work!? This is the Vehicle Dependability Study. Not Initial Quality.
Ah yes, the most reliable Buick is back on the charts…
People have tried numerous times to prove JD Power acts in bad faith and so far have come up short.
JD Power is taken very seriously by OEMs; people will lose their jobs over bad rankings or survey results.
You have to pay JD Power to even be considered for their lists. That’s enough of a conflict of interest for me. And 99% of the time their awards are stupid. Like “initial quality”: wtf is that? My car didn’t explode in 3 months but fell apart in 6, and it won an initial quality award???
No, you have to pay them to get access to their data. And initial quality actually means something.
I bought a PT Cruiser because it was recommended by Consumer Reports. I never trusted them again.
But did it initially explode? No, it didn’t. 😏
/s
They don’t necessarily act in bad faith, but they’re not terribly transparent and their methodology is flawed.
They straight-up say:
“The infotainment category continues to be the most problematic with an average of 49.9 PP100”
But they don’t break it out for each brand.
So the biggest contributor to these numbers is something that’s more a minor nuisance than a serious problem like engine or transmission trouble, but they don’t provide a breakdown for each car of how much comes from minor complaints like this versus major issues that could leave you stranded or pose a safety hazard. The omission of that data on a brand-by-brand basis makes these numbers basically useless, and hence something I ignore.
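For what it’s worth, the breakdown being asked for would be trivial to produce from the raw survey reports. A sketch with invented data (JD Power does not publish anything in this form):

```python
from collections import Counter

# Hypothetical per-brand reports tagged by severity; all data invented.
reports = [
    ("infotainment", "minor"),
    ("infotainment", "minor"),
    ("voice recognition", "minor"),
    ("transmission", "major"),
    ("brakes", "major"),
]
vehicles_surveyed = 10

# Split the single PP100 figure into a per-severity PP100.
by_severity = Counter(sev for _, sev in reports)
breakdown = {sev: n / vehicles_surveyed * 100 for sev, n in by_severity.items()}
print(breakdown)  # {'minor': 30.0, 'major': 20.0}
```

That minor/major split is exactly what the published brand tables omit.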
You have to dig into the results and the ranking method, but usually the most common problems are with infotainment. If users find a system harder to use, the flood of reported problems often drops the ranking.
In that case GM should be at the top
If you look into JD Power, they don’t play that game.
If you suck enough, but pay to play, they’ll create a bullshit category for you :P
They get funding from the corporations “Buying the data”.
JD Power doesn’t have empirical credibility. If you compare them against the standard, Consumer Reports, you can see many of the biases.
It’s completely the opposite. CR is subscription-based and therefore statistically invalid: the sample self-selects. Also, they throw in their “expert” opinion. JD Power is data-mining actual owners, who don’t have to pay anything to give feedback.
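The statistical complaint here boils down to self-selection. A back-of-the-envelope sketch (all numbers invented) of how a subscription-based sample can overstate the true problem rate:

```python
# Toy model of self-selection bias; every number here is made up.
owners_with_problem = 200     # true problem rate: 200 / 1000 = 20%
owners_without_problem = 800

# Suppose dissatisfied owners are twice as likely to subscribe and respond.
respond_if_problem = 0.40
respond_if_fine = 0.20

responders_with_problem = owners_with_problem * respond_if_problem      # 80
responders_without_problem = owners_without_problem * respond_if_fine   # 160

true_rate = owners_with_problem / (owners_with_problem + owners_without_problem)
sample_rate = responders_with_problem / (responders_with_problem + responders_without_problem)
print(f"true rate: {true_rate:.0%}, self-selected sample rate: {sample_rate:.0%}")
# true rate: 20%, self-selected sample rate: 33%
```

Whether CR’s readership actually skews this way is an empirical question; the sketch only shows why a self-selected sample can’t be taken at face value.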
JD Power offers “consulting services”: they come into your plant, look at your processes, and make recommendations that will decrease your “problems per 100 vehicles” and help you score better. Meantime, anything a customer doesn’t like is a “problem”.