Did you see Kade Anderson pull the opposite Paul Skenes? Extreme 1B side of the rubber against lefties, 3B side against righties
Good catch! I’ll have my write-ups on college arms out sometime in the next week to 10 days. Kade is a funky one. Unsure models will love him, but he’s getting stellar results.
And putting out a YouTube video as well
Have you ever thought about doing like 3-sentence blurbs at the bottom of your college SP write-up for the best Stuff+ relievers? Like Shores, Tanner Franklin, etc.
Man, that’s a great adjustment from Bassitt. Something was clearly wrong last year, and it looks like he’s fixed it, whether it was mechanical or injury-related.
It’s not a big deal, and I’m a big fan of Stuff+ in general, but Stuff+’s grade for Jose Soriano’s curveball is pretty funky to me. Despite throwing a pretty similar curve to the one he had two years ago, which got a 120 grade, he’s got a 70 Stuff+ grade on the pitch this year. The model seems quite good at handling most pitch types, but some of the curveball grades are really strange to me: very small changes in a curve’s metrics can cause huge swings in its grade, and unlike with other pitch types, those big grade swings don’t seem to predict big changes in results.
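Just to make that concrete, here’s a toy sketch of the kind of cliff I mean. The scoring function is completely made up (not the actual Stuff+ model), but it shows how a model with a sharp boundary on something like release angle can let a tiny input nudge swing the grade far more than a comparable nudge to velo or movement:

```python
import numpy as np

def toy_stuff_grade(velo, ivb, hb, release_angle):
    """Hypothetical grade function, NOT the real Stuff+ model.
    It bakes in a sharp nonlinearity on release angle to mimic
    the cliff-like behavior described above."""
    base = 100 + 1.5 * (velo - 80) - 0.8 * (ivb + 10) + 0.5 * abs(hb)
    # Steep penalty once release angle crosses ~2.0 degrees -- the kind of
    # boundary a boosted-tree model can learn near a decision split.
    cliff = -40 / (1 + np.exp(-10 * (release_angle - 2.0)))
    return base + cliff

baseline = dict(velo=80.0, ivb=-12.0, hb=8.0, release_angle=1.95)
print(f"baseline grade: {toy_stuff_grade(**baseline):.1f}")

# Nudge each input by a small, realistic amount and watch the grade move.
for feat, delta in [("velo", 0.5), ("ivb", 1.0), ("hb", 1.0), ("release_angle", 0.1)]:
    nudged = {**baseline, feat: baseline[feat] + delta}
    swing = toy_stuff_grade(**nudged) - toy_stuff_grade(**baseline)
    print(f"{feat:>13} +{delta}: grade moves {swing:+.1f}")
```

In this toy version the release-angle nudge moves the grade roughly ten times more than the others, even though it’s the smallest change in raw units, which is exactly the jumpy behavior the curveball grades seem to show.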
You make some good points here! I was chatting with somebody who worked on Eno's new model this year about some of the oddities I've found (Sugano's 4S and Max Meyer's bullet SL).
I still don't have the full picture, but my gut tells me the model is somewhat matching the results of this year's offering against prior years: if the pitch performs well, that shows up in the stuff grade. Unsure if that's true ... or even if it makes sense. I'm not saying it reflects results in the predictive sense, more in the reflective sense, which I feel loses some of the model's value/intent.
Now, that being said, one thing I was told is that models are only valuable when they give you a result that differs from expectation; otherwise they're just mimicking what you already think. A big confirmation bias loop. So that's something I've been thinking about.
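One way I could actually poke at the reflective-vs-predictive question: check whether a pitch's stuff grade lines up better with same-year results or with next-year results. Rough sketch below, with made-up column names on a table you'd have to assemble yourself (one row per pitcher/pitch-type/season):

```python
import pandas as pd

def reflective_vs_predictive(df: pd.DataFrame) -> pd.Series:
    """df columns (hypothetical): pitcher, pitch_type, season,
    stuff_plus, rv100 (run value per 100 pitches)."""
    df = df.sort_values(["pitcher", "pitch_type", "season"]).copy()
    # Pair each pitch-season with the same pitch's NEXT-season results.
    df["rv100_next"] = df.groupby(["pitcher", "pitch_type"])["rv100"].shift(-1)
    return pd.Series({
        # Strong here but weak below = the grade is mostly describing
        # what already happened (reflective)...
        "same_year_corr": df["stuff_plus"].corr(df["rv100"]),
        # ...while this is the number a genuinely predictive grade wins on.
        "next_year_corr": df["stuff_plus"].corr(df["rv100_next"]),
    })
```

If the same-year correlation dwarfs the next-year one, that's the "grading the results" pattern I'm worried about.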
I think the nature of these models is that they're often black box-y, so I'm unsure how valuable it is for me to have a model when I can't understand *why* a difference exists. Sure, it might be marginally more predictive, but my purpose on the public side here (selfishly) is to *learn*.
Alright, enough rambling haha
My suspicion is that, for curveballs, Stuff+ doesn't do a great job of disentangling stuff and command, and I think that's because of how it considers release angles and/or approach angles. On Soriano, for example, his Pitching+ on the curve is relatively consistent year over year, but this year a 130 Location+ is carrying the pitch, despite his zone rate on it being below his career average.
A little bit ago I asked you about Justin Slaten's mediocre Stuff+ grade on his curveball, and while I'm not super confident, I think the reason has something to do with his lack of command of the pitch.
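If I wanted to test that entanglement suspicion, here's roughly where I'd start: if stuff and command were cleanly separated, Stuff+ and Location+ should be close to uncorrelated within a pitch type. Column names are hypothetical; you'd adapt this to however you pull the leaderboard data:

```python
import pandas as pd

def entanglement_by_pitch_type(df: pd.DataFrame) -> pd.Series:
    """df rows (hypothetical): one per pitcher/season/pitch_type,
    with stuff_plus and location_plus columns."""
    return (
        df.groupby("pitch_type")[["stuff_plus", "location_plus"]]
          .apply(lambda g: g["stuff_plus"].corr(g["location_plus"]))
          .sort_values()
    )
```

If curveballs sit well above fastballs and sliders here, that'd be consistent with the release-angle/approach-angle features pulling command info into the stuff grade.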