• FlowVoid@lemmy.world · 3 months ago

    Presidential elections are tricky because there is only one prediction.

    Suppose your model says Trump has a 28% chance of winning in 2024, and mine says Trump has a 72% chance of winning in 2024.

    There will only be one 2024 election. And suppose Trump loses it.

    If that outcome doesn’t tell us anything about the relative strength of our models, then what’s the point of using a model at all? You might as well write a single line of code that spits out “50% Trump”; it would be just as useful.

    The point of a model is to make a testable prediction. When the TV forecast says there is a 25% chance of rain, that means it should rain on roughly one fourth of the days that carry such a prediction. It doesn’t have to rain every time.
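
    That claim is checkable in principle. Here is a toy sketch of such a calibration check, with made-up numbers rather than any real forecaster’s data: gather every day on which “25% chance of rain” was issued and see how often it actually rained on those days.

    ```python
    # Toy calibration check with invented data (not any real forecaster's output).
    forecasts = [0.25, 0.25, 0.70, 0.25, 0.25, 0.10, 0.25, 0.70, 0.25, 0.25]
    rained    = [0,    0,    1,    1,    0,    0,    0,    1,    1,    0]

    # Keep only the days where "25%" was issued, then compute the hit rate.
    days_at_25 = [rain for p, rain in zip(forecasts, rained) if p == 0.25]
    hit_rate = sum(days_at_25) / len(days_at_25)
    print(f"'25%' issued on {len(days_at_25)} days; it rained on {hit_rate:.0%} of them")
    # A well-calibrated forecaster lands near 25% once there are enough such days.
    ```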

    But Silver only makes his 2016 prediction once, and then he builds a new model for the next election. So he has exactly one chance to get it right.

    • MonkRome@lemmy.world · 2 months ago

      His model has always been closer, state to state and election to election, than anyone else’s, which is why people use his models. He is basically using the same model and tweaking it each time; you make it sound like he’s starting over from scratch. When Trump won, none of the prediction models said he would win, but Silver’s at least showed a fairly reasonable chance that he could. His competitors were forecasting a much more likely Hillary win, while he was showing that Trump would win basically 3 times out of 10. In terms of probability, that’s not a blowout prediction. His model was working better than his competitors’. Additionally, he predicted the battleground states to within about half a percentage point, IIRC, which happened to be the difference between a win and a loss in some states.

      So he has exactly one chance to get it right.

      You’re saying that the outcome landing on one of those 3 in 10 is “getting it wrong”; that’s the problem with your understanding of probability. By saying that, you’re showing that you don’t actually internalize the purpose of a predictive forecast. It’s not a magic wand, just a predictive tool. That tool is useful if you understand what it’s really saying, instead of extrapolating something it absolutely is not saying. If a model says something will happen 3 times out of 10, the thing happening is not evidence of an issue with the model. A flawless model with ideal inputs can still show a 3-in-10 chance, and that chance should hit in 30% of scenarios. Because we have a limited number of elections it’s hard to prove the model out, but considering he has come closer than his competitors, it certainly seems he knows what he is doing.
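
      To put a number on that last point, here is a minimal simulation, purely illustrative and nothing to do with Silver’s actual model: if the true chance of an event really is 30%, the event still happens in roughly 3 of every 10 runs, so seeing it happen once can’t by itself discredit the forecast.

      ```python
      import random

      # Simulate a perfectly calibrated 30% forecast over many hypothetical runs.
      # TRUE_P is an assumption for illustration, not an estimate from real elections.
      random.seed(0)
      TRUE_P = 0.30
      trials = 100_000
      hits = sum(random.random() < TRUE_P for _ in range(trials))
      print(f"Event with a 'correct' 30% forecast occurred in {hits / trials:.1%} of runs")
      # Prints roughly 30%: a single occurrence is exactly what a sound model
      # expects to see about a third of the time, not proof that it was wrong.
      ```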

      • FlowVoid@lemmy.world · 2 months ago

        First, we need to distinguish Silver’s state-by-state prediction from his “win probability”. The former was pretty unremarkable in 2016, and I think we can agree that, like everyone else, he incorrectly predicted WI, MI, and PA.

        However, his win probability is a different algorithm. It considers alternate scenarios, e.g. Trump wins Pennsylvania but loses Michigan. It somehow finds the probability of each scenario, and somehow calculates a total probability of winning. This does not correspond to one specific set of states that Silver thinks Trump will win. In 2016, it came up with a 28% probability of Trump winning.

        You say that’s not “getting it wrong”. In that case, what would count as “getting it wrong”? Are we just supposed to have blind faith that Silver’s probability calculation, and all its underlying assumptions, are correct? Because when the candidate with a higher win probability wins, that validates Silver’s model. And when that candidate loses, that “is not evidence of an issue with the model”. Heads I win, tails don’t count.

        If I built a model with different assumptions and came up with a 72% probability of Trump winning in 2016, that differs from Silver’s result. Does that mean that I “got it wrong”? If neither of us got it wrong, what does it mean that Trump’s probability of winning is simultaneously 28% and 72%?

        And if there is no way for us to tell, even in retrospect, whether 28% is wrong or 72% is wrong or both are wrong, if both are equally compatible with the reality of Trump winning, then why pay any attention to those numbers at all?

        • MonkRome@lemmy.world · 2 months ago

          I think you’re missing the point of predictive modeling. The probability of separate outcomes is built in. This isn’t fortune telling; there is no crystal ball. Two predictive models can make different predictions and both may have value, just like separate meteorologists can issue different forecasts yet predict accurately about as often over time, albeit at different intervals. IIRC, the average meteorologist correctly predicts rain over 80% of the time, far better than predicting by chance. But if you look up the forecast in more than one place you often get slightly different numbers: the models differ, yet they arrive at similar conclusions and are usually mostly accurate. It’s the same with political forecasts; they are only as valuable as your understanding of predictive modeling. If you think they are intended to mirror reality flawlessly, you will be sorely disappointed. That doesn’t make the models “wrong”, but it doesn’t make them “right” either. They are just models that usually predict a probable outcome.

          • FlowVoid@lemmy.world · 2 months ago

            I don’t expect a model to be perfect. But it is certainly possible for one model to be better than another; for example, one might find the Weather Channel forecast less accurate than AccuWeather’s (at least for your region).

            Which, in turn, means it must be possible to decide when one forecast is more “right” or more “wrong” than another; otherwise, what basis would you have for judging which is better?
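
            Neither of us has named a concrete yardstick, but one standard way to make that comparison is a proper scoring rule such as the Brier score, averaged over many forecasts. A rough sketch using the two 2016 numbers from this thread (Silver’s 28% and my hypothetical 72%), scored against the election Trump actually won:

            ```python
            def brier(prob_of_win: float, won: bool) -> float:
                """Brier score for one binary forecast: (forecast - outcome)^2, lower is better."""
                return (prob_of_win - (1.0 if won else 0.0)) ** 2

            # Score the two 2016 forecasts discussed in this thread against Trump actually winning.
            trump_won = True
            for label, p in [("28% Trump model", 0.28), ("72% Trump model", 0.72)]:
                print(f"{label}: Brier score {brier(p, trump_won):.3f}")
            # 0.518 vs 0.078: on this one outcome the 72% model takes the smaller penalty,
            # but a single data point is thin; averaged over many forecasts, scores like
            # these are how competing forecasters are usually compared.
            ```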