Nate Silver and the Fetish of Data-Driven Journalism

“Sir, the possibility of successfully navigating an asteroid field is approximately 3,720 to 1.”

“Never tell me the odds!” ―C-3PO and Han Solo

Yogi Berra famously threw the fat lady off her stage in 1973 when he said, “It ain’t over till it’s over.” With the rise of, and reliance upon, data-driven modeling of elections and sports, we might just as well rephrase it as, “It’s over before it begins.” But we’d be wrong to do so.

Like most oddsmakers going into Super Bowl LI, Nate Silver’s FiveThirtyEight, owned by ESPN, predicted the New England Patriots to win. But with the Falcons up 28-3 in the second half, the site gave the Patriots a less than 1 percent chance of winning. FiveThirtyEight tweeted: “That Patriots drive took another 5:07 off the clock and actually dropped their win probability from 1.1% to 0.5%.”

Of course we all know what happened next. In yet another statistical upset in a year of upsets, the Patriots defied all probability after the half. They scored 31 unanswered points, taking the Super Bowl into the first overtime in its history, which they then proceeded to win—giving America, and the world at large, a clinic in determination, momentum, and the ability of human beings to surmount even the greatest of statistical odds.

It was a lesson in the value of risk-taking and accomplishment, values that were once core elements of the American mythos but that have increasingly been traded away for the perceived infallibility of data-driven models and analyses.

Since the mainstreaming of data punditry, exemplified by Nate Silver’s meteoric rise and FiveThirtyEight’s hallowed place in the culture, we’ve seen a cultural shift with regard to the use of statistics and data. Big Data, polling, and, more specifically, Silver’s predictions have become the equivalent of a mic drop in any conversation about sports or politics. Throughout the election cycle, on TV shows and social media feeds across the country, his pronouncements were treated as sacrosanct papal bulls. His data-driven analysis, whether accurate or not, provided gravitas for those seeking a more commanding way to eviscerate opponents in debate. “Silver gives Hillary an X percent chance to win the election” became the trump card in any conversation.

We had moved to a point where we were seemingly willing to assign data modeling more value than the variance, irrationality, and risk-taking inherent in human decision-making. This happened during the Super Bowl just as it happened during the election. In both cases, statistical models were held up as unassailable predictors.

And in both instances, they were wrong.

For his part, no matter how certain Silver was of his model, he’d often hedge. In October 2016, under the headline “Clinton Probably Finished Off Trump Last Night,” Silver wrote: “I’m not sure I need to tell you this, but Hillary Clinton is probably going to be the next president. It’s just a question of what ‘probably’ means.” (emphasis added) He then spent the bulk of the article convincing us that Clinton would win, but at the end noted the possibility he could be mistaken. When results of the Republican primary, the Michigan Democratic primary, and the general election proved him very wrong, Silver’s postmortem explanations moved the goalposts, claiming event X or event Y was unprecedented, thus skewing the initial models. Even after the Super Bowl, in an attempt to make light of the situation, he tweeted: “At least the Falcons won the popular vote.” To which a user responded, “Nate, you don’t get to make election jokes.”

Silver also acknowledged in a lengthy post-election analysis that subjective best guesses and metrics are often baked into the stats when unprecedented things happen. By saying this, Silver admits that pure stats—facts, figures, polls, and data—might work for averages and as descriptors, but they cannot accurately adjust for extraordinary events and people. This was best summed up by David Morris, writing about Silver’s failure to predict Trump’s victory in the Republican primary: “Unlikely events like the Trump nomination are, by their very nature, impossible to predict.”

The models, thus, don’t ever really predict the future. They are informed best guesses that describe how current events would likely play out if those events and the responses to them conformed to the past. The trouble with trusting the Oracle, however, is that when history occurs, it is often a break with the past.

Silver’s accuracy is not the issue here. Everyone gets things wrong from time to time. It’s just that despite being fabulously wrong over and over again, and despite his admissions of fallibility, people still cling to his pronouncements as the ultimate argument from authority. This signals a more profound structural problem with the culture—one too eager to find quantifiable solutions to complex and often unquantifiable situations, especially when those quantifiable solutions comport with their views of the world as it should be.

It’s not Silver. He’s just the fetish for the phenomenon.

Jason Rhode, in Paste Magazine, opens his withering critique of Silver with a quotation from Federalist 55: “Nothing can be more fallacious than to found our political calculations on arithmetical principles.” And yet today many seem to believe that Silver is arithmetic made flesh; as such, he is an avatar of a cultural desire for statistical certainty in the face of a constantly changing and often unpredictable world of human interaction and politics. He is Hermes bringing us the word of the gods. But sadly, we miss the point of hermeneutics, that discipline of critically assessing the nature of Hermes’ message.

Instead, invoking FiveThirtyEight seems to bestow upon the speaker of the Silverian incantations an air of both intellectual superiority and mathematical indifference. “Nate Silver predicts…” is akin to saying “Shut up, idiot, what do you know? The numbers don’t lie; don’t doubt your betters!” But that appeal to Silver is really an appeal to the illusion of a fully predictable future.

Ultimately, an overreliance on Silver—and Big Data in general—is a quasi-religious attempt to bring order out of chaos, an almost fundamentalist approach that borders on number zealotry. It’s an attempt to overlook how little we know about what we imagine we can design.

The American zeitgeist until quite recently has been opposed to this view of human nature and events. From our movies, which stress against-the-odds comebacks, to our national mythos as the set of upstart colonies that managed to defeat the strongest empire on earth, we have reveled in being exceptions to the rule. This thinking, in turn, has led to a national character that stressed self-reliance and risk-taking.

But now, with a large segment of the population and an even larger segment of our leaders all too happy to reduce human interaction to data points, we run the risk of becoming an increasingly risk-averse and technocratic society where people value comfort over vision, ease over innovation, and utility over passion.

Statistics are an integral component of decision-making, but as the boilerplate on every investment ad reminds us, “Past performance is not a guarantee of future results.” Ultimately, the Super Bowl, the election of 2016, and so much of history show us the problem with technocrats and those who would use the pronouncements of statisticians as some guaranteed proof of outcome. They can’t take into account human ingenuity, grit, and the ability to create hope and momentum in the face of time decay and defeat. Silver’s recent failures are so traumatic to those who would quote him as scripture because they upend their notion of a society and a human nature whose interactions can be easily reduced, predicted, and thus controlled.

About Boris Zelkin

Russian-born Boris Zelkin is an Emmy Award-winning composer who has written the music to countless films, documentaries, television shows and major sporting events, including the Tucker Carlson show, Bill O’Reilly, “Gosnell,” “FrackNation,” Citizens United’s “Rediscovering God in America II,” Roger Simon’s “Lies and Whispers,” the America’s Cup, the Masters, the World Skating Championships, the U.S. Open, NASCAR, the Stanley Cup Championship, and the theme to ESPN’s NCAA championship coverage. Zelkin received his B.A. from Colgate University and earned his M.A. in religion from the University of Chicago Divinity School. He has written extensively on the culture for various online journals and was a major contributor to the recently released “Bond Forever,” a book about the James Bond franchise. He currently resides in Los Angeles but is always looking for a way out.


145 responses to “Nate Silver and the Fetish of Data-Driven Journalism”

  1. The Left has always been quite fond of appealing to what it imagines to be its own “historical inevitability”. But its arguments have more supposition than science to them.

    • Excellent point. But even there, Nate said the ‘blue wall’ was bullshit. And he has never been fond of the left’s ‘coalition of the ascendant’ theory.

      • Silver was wrong from the middle of 2015 right up until November 8, 2016. Yes, he was less bullish on Hillary Clinton than others, but he still got it wrong. But as Zelkin points out, Silver isn’t the problem; the people who treat him as God are.

      • Yes, a good friend believed in the “God of Nate Silver” over and over again, and confidently declared to me, “Trump will never be President.” I simply laughed and told him to remind me of Nate’s stats on Nov 8.

      • So you’re the female college grad who voted for trump. Good to meetcha!

    • Same goes for Globtarded Warming … and the myth of it.

    • Its arguments are like Ebenezer Scrooge supposes his ghostly visitors to be: a bit of undigested potato; more of gravy about them than the grave.

    • Yup, her people had already voted in advance so the election GOTV program just reminded Republicans to go and vote. In FL, anyway.

  2. Silver and his ilk exhibit hubris when they believe they have quantified what is inherently unquantifiable. His zealots who hang on his every word betray their ignorance of the subject by unthinkingly believing his predictions without understanding the basis for them or how they might be faulty.

    • There’s no hubris in saying something has a 28.6% chance of happening (as Silver predicted for Trump winning). It means exactly what it says; the problem is in the reader’s interpreting it to mean that something is guaranteed not to happen just because it’s under some threshold that they believe in.

      • The hubris is in believing they have quantified the unquantifiable. Silver’s percent chance for Trump to win (like his Super Bowl win probability) is a meaningless number that has no basis in fact. They cannot know all potential outcomes and permutations because they are inherently unknowable. Putting numbers like that out there, then having people use them as bases for arguments, is a fair example of the danger of hubris and ignorance.
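        For the record, what a probabilistic forecast claims can be sketched in a few lines. This is a minimal illustration, assuming the 28.6% figure quoted in this thread is the true probability of the event; everything else here is hypothetical:

```python
import random

random.seed(42)

# Replay the "election" many times with a fixed 28.6% true probability.
# The single real-world outcome neither confirms nor refutes that number;
# only the long-run frequency does.
TRIALS = 100_000
wins = sum(random.random() < 0.286 for _ in range(TRIALS))

print(f"Event occurred in {wins / TRIALS:.1%} of {TRIALS:,} trials")
```

        Over many trials the frequency converges on 28.6%; the catch, as the article argues, is that a presidential election is run exactly once.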

  3. All the above observations are true but surely they also are a comfort to Trump-supporters like me.

    I suspect that Donald Trump’s leading characteristic is that everyone always underestimates him.

    Someone who swims successfully against the Lunatic Mainstream tide is going to be assisted by Nate Silver and his type of thinking. President Trump will regularly catch his opponents bending. They will have their data-driven projections which they are peering over and relying upon, while he does something they and their data never thought of, let alone expected.

    (This happens continually in his interviews with hostile journalists in the Mainstream Media.)

  4. Let’s see, in the past 12 months:

    -Trump won the primaries, becoming the Republican nominee, and if the Democrats hadn’t rigged everything, Bernie Sanders might well have won the Democrats’ nomination.
    -The British voted for Brexit, and it passed by a good majority.
    -The Cubs made an extraordinary comeback when they looked done for and won the World Series.
    -Trump won the election, taking states that the Democrats thought were safely theirs.
    -The Patriots mounted an impossible comeback.

    And that’s just what I can think of off the top of my head.

  5. Side Note: My favorite statistical dalliance was the article from Salon (I think) “proving” all Trump voters were racist. It was very scientific and referred to a survey done in early 2016. I made it through most of the article before my laughter was replaced by incredulity and nausea. At this point I feel sorry for people and their delusion … I am re-reading Jeremiah and see God repeatedly commenting on His people’s self-delusion, justifications, etc. at a community and societal level … same thing here. Case in point: Mr. Trump is xenophobic for making a few people wait in an airport, but Mr. Obama is alright for ordering the killing of a Muslim-American citizen in a drone strike with no due process. They both wanted to protect America. They both used their Constitutional and legal power. It’s amazing to watch. Doug

  6. Wait… Einstein was wrong when he said “God does not play dice with the universe…”??!!
    Well then, I guess I’ll have to go with Hawking on this one: “God still has a few tricks up his sleeve”.
    Sorry Nate but glad you were wrong.

  7. There’s one very fundamental component of this story not well explained. Nate Silver himself repeatedly warned Trump was only one statistical error away from winning the election. He also maintained (many times) that there existed a high degree of uncertainty in this election and that a very high number of undecideds remained right up to the end. All these things he wrote BEFORE the election. Along with everyone else, he missed Trump’s primary success. But no one could have known that so many stubborn candidates (Cruz, Kasich especially) would not get out until it was too late or that Marco Rubio would lose Florida to Trump (in the general where everyone comes out, Rubio had 7% more votes for the Senate than did Trump for the presidency). Last, Silver gave Trump a much higher chance of winning than any other aggregator.

    • But the point is, he was still wrong and everyone treated his stats as gospel.
      He hedged just to protect himself in the slight (in this case Yuge!) chance he was wrong.

      • Since he predicted a substantial likelihood of a Trump victory, you cannot say Silver was “wrong”. To claim this is to misunderstand the meaning of a probabilistic prediction.

        Regarding Isaiahdolan’s point, I agree that Silver’s political viewpoint is quite biased to the left, which is why I usually don’t read his articles. But his models are a separate story, are derived from data rather than opinion, and are generally good.

      • No, I looked at his predictions BEFORE the election, the ones I saw were pretty far off….

      • Right. He was waaay off all year. Not just one blip or a few misses. Almost ALL year. I think he was bullish on Trump for a week after one WikiLeaks dump, along with the national polls. But he was way off over and over. Hillary was NEVER a 70-80% shoo-in. That was delusion on top of illusion, and Nate was a carnival fortune teller.
        Proof that hustlers and thieves still run most of the media is that Silver is still at the top of his trade.

      • So Nate aggregates polls. He does not conduct them. His aggregations are only as reliable as the polls that feed them.

      • Dumb. HE WEIGHTS THEM, not just aggregates. You need to look up his methodology. If he just aggregated he’d be RCP, not a prognosticator.
        BTW his system worked ONCE and didn’t TWICE. Not quite science, eh?

      • All polls, all aggregations are weighted. I’m very familiar with methodology. And RCP had Clinton +3.3 on Election Day, about the same as Nate. His system worked “once?” You understand there are baseball, football, basketball, and other models on FiveThirtyEight, right? If you think he has only forecasted 3 events, I think I understand your problem.

      • Not my question; you dodged it. You are familiar, huh? I laid out specific ways in which sleight-of-hand Sharlatan Silver did calcs on the popular vote, then did a bait and switch into an electoral vote prediction, and this is your best answer? Being “familiar” is like you being at an operating table, being asked to assist because you said you “are familiar” with operations. Or perhaps you work for NOAA and “are familiar” with replacing solid buoy data with bogus but higher ship data, and then “adjusting the data further upward” in 2015. Perhaps you “are familiar” with that new statistical technique.

        Kiss me quick, you fool, Nate.

      • Wow. You done slipped a few cranial gears there, haven’t you?

      • Once again: he predicted a 70% chance of Clinton winning the election, not a 70% chance of Clinton winning the popular vote.
        Nothing to do with weightings of the popular vote, everything to do with making electoral college vote predictions for each state, then combining them (read: massive margins of error for this combination step) to predict the national electoral college vote. Two entirely different projects, two entirely different methodologies.
        You are still thinking popular vote, but Nate Sharlatan predicted a 70% probability of Clinton winning, which is the electoral college vote.

        Once again, bad science wins, er, loses in radical aristocratic land.

        The Deep South predicted they had an 80% chance of winning the Civil War. Same methodology; you guys are still using it.

      • What are you, the stats Nazi? The national forecast was a small part of what aggregators do. He had 50 state aggregations as well. Why comment on something you obviously know nothing about?

      • His system re national elections.
        And what’s the point of saying “he just aggregates polls” if you are leaving out his very important additional step of SCORING the polls? It’s a very active Nate, not a passive one.

      • Nate’s model is proprietary, and he constantly makes adjustments to it.

        No publication. He just puts his finger on the scale, er, makes weightings, when he feels like it.
        You can predict the popular vote. You cannot translate that into a probability of winning the electoral vote. Even so, he should have assigned a confidence interval to his “probability of winning,” and that interval would have been at least plus or minus 30%. No statistician I know makes a probability forecast statement, in a model already based on uncertainties, without showing a confidence interval for his prediction.

        Really bad science, like the NOAA garbage in climate data. And guess what, NOAA has now conveniently lost their database. No replicability possible for other scientists.

        Oh, by the way, no confidence intervals in the NOAA report either. What the hell are they teaching in math at universities these days?
        Guess what the most popular job market is now? Data scientist. Oops.

      • Hogwash. I can say his model was fatally flawed, that he did a study of the popular vote, and slid it to the public as a “winning percentage” which means he used it to state a 70% chance of winning the Electoral College, when his model was only good for the popular vote.

        Hogwash is hogwash is hogwash.

        And he never revealed his model……BIAS ALERT.

      • He wasn’t wrong. He wasn’t making a prediction. He was assigning a probability that something will happen. If I say there is a 25% chance of flipping two consecutive heads, and you flip two consecutive heads, that doesn’t make my probability of that event occurring incorrect.
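        The coin-flip analogy is easy to check directly. A toy simulation, not anyone’s election model:

```python
import random

random.seed(7)

# Flip two fair coins per trial; "both heads" has probability 0.25,
# yet it still turns up in roughly a quarter of all trials. The event
# happening does not make the 25% figure wrong.
TRIALS = 100_000
both_heads = sum(
    random.random() < 0.5 and random.random() < 0.5 for _ in range(TRIALS)
)

print(f"Two heads in a row: {both_heads / TRIALS:.1%}")
```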

      • His forecast gave Trump a 30% chance of winning. Trump won. What was he wrong about?

      • It gave Trump a 30% in the final week. It was as low as <10% in the months before. Any normal person knows that Trump's chances didn't all of a sudden improve dramatically one week out.
        The 30% was so he could cover his a$$ just in case but still be propaganda for the dummies. These pollsters are not scientists or real statisticians doing refereed work. They are spokesmen.

      • One, Nate was actually at 50/50 as well in prior months. Two, he’s not a pollster. Do you…have any remote idea of what we’re talking about here? It must be a frightening world when you have no education and no idea of how the world works.

      • Pollsters should have been prognosticators/seers – the ones you put stock in – but pollsters are in large part also cons. You’ve been beat up pretty good on this board so I wouldn’t mention “education” or the “real world”.

      • Lord, there’s another one. Doesn’t anyone have a high school diploma here?

      • Hey, are you for Trump, and a female college grad!

        Welcome, welcome. I actually gave you a long answer above, if you really are an interested person. Even if you are a dumb-bunny hard-left radical, give it a read, then use it to start a fire to warm your toes.

        Goodnight and God bless.

    • Insidiuous Pall: I followed the politically biased Silver and at times he saw Trump as a threat but most often he wrote him off. His little articles were written from the POV of a leftist Manhattanite to other like minded tribal peers. If Silver’s supposed saving grace is he gave Trump a better chance than most – well, as the old saying goes, close only counts with horseshoes and hand grenades.

      • Even though his models predicted Hillary as the winner, you could tell….he had a BAD feeling about this.

      • Political views have nothing to do with his model. The reason you don’t like him is probably that he is a liberal. That’s fine – I don’t like liberalism at all. But I can discern statistics from politics. Apparently not many around these parts can.

      • This has nothing to do with the fact that I don’t like his snarky POV. It has everything to do with reading his poll interpretations throughout the campaign and please don’t try to rely on one static poll as representative as to what he predicted along the way. I assume if you calculated his mean values you might agree.

      • Calculated what “mean values?” I’ve followed and commented on that site every day for years. As regards prez elections, he develops models for each state and one for the national vote. The models absorb polling data. From those models, probabilities are generated. He gave Trump roughly a 30% chance of winning. He won. This happens all the time. You folks don’t understand what it is that aggregators do. And by definition, the aggregations are only as accurate as the polling data that informs them. Blame the pollsters.

      • If you looked at the standard error of the mean for the polls, they actually said that the election results were not statistically significant….

      • Standard…what? Are you referring to standard deviation? The “mean for the polls?” Are you perhaps trying to sound as if you are fluent in statistics? There doesn’t seem to be a point.

      • If you were fluent in statistics you would know that the SEM is a measure of confidence that tells you how “loose” any particular measurement is. So in this case, the poll would say Hillary would win with 52% +/- 4%. The SEM IS 4%, meaning the 52% measurement is accurate within 4% on either side. The number could be as low as 48% or as high as 56%. In this case it turned out to be on the low side; the actual number and the predicted number were not statistically different.

        The point is that the actual % of the vote Hillary got WAS within the predicted SEM and was not actually incorrect. In general, people just don’t understand how measurement and stats really work … vis a vis what you just wrote!
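        For what it’s worth, the interval described above follows from the standard formula for a proportion, z * sqrt(p(1-p)/n). The 52% and the sample size below are illustrative round numbers, not taken from any specific 2016 poll:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of roughly 600 respondents showing 52% carries about a 4-point
# margin, so an actual result of 48% sits at the edge of the interval,
# not outside it.
moe = margin_of_error(p=0.52, n=600)
print(f"52% +/- {moe:.1%}")
```

        Note that the margin shrinks with the square root of the sample size, which is why quadrupling a poll only halves its error.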

      • Few Progs ever take math and certainly not stats, too busy with “save the gay whales” studies….

      • I think you mean, ‘e.g.’ what I wrote. “Vis a vis” means by way of, or face to face. So you’re talking about margin of error. Why not just say that? Of course, the polls were not wrong. As Nate noted, there was a huge number of undecideds that broke at the end. It’s not difficult to figure these were Republicans who did not like Trump but had no alternative when considering the Supreme Court.

      • No, it means, in regards to…..but how did Nate know the undecideds broke for Trump?

      • I know Silver is an exceptionally bright guy who knows numbers, trends and polls. But he is also a snarky liberal who gave Hillary a 71% chance of winning. He also failed in the Republican primary. When he is right (2008/2012) he deserves credit but when he is wrong he deserves honest criticism. Blame the pollsters because he is an aggregator is lame.

      • There is a distinction between the blog and the statistical models. One does not inform the other. I know; I do daily battle with lefties on that site. But the aggregations are based on the polls and are only as accurate as the polls that inform them. There was a huge number of undecideds right up to the end and on Election Day, millions of Republicans who did not like Trump but viewed the Democrats as a worse option, put him in power. So maybe the polls were not wrong and simply reflected these undecideds.

    • Hogwash. All people remember about Sharlatan Silver is his 70% to 80% “Hillary Wins” forecast.

      Sharlatan Silver never predicted the Electoral Vote; he pretended the popular vote was a proxy for the Electoral Vote.

      A Sharlatan is Nate Silver, king of the “NOAA-type” fake statistics. I heard he got $1 million from a private investor for this advice.

      • He pretended the popular vote was a proxy? Wha-? He had state aggregations and national aggregations. I get there’s no arguing with one who does not understand reason, but for the record, Nate Silver’s forecast gave Trump a 30% chance of winning while everyone else gave him less than five. Trump won. What’s the problem? Other than of course, that you don’t understand statistics. You’re sounding like a closed-minded leftist.

    • Election campaigns turn on personalities and events, not statistics. Had the Bureau chosen to seek an indictment for Mrs. Clinton – as it would have for any normal person – the race would have been over. Then, had New York’s answer to Oscar Mayer not reared his tousled head again, prompting a new investigation, who knows whether Trump would have won. There are also historical models that predicted a GOP win based on historical trends. In the end, things hinge on events and personalities, things none of us can predict. The world is by its very nature chaotic and with Mrs. Clinton and Mr. Trump as standard bearers, this election was especially so.

    • But this is junk science, not good statistics. Really what you are saying is “Trump may win, or he may not.” So what? My dog can give a better argument than that.

      • Your dog should go into politics. Do you have even a faint idea how probabilities work?

  8. The problem with Big Data is its inability to predict/assess/quantify variables. There’s a line in the movie ‘Contact’ with Jodie Foster, based on the book by Carl Sagan, which describes Big Data perfectly. In it, the Director of the space mission gives a ‘suicide pill’ to Jodie Foster before she departs on her presumed intergalactic journey. She asks why she would need to take this pill, whereupon he replies (paraphrasing), “We can think of a million reasons why you would want to take the pill, but it’s mainly for the reasons we can’t think of”.

  9. OK, as someone who reads 538 quite often, I think you are misremembering what was said during the election. The day before the election, 538 gave the odds as 70% Clinton, 30% Trump, and was pretty clear that a Trump election was a lot more possible than others thought was likely. The problem is less that you can’t trust statistics, it is that you have to actually understand what those figures mean.

    When you have a methodology that predicts something and something else happens, you should examine your methodology to see how it failed. But in this case, all in all, 538 is doing better at explaining the polls than the naive viewer who thought that the polls in different states were independent or didn’t understand uncertainty intervals.

    • You didn’t read the fine print Silver “if Clinton wins the popular vote by 2 points she would have an 80% chance of winning the electoral college.” Fail

      • ‘You didn’t read the fine print. Silver “if Clinton wins the popular vote by 2 points she would have an 80% chance of winning the electoral college.” Fail.’

        Why is that a fail? He said 80%, not 100%.

      • If you think “Trump has an 80% chance of losing” is not a terrible fail “prediction” then there is of course nothing else to be said.

      • NJ Sheppard – Thank god you are an old guy and not the future of our country – your Math skills suck. Please go online and check out what PROBABILITY is in Math.

      • LOL, straight to the ad hominem of course. Anyway, hopefully I live to be as old as Sanders. NB: if a car dealer tells you the “probability of it being a lemon is 80%,” go ahead and buy it.

      • Or a sports insider says that NE has an 80% chance of winning the Super Bowl, and I bet 10k the same day.

  10. Given the variability of the electorate (ideology, age, location, personal benefit, economic status, etc.), how can you expect a sampling of 2,000 people to be a valid representative sample of a population of 320 million? I received an engineering degree and appreciate the value of statistics in predicting the probability of outcomes for an event, when there is sufficient data to describe the process and range of outcomes. However, the bell curve is way too broad to be of precise use for predicting elections.

  11. The problem is not in data-driven journalism, but more in the public’s lack of understanding of probability (and hence, of what model results mean). A 1% chance does not mean that something will not happen; it just means that it frequently will not happen. Holding up a low-probability event and saying “See! Models are useless!” is just as stupid as holding up a high-probability event and saying “See! Always listen to the model!” Just because you have an extremely low chance of winning the lottery doesn’t mean that you won’t. Similarly, just because vaccines are effective in 99% of the population doesn’t mean that you won’t have an adverse reaction.
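    The lottery point can be made concrete. Assuming Powerball’s 1-in-292,201,338 jackpot odds per ticket and a hypothetical round 300 million tickets sold, an “almost impossible” event for any one ticket becomes more likely than not in aggregate:

```python
# Each ticket is a near-zero-probability event, but scale changes everything.
p_ticket = 1 / 292_201_338          # Powerball jackpot odds per ticket
tickets_sold = 300_000_000          # hypothetical round number

# P(at least one winner) = 1 - P(no ticket wins)
p_at_least_one = 1 - (1 - p_ticket) ** tickets_sold
print(f"P(at least one winner) = {p_at_least_one:.0%}")
```

    The same arithmetic underlies election forecasts: a 1% outcome is rare per trial, not impossible per lifetime.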

    • If the public is not capable of understanding probability and assigning the correct interpretation to the numbers, then why should it be quoted to them? If 538 quotes a probability of HRC winning of 99% and conversely Trump has a 1% chance of winning, I think the public would correctly interpret that HRC has it in the bag (she would win 99 out of 100 elections). On the other hand, if they quote a probability of 60% for HRC winning, then I think the public is smart enough to know that she has a lead but it’s not in the bag. I have the impression that you’re defending the infallibility of statistical projections on the grounds that anything less than 100% means they can never be wrong – brilliant defense.

      • Nate had 28.6% for Trump. And you’re making my point about not understanding probability. 1% or even less than 1% does not mean something will not happen! How does anyone win the lottery if the odds are 1 in 292 million? By your understanding of probability, nobody would ever win the lottery because the odds are so small.

      • Of course people understand that anything above 0% could happen. As well, they understood that Silver and others were saying the odds that Trump would win were low. And they went out and voted anyway because they knew he could win. People aren’t lottery balls.

      • Based on the “the polls are wrong” and “models are wrong” crowd, I would say that people do not understand that. A model can correctly compute that something has a small percent chance of happening, and yet when that actual event happens somehow it means the model is wrong?

      • As others pointed out here: then what ends do these polls and predictions and models serve for the public? Seems the money’s being wasted. Just report that x may win but of course y might win instead.

      • The point is to better explain cause and effect and to see what factors are important to the end result. You know, the general reasons why we do any science.

      • The public benefits nothing from Silver’s work in elections. They just go out and vote. Silver and others need to sell their ‘product’ somewhere but it’s not to the voting public. He’s not doing scientia gratia scientiae.

      • I guess it depends on whether you feel that there is value in showing a better way/method than what existed before. Political punditry was far less sophisticated in 2008 compared to now and Silver is in part responsible for that. If you believe data science is useful across multiple fields, then his work has value.

      • ‘Data science’ as you call it can be oh so very useful, but it can also have little to no utility in specific areas. What was the purpose of this data science proffered during the 2016 election? How did it benefit the electorate is my particular question. I think we already know how NS can benefit (a different kind of ‘value’ than the ‘value’ you find in data science, I think; but I could be wrong).

      • “why should it be quoted”
        Follow the money -and the money that’s to be made.

    • Princeton Election Consortium gave Trump 1% chance of winning electoral college. So Trump (or equivalent) will win once every 400 years. I get it. Give me a break. That 99% was an embarrassment and everyone knows it.

      • I’ll go back to the lottery example. I can create a model that calculates the chance of you winning the Powerball. That model says your chances are 1/292,201,338. You go out and play the Powerball and win. Was the model wrong? The answer is of course not, the model is 100% accurate.

        The point is that just because a low percentage outcome occurs, it does not mean that a model is necessarily incorrect.

      • A problem with this analogy is that lots of people are playing the Powerball, while there is only one presidential election (in a four-year cycle), or one Super Bowl (in a year). Percentage chances give the number of times a particular outcome can be expected in an event with multiple iterations. This doesn’t make them “wrong” when applied to single-event scenarios, but it makes their interpretation highly problematic. Either Trump or Hillary has to win; you can’t have a percentage outcome. So if you’re told that there is a 99% chance, or an 80% chance (Silver), that Hillary will win, that is understood to mean that Hillary is going to win.

        An even bigger problem is that, as was indicated, there is in fact lots of guesswork, many judgment calls, etc. — and necessarily extremely incomplete modeling of reality — that goes into the creation of those numbers. So there is a false precision associated with them. It’s nothing at all like saying what the odds are of winning the lottery, an extremely controlled situation where there are few variables and all are known. There’s a saying “numbers don’t lie,” but numbers lie all the time. They aren’t necessarily better than a wise, experienced, informed observer’s seat-of-the-pants judgment call. Yet they are often treated as if they are “objective,” not the products of human judgment.

      • That’s not a proper analysis, it doesn’t matter how many other people play in this example… there is 1 person playing the Powerball – you. The model says there’s a 1 in 292 million chance. You win. Does that mean the model is wrong? The numbers did not lie, those were indeed the odds, and yet the outcome was one which has a very low probability of happening.

        My argument is that people are improperly equating a higher probability to certainty, and that’s not how probability works. The problem with doing so is that then people will use one or two examples of how models didn’t “work” and use that to dismiss data science.

      • You should probably take a course in statistics. If you flip a coin and it’s heads 50 times in a row, what are the odds on the 51st toss?

      • I build a model that predicts tails at a 50% chance. This article and many comments claim that the model is wrong because it didn’t predict the actual outcome (50 times it didn’t predict it in your example). Who needs to better understand statistics?

      • It depends. If you’re a Bayesian, you would have adjusted your prior over the previous 50 flips and concluded that it’s likely to be a biased coin so that there is a much greater than 50% chance of heads. OTOH, if you know with certainty that it’s a fair coin (leaving aside how you would know this), then the probability of heads is 50%.

        Based on your “course in statistics,” what do *you* think the odds are?

      • The odds are the same as the first 50 flips; 50/50.

      • His explanation is correct. You may need to take a course in statistics.

        Nate leans hard left. However, his model is fine if you understand what you are reading. I believe the models had Trump sitting around 30%ish to win. So every time you see polls (which are his input) like what we saw, the trailing candidate will win slightly less than 1 out of every 3 times.

        This was one of those times.

        The polls themselves seemed to have flaws in the distribution of where voters would turn out. The +2ish% Clinton lead was close to accurate. The distribution of votes between strong-D states and close-win states for Trump was wrong.

      • The Princeton Election Consortium ignored the Bradley effect for Hillary Clinton.

      • Of course. Many models are crap. But Silver’s, in general, are pretty good. And he tends to be up front on how they are derived and what their uncertainties are.
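The coin-flip exchange above hinges on what you assume going in. Here is a minimal sketch of the Bayesian reply; the Beta(1, 1) prior and the function name are my own illustrative choices, not anything a commenter actually wrote:

```python
# Sketch of the coin-flip disagreement above, assuming a Beta-Binomial
# model (an illustrative choice, not any commenter's actual math).

def posterior_prob_heads(heads: int, flips: int, a: float = 1.0, b: float = 1.0) -> float:
    """Posterior mean P(next flip is heads) under a Beta(a, b) prior."""
    return (a + heads) / (a + b + flips)

# A frequentist who *knows* the coin is fair says the 51st flip is 50/50.
fair_answer = 0.5

# A Bayesian who starts from a uniform prior and then observes
# 50 heads in 50 flips concludes the coin is almost surely biased:
bayes_answer = posterior_prob_heads(heads=50, flips=50)  # (1+50)/(2+50) ≈ 0.981
```

Both answers are internally consistent; they just start from different assumptions about whether the coin's fairness is known in advance.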

  12. This is correct. It isn’t just that Silver is wrong, and spectacularly wrong, empirically; it’s that he is wrong conceptually, from the very get-go, through and through, the very warp and woof, from beginning to end. It’s wishful thinking masquerading as science. It is scientism, the very opposite of science. It’s superstition and ignorance pretending to be rigorous intelligence. It’s a fraud, an utter fraud. And ultimately a failure.

  13. The main issue, and the author alludes to it, is how crude statistics are in these types of situations. There are so many variables they disregard — variables that have the power to influence outcomes at least as much as the historical arithmetic results that are the sole basis for Silver’s predictions.

    Take the Super Bowl. Or any sport. Why do so many of us love sports so much? In part because there are so many variables, and we have no idea which ones will appear or dominate to influence the outcomes. Did Silver’s 99% prediction of the Falcons’ win take into account that, despite the 28-3 score, the Falcons’ defense had been on the field twice as long as the Patriots’? That Atlanta’s coaching staff was less experienced and more likely to make bad decisions? That injuries no one even knows about were likely occurring on both sides? That New England, Brady, and Belichick play in this game every couple of years, while this Atlanta team never has and may have carried the psychological weight of Atlanta’s miserable sports championship history? That Edelman would make a ridiculous catch? Applying primitive stats to this kind of event, even more than to the election, is just silly.

    And for fun, let’s not rule out the mystical trend of the last year: Cleveland, for the first time in NBA history, comes back from a 3-1 deficit to win the finals against a team with the best season record in history. The Cubs come back from 3-1 to beat the Indians in the World Series. Trump wins the primary, then the general election. Based on that history, maybe the obvious bet in the third quarter was on the Patriots. Dozens or hundreds of variables affect these types of outcomes, and these boring statisticians use 2 or 3, then when they get nervous subjectively choose one or two additional variables (out of the dozens or hundreds) to adjust their predictions.

    • It was not the inexperience of the Falcons coaching staff that led to their bad decisions. Even a Pee-Wee football coach knows to run the ball with a big lead. They were just idiots and gave the game to the Patriots, and no one could predict such monumental stupidity.

      • Yep. You need a field goal to pad your lead. Instead of running to keep your field position you try to pass. The QB is sacked for a loss that puts you out of field goal range.

        Monumental Stupidity indeed.

      • All they had to do was call 2 more running plays in the second half and Pats would have run out of time…

    • 99% sounds correct for a team up 28-3 at the half in the Super Bowl. People should not expect models to tell them what will actually happen – they can only give you some probability of what will happen, and with all probability there is uncertainty.

  14. On my way out at halftime, I said to a Falcon-loving friend “look at it this way: if one team could score 18 points more than the other team in the first 30 minutes, how crazy would it be for the other team to do the same thing in the last 30 minutes? In which case you’d have a tie game, right?” At 30,000 feet, you could even call that 50-50, couldn’t you? The fat lady doesn’t give a damn about odds, statistics, goat curses and all the idiotic things the coiffed talking heads confidently pronounce like “no team has ever come back from behind by X points (or runs) with just Y left to play.” Last time you heard that, did you say to yourself “OK then, why exactly are we still watching?”

  15. This is a dumb article. Silver did not give a prediction; he was laying out the probabilities. The fact that the less probable outcome happened doesn’t mean he was wrong. If you try to make a basketball shot from the other end of the court, the odds are probably 95 percent that you won’t. If you make it, it doesn’t make the odds wrong; it just means that you beat the odds. The odds were against Trump winning. The odds were very small that the Patriots would win the Super Bowl. The fact that both did win did not mean that the odds were wrong, just that they beat them. And the Patriots didn’t win because of some unique steel determination. They won because the Falcons coaches made some unbelievably idiotic decisions.

  16. Exactly right.
    The problem of course is that we confuse the notion of probability with certainty, prediction with guarantee.

    And all that relates to risk, which we have come to hate so much we refuse to recognize it.

    You mean I can spend all this money on college and NOT get a high-paying job that allows me to step into my high-quality life with my high-quality wife, and 2 very high-quality kids? Well, what’s up with that??

    I want my investments to grow by double-digits, and I’d like that guaranteed, please, so I can retire at 45 & play golf at Pebble Beach. Now please make that happen for a reasonable fee Mr. Madoff!

    Dontcha just love living right on the beach?! What a view! What a wonderful life! Wait a minute. You mean this is a storm surge flood zone? You mean this house will probably be drowned every 10-20 years or so? Well, that’s not right. What doesn’t the Ocean understand? I want my gorgeous house with gorgeous views and I don’t want any damn storm surge and I don’t want to buy flood insurance. Now let’s get my government to work!

    So since we hate risk, and look on high probabilities as the same as money-back guarantees, it’s no wonder we choose to believe whatever tells us we’re right to believe it. And so we sue the college when we don’t get that job; we switch brokers when they don’t get that return; and we buy oceanfront property because it’s not flooding now (and now is what’s important). Besides, the odds are really good that tomorrow will be pretty much just like today… the Falcons will triumph… and Hillary is a lock!

  17. Poor little fat Nate…. probably cut from every tryout in high school, he sought redemption in the dumpster of liberal news…. only to lose again

  18. The fact that Nate Silver always “hedges,” as his detractors say, is why I like him. Everything is expressed in probabilities, with appropriate statistical analyses supporting it. I find them much more useful than either the use of percentages without a proper analysis, or the more common approach of forcing things into “it will be outcome A.” Nate Silver gets it right; long live Nate Silver.

    • In the first place, if you had simply focused on the raw state polling averages (as reported in Real Clear Politics) you would have done better than by listening to Nate Silver — the raw averages had a toss-up election, and did so not just the day before the election but most of the time for months before.

      In the second place, there was no mention in this article of how spectacularly wrong Silver was in 2010, when, at least until a couple days before the election, he had the chances of Republicans winning the House at virtually zero. (They didn’t just win it; they won it in an historically large blowout.) It doesn’t matter if someone says, “X has a 99% chance of winning” something or “X is almost certainly going to win” something: they’re making a prediction about a single event either way.

      Actually, Congressional elections, and Presidential elections, are amalgamations of multiple elections, so Silver wasn’t just off on single events, but on many discrete events.

      Silver’s probabilities, at least re big things, vary far too much from what actually bears out. The fact is that no matter what he calls himself he is one more liberal-Democrat pundit, and he tends to suffer from the same personal bias as the others, which gives his probabilities a systemic bias toward Democratic wins.

      • RCP had Clinton winning by 3.3 points on Election Day. About where Nate had it.

  19. Someone share this with the climate-change Kool-Aid drinkers over at Reddit. They’re truly insane over there.

  20. Trump, of course, had his own data team. Their research, right up to election day, told them they had the election. It helped that they personally contacted and polled voters in critical areas – areas Hillary was so confident in that she didn’t campaign there. I recall a post-election CNN interview with Trump data engineer Witold Chrabaszcz, who said that even on election morning he was practically giddy knowing they had it in the bag. Big data comes from individual data points; one of the lessons of this election is that those data points must be recognized and respected.

    • So I guess you missed the insider comments, or even Mr. Trump’s look when told he won. They absolutely knew him winning was a long shot. Like most Trumpsters, Mr. C was “rewriting” history after the fact.

      • Mr. C. as you refer to him was quite honest in his comments, IMO. You have your opinion. He was also very candid in saying the rest of the Trump team didn’t share his confidence. Very accurate name you chose for yourself.

  21. Silver crunches and analyzes numbers to find correlations and calculate probabilities. When applied to election polling, that output is only as good as the quality of the inputs, and the polling data itself was skewed. It doesn’t even have to be deliberately so, just that the pollster’s models were not picking up on a political realignment. The data guru types have to note that catching enough data points representing enough demographic factors just can’t be done by polling a few hundred or even a couple thousand people at random.

    Likewise, when modeling a sporting contest, the score is a “lagging indicator” that only models how well the teams have performed up to a point in time. Once a losing team changes strategy to counter the opponent’s strength, it can change. It’s all a matter of whether the trend line holds, and is nearly impossible to model.

  22. This argument is quite silly. Regarding football, there have been around 500 playoff games, and in only 4 of them has a team come back from more than three touchdowns behind. A team trailed by more than three touchdowns in only some of those games, but a 1% chance of winning just means that if you played a similar game 100 times, you would expect a team to come from that far behind in only 1 of them. You expect it will happen some times, but not very often. Isn’t that exactly what most of us thought when watching the game? The Patriots might come back, but it’s not very likely. I don’t know the people who suggested the 1% statistic to be an “unassailable predictor”. However, if anyone did say such a thing, it wasn’t a person like Nate Silver who understands statistics. A 1% chance of victory isn’t an unassailable predictor of an outcome, a 0% chance is. A 1% chance means you expect a comeback to happen in similar circumstances, but not very often.

    With respect to analyzing elections, the argument combines common misunderstandings of statements about statistics with an unwillingness to represent Silver’s record accurately. 48 hours before the election, Silver suggested a 35% chance of Trump winning, and on the day of the election a 29% chance. In articles within three days of the election, he said:

    “But the public polls — specifically including the highest-quality public polls — show a tight race in which turnout and late-deciding voters will determine the difference …”
    “In some ways, our fundamental hypothesis about this campaign is that uncertainty is high, with both a narrow Trump win and a more robust Clinton win — in the mid-to-high single digits — remaining entirely plausible outcomes”
    “We think this is a good year for a forecast that calls for more caution and prudence.”
    “In three of the last five presidential elections, in other words, there was a polling error the size of which would approximately wipe out Clinton’s popular vote lead — or alternatively, if the error were in her favor, turn a solid victory into a near-landslide margin of 6 to 8 percentage points. ”
    “the number of undecided and third-party voters is much higher than in recent elections, which contributes to uncertainty.”
    “Clinton’s coalition — which relies increasingly on college-educated whites and Hispanics — is somewhat inefficiently configured for the Electoral College, because these voters are less likely to live in swing states. If the popular vote turns out to be a few percentage points closer than polls project it, Clinton will be an Electoral College underdog.”
    “The goal of a probabilistic model is not to provide deterministic predictions (“Clinton will win Wisconsin”) but instead to provide an assessment of probabilities and risks.”

    The short version: Clinton is more likely to win; Trump has a good chance to win, and it’s not that rare an event if he does; a polling error very similar to polling errors in the past, combined with Clinton’s inefficiently distributed electoral coalition, is a probable mechanism for a Trump victory. That’s what happened. Silver’s analysis was excellent, and someone paying attention to it wouldn’t make the mistakes alleged in this article. The writer seems to assume that anyone who suggests that the probability of an outcome is less than 50% has failed if the event occurs, which simply misunderstands how statistics works. It’s appropriate to criticize people who read, but misunderstood, Silver and turned a 1-in-3 to 1-in-4 chance of a Trump victory into a certain Trump loss. It’s foolish to include in that criticism an analyst who did excellent work on the election and avoided every legitimate error identified in this article.

  23. The reality of the situation is that statistical predictions are based on data which may have only limited predictive power (e.g., a poll is different from an election – it has some pertinence, but it is limited). Further, predicting from polls carries an inevitable assumption that the conditions producing the polls will not change. The Patriots-Falcons game was a case in point. Based on the first half, things looked grim for the Patriots. But the second half was different from the first because two big things changed – both related to better Patriots coaching. Brady figured out how to counter the fast Falcon linebackers by several means and got his passing game back on track, and Belichick’s defense finally figured out how to read the Falcons’ offense and break up their big plays. This is not a surprise – Brady has come back from 21-point deficits 7 times in his career, and Belichick makes a habit of fielding a defense that gets more and more effective as the game progresses.

    Looking at predictions derived from one condition and assuming they have any power to predict what will happen in different conditions is a fool’s errand.

  24. The problem with any statistical inference is the quality of the dataset used. Most political polling is at best dodgy because pollsters have no idea who is going to vote. At best they gaze at their navels and make a guess as to what the electorate will be on election day. Plus, the more personal the poll, the less likely people are to answer, or to answer truthfully.

  25. Little-forgotten facts about the popular vote: Gary Johnson took 3.9 million votes, and it is easy to figure that most would have gone to Trump. Stein got 1.2 million votes – most would have gone to Clinton.

    Clinton had 91% favorable press, which was on a mission to destroy Trump. In advertising dollars, the amount that exposure was worth to Clinton is astronomical.

    Clinton spent $500 million more than Trump. Simple fact.

    Total congressional votes were 3 million more to Republicans.

    Total Senate votes – less California where there was no Senate candidate – was 3 million more Republicans.

    Lastly – if Trump showed up to campaign in California and New York for popular votes, promising to double food stamps and to quickly build millions of new public housing projects one can easily imagine he would have won the popular vote by millions.

  26. Silver has proven adept only at predicting Obama Victories.

  27. Statistics can never factor in human determination, drive, and ambition.

  28. It’s just new bullshit. That little book from a decade or so ago needs to be updated. As to citing Silver being a mic drop, where and with whom were these conversations?

  29. Many people wrongly believe that probabilities are absolute figures, like weight or length – something that can simply be measured. Perhaps that’s because in the very special case of gambling casinos, probabilities can be precisely calculated. But outside of that artificial situation, any probability is merely the output of a model.

  30. Except Silver was right in both instances! Trump won in part because Hillary supporters in the media made her supporters complacent with data analyzed in an extremely biased manner. Silver consistently gave Trump a decent shot at winning in large part because he had ignored the data in the primary, been proven wrong, and then course-corrected.

    That said, Silver has negatively influenced coverage because his meteoric rise has led many to incorporate his techniques while distorting the data. The same thing has influenced discussions on baseball greats. Sabermetrics is great for figuring out the best way to spend a $50 million payroll budget but HRs and RBIs work fine for figuring which ball players are HOFers.

    So Hillary’s campaign became too focused on the data and didn’t do enough of the old school politics because of Silver and Obama’s 2008 data operation. Obama’s real strength was his voter registration drive that he learned in 1991 registering voters to help Carol Moseley Braun win.

  31. Probability is best applied to repeatable, time-independent events – flipping a coin or rolling a die. A presidential election is closer to a horse race. You place a bet and either lose your bet or win an amount determined by the odds offered.

    The Clinton-Trump race was time dependent. How many times did you hear, “…who would (most likely) win if the election was held today?”

    Let’s say Clinton and Trump played 100 tennis matches; Clinton won 20, Trump won 80. You might ask, who will win the 101st match, what is the probability of Clinton winning the 101st match? You could say with some confidence, 20%.

    Let’s say after 1,000 matches Clinton won 600 and Trump won 400. You might ask, who will win the 1001st match, what is the probability of Clinton winning the 1001st match? You could say with some confidence, 60%.

    The Clinton-Trump tennis example is repeatable and time-dependent (the probabilities change with time. The probability of getting heads on a fair coin will be 50% in 2017, 50% in 2027… The event of flipping a fair coin is time independent).

    Applying a probability to a one-time, time-dependent event is meaningless.

    • I think Silver runs simulations to determine the probability. I think Silver was right that Trump had a 30% chance of winning, and I liken it to Trump being down by 3 with 2:10 to play and having the ball.

      Which is why the Falcons should have let the Pats score once they got into the red zone, because the probability of the Pats getting the 2-point conversion and then Atlanta NOT being able to get a FG with 2 minutes on the clock was lower than the alternative.

    • “Applying a probability to a one-time, time-dependent event is meaningless.”

      Sort of. I think a more accurate statement is: Using a one-time event to judge the validity of a probabilistic model is erroneous.

      In order to evaluate a model or a modeling approach, you need many, many events. It’s OK if each of the events is a one-time thing because in aggregate, enough of them will allow you to see whether the modeling approach gets the right answer a percentage of the time that is consistent with its probability estimates.
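The "many, many events" point above can be made concrete: a well-calibrated model's stated probabilities should match observed frequencies in aggregate, even though each individual event happens only once. A toy sketch with synthetic data (nothing here comes from any real forecast):

```python
# Toy calibration check: simulate many one-off events, each with its own
# model-assigned probability, then compare stated odds to actual outcomes.
# Synthetic data; an illustration of the idea, not anyone's actual model.
import random

random.seed(42)

# Each "event" is one-of-a-kind, like one election or one Super Bowl.
probs = [random.random() for _ in range(100_000)]

# A perfectly calibrated world: each event occurs with exactly its stated probability.
outcomes = [random.random() < p for p in probs]

# Bucket events by stated probability and compare to observed frequency.
for lo in (0.0, 0.2, 0.4, 0.6, 0.8):
    bucket = [(p, o) for p, o in zip(probs, outcomes) if lo <= p < lo + 0.2]
    stated = sum(p for p, _ in bucket) / len(bucket)
    observed = sum(o for _, o in bucket) / len(bucket)
    print(f"stated ~{stated:.2f}  observed {observed:.2f}")
```

Within each bucket the observed frequency tracks the stated probability, which is the only sense in which a probabilistic forecaster can be "right" or "wrong" over the long run.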

  32. I always thought Silver’s consistent hedging was a pretty good sign that Trump was going to do much better than people expected. What’s interesting to me is that Hillary turned out about as many voters as Obama did in 2012. It’s just that Trump got about 2 million or so more than Romney, and he did this in all the right places.

  33. Nate Silver ignored the Tom Bradley effect, and that was where all the errors arose in his polls. The Bradley effect was about 4.5 to 5 percent in the case of Hillary Clinton versus Donald Trump – nearly 30 to 35 percent of the actual 14-percentage-point Bradley effect between Tom Bradley and George Deukmejian in the polls. I estimated that the Bradley effect would be lower in the case of Clinton because she was not herself a black or minority candidate like Bradley. My estimate proved to be more accurate than Nate Silver’s electoral poll predictions.

  34. tl;dr version: people suck at understanding probabilities, particularly for tail events.

  35. For the most part, neither Silver’s critics nor his fans understand probability. Silver’s final model gave Hillary a roughly 2/3 chance of winning. But he was not “wrong” about the election, any more than someone who predicts a rolled die has only a 33% chance of coming up 1 or 2 is wrong when the die comes up “2”. The only way to assess the accuracy of Silver’s predictions is over the long term — that is, do the actual outcomes on average roughly match his probability assessments? I haven’t checked to see, but it is not valid to cite the election and an unprecedented Super Bowl comeback as refuting Silver’s approach.

    The problem here is with idiots who equate likelihood with certainty.

  36. Are Nate Silver and 538 still around? Good grief. NEVER USE POPULAR VOTES TO PREDICT ELECTORAL COLLEGE VOTES.

    What a charlatan! Better than any magician you will ever see! Star of statistics! Sleight #1 of false science, right up there with NOAA climate science stats inventors.

    Dumb! False science!!! God, some elitists are really dumb.

    Ok, let’s get it right for once, can we, after the election let’s take apart the major ways this idiot Nate Silver abused science.

    Rule # 1: Bait-and-switch means give a bit of truth, then switch to total garbage. Tease a rabbit with veggies, so she can get the noose around herself and strangle.

    So Nate fibber gets some polls together from many states. But he never gives the basis for the polls, never inquires for bias. Not our Nate. Looks quasi OK so far, right?

    Ah-ha! Now comes the switch. Silverman pulls two charlatan moves:
    1) He “adjusts” the raw polls – hey, did you hear NOAA “rejected buoy readings, and then adjusted upward the ship reading values”? OOOPS. GIGO ALERT!!!!
    2) Here is the real switch – he builds a super model to figure out the results for the USA as a whole. Does he reveal his model? Of course not. Did NOAA reveal their 2015 climate model? Of course not; in fact, they lost the data.

    3) Then Nate-not-a-statistician goes further – he gives a probability for a Hillary win of the popular vote!!!!

    Folks, this “model” is extrapolating from a bunch of “margin of error numbers”. Sorry, can’t do it. Not allowed. Statistical bullcrap. Dumb!

    But the real charlatan lies beyond. Nate Silverman graduates to double-charlatan.

    Silverman insists that his jittery model result balancing 1000 plates and setting a US projection above it, is valid for the US election. Nope, not at all. The Electoral College is what determines a winner. Nate should have projected Electoral College win chances in each state (good luck with that), then linked the binomial probabilities for each state together. A real big calculation – easily done by a computer – killed before it begins by the total unreliability of the input AND NO WAY OF GETTING VALID INPUT.

    Nate Silver (Silverman?) went the wrong way from the beginning folks. He predicted popular votes instead of Electoral votes. I am sure EVERY Democrat knows the difference between those results now, right?
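Invective aside, the state-by-state calculation described above is easy to sketch, and it is roughly what simulation-based forecasts actually do. The mini-map below uses made-up state probabilities totaling 100 electoral votes, and treating states as independent is a simplification that real models deliberately avoid (they add correlated polling error):

```python
# Sketch: turn per-state win probabilities into an overall Electoral
# College win probability by Monte Carlo. All numbers are invented
# placeholders; real forecasts also model correlated polling error.
import random

random.seed(1)

# (electoral votes, P(candidate A wins that state)) -- hypothetical mini-map
states = [(20, 0.95), (18, 0.10), (15, 0.50), (12, 0.45), (10, 0.60),
          (9, 0.55), (7, 0.40), (5, 0.65), (4, 0.48)]

TOTAL = sum(ev for ev, _ in states)   # 100 here; 538 in the real map
NEEDED = TOTAL // 2 + 1               # simple majority wins

def simulate_once() -> bool:
    """One simulated election: draw each state, tally A's electoral votes."""
    ev = sum(votes for votes, p in states if random.random() < p)
    return ev >= NEEDED

trials = 100_000
p_win = sum(simulate_once() for _ in range(trials)) / trials
print(f"Candidate A wins the Electoral College in {p_win:.1%} of simulations")
```

The output is an Electoral College probability, not a popular-vote one, which is the distinction the commenter is insisting on; the hard part in practice is not this arithmetic but getting trustworthy state-level inputs.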

  37. God I love alternate facts. God bless you all. Hail Trump – Hail Sessions – Hail Ryan.

  38. Nate (Sharlatan) Silver uses bad science, with a “bait-and-switch” attraction – use truth first, then lie your way to money. This ain’t science. But like NOAA and the climate lies, Nate hurt a lot of people.

    Folks, science is not bad. Nate absolutely misused science to give you the prediction you wanted. Exactly like NOAA is now shown to have been doing in 2015.

    1) You can’t use a popular vote projection to predict an Electoral College result. Can’t. Ever. Do. It. But Sharlatan Nate did it.
    2) You can’t “adjust” individual results in your model without revealing WHY you made the adjustments, and what the “adjustments” were. EVER. Might as well wet your finger (I mean put it in your mouth, dummy) and hold it up in the air.

  39. They use it as a tool to discourage participation on the right. The assumptions plugged into the models always seem to favor leftists. This isn’t a bug, it’s a feature.

  40. I think Silver is the wrong target here. He was savaged by others among the daterati for saying Trump had a very real chance to win. Nate is always the most circumspect about his prognostications, unlike say Ryan Grim or Sam Wang at HuffPo. Plus, his forecasts in 2008 and 2012 were nearly perfect. I learned my lesson about disparaging Silver after 2012 and the unskewed polls fiasco. As far as I’m concerned, 2016 reaffirmed that when in doubt, trust Nate Silver.

  41. The problem is Genesis 3, i.e., “you will be like God, knowing good and evil.” The solution is humility in the face of our finitude and acceptance that we are not in fact God. All human misery and data-driven hubris comes down to this.

  42. Statistics lie and liars statistic.

    Liar Nate Silverman pretended the popular vote was a proxy for the Electoral Vote. EVERY. LIBERAL. KNOWS. HE. LIED. WITH. STATISTICS.

  43. I applaud the use of actual data for predictive purposes BUT his algorithms seem to need a lot of work……

  44. I don’t really see the point of this article. The use of numbers, whether in the areas of simulation, statistics, or data mining, is only as reliable as the person doing the math. That’s always been true. If a person is sloppy or biased, then the results will be unreliable. Speaking of which, the author is quite sloppy in stating that the score of Super Bowl LI was 28-3 at halftime. It was not. It was 21-3. This is an ironic and embarrassing mistake in an article like this.

  45. In the Super Bowl, if you are down 21 to 3 at the half, the odds are very high you will lose. It’s not 100%. Nate provides probabilities, but everyone assumes they are absolutely certain predictions. He gave Trump a 30% chance, and based on the polls that made sense. In the exact same polling situation, the person in Trump’s shoes wins about 1/3 of the time. It just so happened that this was one of those times. 30% doesn’t mean never.
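The claim that "30% doesn't mean never" is easy to check numerically. A minimal sketch (the 30% figure comes from the comment; the simulation itself is just repeated coin-flipping):

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

def count_underdog_wins(p=0.30, trials=100_000):
    """Run many hypothetical races where the underdog wins with probability p."""
    return sum(random.random() < p for _ in range(trials))

wins = count_underdog_wins()
print(f"Underdog won {wins:,} of 100,000 simulated races ({wins / 100_000:.1%})")
```

Roughly 3 in 10 of the simulated races go to the underdog – a 30% event is routine, not miraculous, which is the commenter's point.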

  46. Oh yeah, “Data Driven”. That’s what he does. LOL.

  47. For 6 months he had Trump hovering around 10–30%. I think even lower at times. In the last 2 weeks he had Trump near 30%. So he was wrong ALL year – Trump was never at friggin’ 10%, 20% – but somehow he still has cred. Oh, “everyone was wrong though” – except everyone wasn’t. Everyone the libs respect was wrong – that’s true.

  48. Excellent post. Life is by nature unpredictable, and so statistical analysis and computer models that “predict” the future are simply an attempt to make some sense of chaos for us. But like venturing to Delphi, in the end there is no way of predicting the rare or even unique events that throw a curve ball into the statistical analysis. For example, the global warming models are all based on a number of assumptions that may or may not be correct for a system with a staggering number of variables, many of which scientists don’t yet understand, in an attempt to predict a future in a system that is in the end chaotic and where all the assumptions may be overturned by that big orb in the sky.

  49. Great, great article. I love data and I fully know its limits. Figures lie and liars figure. The main takeaway is numbers are a tool that pries open hidden truth …. but like all tools, an imperfect one. Falling in love with a chain saw to cut all wooden objects leaves the toothpick-maker limbless.

  50. Here’s the joke in my chosen profession. Ask an engineer what 2 plus 2 equals and they will answer 4.00000. Ask a geologist and he will say “around 4”. Ask a geophysicist and he will say, “What do you want it to be?” The same holds for statisticians: “What do you want the answer to be?” I am all for analytics, but there are real limitations, especially when you are talking about human behavior. Taking historical data regarding human decisions and projecting it into the future is fraught with danger and wildly prone to error.

  51. “In both cases, statistical models were held up as unassailable predictors. And in both instances, they were wrong.”

    You are completely mischaracterizing what Silver is doing. He is not making predictions. He is assigning probabilities. Huge difference.

    • You are correct, but remember, Silver’s stock-in-trade is taking his probabilities and drawing conclusions from them, often with a great deal of certainty. Yes, he always adds some minor qualification to his conclusions, but he is pretty affirmative. Silver reminds me of the ads from the guys who correctly predicted the last correction in the stock market. Of course, what they always fail to mention are the previous six corrections they predicted that didn’t happen, but when they finally get one right they hold themselves out as some sort of Nostradamus. Silver’s work is really good, but he is clearly fallible.

  52. This is directed to Insidious Pall

    Every Democrat now knows that Nate Silver built a really bad model. He projected the popular vote, and was close. But by saying he was predicting a winner at 70%, he pretended he was predicting the Electoral College vote, and every Democrat now knows the difference, and knows why Nate’s model was misleading and dead wrong.

    Ok, since you appear to like statistical methods, I will lay out a bit for you. Your input is welcome. I am a Fellow of the Society of Actuaries, as well as a graduate in honors math, physics, and chemistry, and I am also a CFA charterholder. I have had extensive statistical training, would you grant that? I spend my time building models and checking for bias, confounding variables, randomness, and of course replicability. Not to mention pure hogwash, and uncovering statistical errors. It is my business. What do you want voir dires on? Monte Carlo techniques, confidence intervals, data analysis, randomness testing, model critical review?

    How about you? It would help if I knew you were at an advanced level, or whatever.

    So what is wrong with the methods used by Nate Sharlatan?

    If all he did was say he was predicting the popular vote, I have no problem. Plus or minus 3%. If he wanted to state a probability of winning the popular vote, now we are in slightly worse territory, but still OK, sort of. Bluntly, if Hillary was ahead 3% in the popular vote, there is no model on Earth, except for Nate Sharlatan’s, that would give her a 70% chance of having a larger popular vote. She was within the margin of error. I won’t call this winning, because winning the popular vote is not winning. Look at the five past results where this was not true.

    But Nate Silverman stated that Hillary had a 70% chance of winning. Now he is into Electoral College territory, and this statement is hogwash. Any model for the Electoral College is doomed to failure before one begins. The biggest issue is that the states are NOT INDEPENDENT, and this assumption of independence is the basis for any binomial (win or lose) projection for the Electoral College votes for each state. A second assumption of randomness is also violated. A very thin polling base, by entities which themselves exhibit bias, coupled with a sampling technique by Nate Sharlatan which SELECTIVELY CHOOSES THE POLLS HE WANTS, absolutely kills the results.
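The independence objection can be made concrete with a toy simulation: give every state a shared national polling error on top of its own local error, and the aggregate win probability drops even though the state-by-state numbers look identical. All margins, elector counts, and error sizes below are invented for illustration:

```python
import random

random.seed(0)

# Invented state polling margins (points) and electoral votes -- not real data.
states = [(3.0, 55), (1.0, 29), (-1.0, 38), (2.0, 18), (-2.0, 20), (4.0, 110)]

def win_prob(states, shared_sd, state_sd, threshold=135, trials=50_000):
    """Win probability when each state's error = shared national error + local error."""
    wins = 0
    for _ in range(trials):
        national = random.gauss(0, shared_sd)  # the same swing hits every state
        ev = sum(e for margin, e in states
                 if margin + national + random.gauss(0, state_sd) > 0)
        wins += ev >= threshold
    return wins / trials

print("independent errors:", win_prob(states, shared_sd=0.0, state_sd=3.0))
print("correlated errors: ", win_prob(states, shared_sd=3.0, state_sd=3.0))
```

With the shared error term, one bad national miss flips many states at once, so treating the 50 contests as independent binomial trials overstates the favorite's chances.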

    For example, you heard about the mess at NOAA on climate forecasts, where the biased scientists rejected the buoy data, and selected the (warmer) boat data, and even then adjusted the boat data up to get the “warming” results they wanted?

    Ok, forget all that reality. Just focus on the margin of error for each state’s Electoral College vote, which is at least 3%. If we have 50 measurements, and want to link them in our binomial model into a USA Electoral College prediction, we have to attach a confidence interval to that model, right? In science, the normal way is to assume the worst case. Add up the errors for each of the fifty measurements, i.e., the range is 3% x 50 states = er, 150%. This means her probability of winning of 70% has to have attached to it a confidence interval of plus or minus 150%, i.e., her chance was zero to 100% – this is what the model predicts.

    Even if he could cut this down to plus or minus 20%, the 70% figure would still span anywhere from 50% to 90% – hardly a confident call.
    If anyone tells you they can predict the Electoral College vote, your answer ought to be: What is the confidence interval?

    PS One thing scientists do is to share the model, so others can take it apart themselves. This gives rise to constructive rebuilding of the model. Nate? Oh no, his model is proprietary, never to be seen (and critiqued) by others.

    He is making millions being a Sharlatan.

    • Actually, when you think about it, so did David Copperfield. At least Copperfield states he is an illusionist……