As others have said, the validity of studies varies considerably from field to field. One thing to watch for is the problem summarized by this comic:
If you can see that the designers of the study have looked for multiple correlations in their dataset, then they need to have corrected for the above phenomenon; otherwise their conclusions aren't worth much. There are statistical techniques designed to handle this, one of the most common being the
Bonferroni correction. I will quite often check whether the study's methodology indicates they've done something like this, and if they haven't, I'll probably disregard the results (at least until I see other studies that replicate their conclusions).
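To make the problem concrete, here's a minimal sketch of the arithmetic behind it (the "20 tests" figure is just an illustrative assumption, echoing the comic's twenty jelly-bean colours):

```python
# Why multiple comparisons inflate false positives, and how the
# Bonferroni correction compensates. Pure arithmetic, no data needed.

def familywise_error_rate(alpha: float, m: int) -> float:
    """Probability of at least one false positive across m independent
    tests, each run at significance level alpha."""
    return 1 - (1 - alpha) ** m

alpha, m = 0.05, 20  # e.g. testing 20 jelly-bean colours

# Uncorrected: the chance of at least one spurious "discovery" is large.
print(f"uncorrected FWER: {familywise_error_rate(alpha, m):.2f}")  # ~0.64

# Bonferroni: run each individual test at alpha/m instead, which caps
# the family-wise error rate back at (roughly) alpha.
print(f"Bonferroni FWER:  {familywise_error_rate(alpha / m, m):.3f}")
```

So with twenty uncorrected tests you have roughly a two-in-three chance of a false positive somewhere, which is exactly the comic's punchline.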
For polls like the one you've found, there's an issue of sample size in the subdivisions. The authors of the report have decided to break down their results by political orientation (presumably because they believe there is a correlation between political orientation and opinion on climate change). However, each sub-category has a much smaller sample size, so the information about those sub-categories is more suspect. That in turn weakens any conclusion drawn from the overall dataset: the creators of the report acknowledge that their sample is biased, so they need more data to arrive at a legitimate estimate than they otherwise would. An aggregation of similar polls would probably provide a better overall picture of the state of public opinion.
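You can see how quickly the uncertainty grows when you slice a sample. Here's a rough sketch using the standard normal-approximation margin of error for a proportion; the sample sizes are made-up round numbers, not taken from the poll in question:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion p
    measured from n respondents (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 1000 respondents total, ~150 in one political
# subgroup. p = 0.5 is the worst case (largest margin).
print(f"full sample (n=1000): ±{margin_of_error(0.5, 1000):.1%}")  # ±3.1%
print(f"subgroup    (n=150):  ±{margin_of_error(0.5, 150):.1%}")   # ±8.0%
```

A headline "±3%" for the whole poll quietly becomes "±8%" for the subgroup claims, which is often wider than the differences being reported.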
This is an extension of a general principle: when you're doing statistical analysis, you will almost always end up mapping your data onto a model of how you believe the quantity you're measuring is distributed (you can't calculate things like p-values otherwise). Most of the time people pick a normal distribution as their basis, and there is some mathematical backing for defaulting to that (see the
Central Limit Theorem). But when you map your data onto a model, you're making a lot of assumptions. Often those assumptions aren't quite right, and that introduces errors. You see this in most fields, but it has a larger impact in some than in others (I'm looking at you, Economics!). Checking the fine print of the statistical techniques you're using is difficult, but it's also very important if you want to use them properly. Sadly, most people don't bother... and that's one reason we have a reproducibility crisis in science and a general lack of trust in scientific findings.