I'm sorry, but you've got this completely wrong. It is not the questions that are weighted, but the answers, depending on who actually completes the survey, and it is totally right and proper that they be weighted. This is to ensure that the published results properly reflect the population they seek to measure. Each polling company has a huge panel (in the hundreds of thousands) whom they survey. The company knows all about these people - age, sex, class, location and so on - so it can send its surveys to as representative a sample as possible.
As a basic example of why weighting has to happen: men comprise 48% of the electorate and women 52%. Suppose the raw figures, despite the pollsters' best efforts, contain 50% men and 50% women. The polling company will slightly "downweight" the replies given by the men (so that the replies of 50 men count as if they were 48) and slightly "upweight" the replies given by the women (so that the replies of 50 women count as if they were 52). Of course it's not just gender: age, income, region and many other factors all have to be considered simultaneously.
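The gender example above can be sketched in a few lines. This is just an illustration of the arithmetic, not any particular pollster's method; the 48/52 population shares and the 50/50 sample are the figures from the example.

```python
# Illustrative sketch of the gender-weighting arithmetic described above.
# A poll comes back 50/50 men/women, but the electorate is 48% men / 52% women.

population_share = {"men": 0.48, "women": 0.52}
sample_counts = {"men": 500, "women": 500}   # raw replies in a poll of 1,000
total = sum(sample_counts.values())

# Weight = target share / observed share, so each group's weighted total
# matches its share of the electorate.
weights = {
    g: population_share[g] / (sample_counts[g] / total)
    for g in sample_counts
}

weighted_totals = {g: sample_counts[g] * weights[g] for g in sample_counts}
print(weights)          # men downweighted to 0.96, women upweighted to 1.04
print(weighted_totals)  # 500 men count as 480.0, 500 women as 520.0
```

Real pollsters weight on several variables at once (often by iterative methods such as raking), but each step is the same idea: scale a group's answers until it matches its known share of the population.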
The size of the poll barely matters, so long as it's not below about 500. The margin of error on a survey of 1,000 is only about 3% (at the usual 95% confidence level). The vast majority of pollsters use a sample size of 1,000 to 2,000, so this particular poll of over 5,000 is actually much bigger than usual and would have a minuscule margin of error.
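The margin-of-error figures are easy to check with the standard formula for a proportion, taking the worst case of a 50% result at 95% confidence (this is a textbook sketch, not any pollster's exact calculation, and it ignores design effects from the weighting itself):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # 95% confidence interval half-width for a proportion p on a sample of n:
    # MOE = z * sqrt(p * (1 - p) / n), worst case at p = 0.5.
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1000, 2000, 5000):
    print(f"n = {n}: +/- {100 * margin_of_error(n):.1f}%")
# n = 1000 gives roughly +/- 3.1%; n = 5000 gives roughly +/- 1.4%
```

Note how slowly the error shrinks: going from 1,000 to 5,000 respondents only cuts it from about 3% to about 1.4%, which is why most pollsters don't bother with samples much above 2,000.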