

TAKE THE FUCKING POLE

AZDuck Member Posts: 15,381
edited April 2014 in Hardcore Husky Board
ANSWER THE FUCKING QUESTIONS

This is off-season gold:

https://memphishealthsport.qualtrics.com/SE/?SID=SV_01WjKhY76XXWfLn

I answered the first one seriously, but I will be having some fun with this later.


""Fans of the Washington Huskies do not show respect for others" - my response is pretty much based on this bored

Comments

  • HeretoBeatmyChest Member Posts: 4,295
    I'll do the pole if you toss my salad first.
  • ApostleofGrief Member Posts: 3,904
    Opinion poll
    From Wikipedia, the free encyclopedia

    An opinion poll, sometimes simply referred to as a poll, is a survey of public opinion from a particular sample. Opinion polls are usually designed to represent the opinions of a population by conducting a series of questions and then extrapolating generalities in ratio or within confidence intervals.

    History

    The first known example of an opinion poll was a local straw poll conducted by The Harrisburg Pennsylvanian in 1824, showing Andrew Jackson leading John Quincy Adams by 335 votes to 169 in the contest for the United States Presidency. Since Jackson won the popular vote in that state and the whole country, such straw votes gradually became more popular, but they remained local, usually city-wide phenomena. In 1916, the Literary Digest embarked on a national survey (partly as a circulation-raising exercise) and correctly predicted Woodrow Wilson's election as president. Mailing out millions of postcards and simply counting the returns, the Digest correctly predicted the victories of Warren Harding in 1920, Calvin Coolidge in 1924, Herbert Hoover in 1928, and Franklin Roosevelt in 1932.

    Then, in 1936, its 2.3 million "voters" constituted a huge sample; however, they were generally more affluent Americans who tended to have Republican sympathies. The Literary Digest was ignorant of this new bias. The week before election day, it reported that Alf Landon was far more popular than Roosevelt. At the same time, George Gallup conducted a far smaller, but more scientifically based survey, in which he polled a demographically representative sample. Gallup correctly predicted Roosevelt's landslide victory. The Literary Digest soon went out of business, while polling started to take off.

    Elmo Roper was another American pioneer in political forecasting using scientific polls.[1] He predicted the reelection of President Franklin D. Roosevelt three times, in 1936, 1940, and 1944. Louis Harris had been in the field of public opinion since 1947 when he joined the Elmo Roper firm, then later became partner.

    In September 1938 Jean Stoetzel, after having met Gallup, created IFOP, the Institut Français d'Opinion Publique, as the first European survey institute in Paris and started political polls in summer 1939 with the question "Why die for Danzig?", looking for popular support or dissent with this question asked by appeasement politician and future collaborationist Marcel Déat.

    Gallup launched a subsidiary in the United Kingdom that, almost alone, correctly predicted Labour's victory in the 1945 general election, unlike virtually all other commentators, who expected a victory for the Conservative Party, led by Winston Churchill.

    The Allied occupation powers helped to create survey institutes in all of the Western occupation zones of Germany in 1947 and 1948 to better steer denazification.

    By the 1950s, various types of polling had spread to most democracies.
    Sample and polling methods

    Voter polling questionnaire on display at the Smithsonian Institution

    For many years, opinion polls were conducted via telecommunications or in person-to-person contact. Methods and techniques vary, though they are widely accepted in most areas. Verbal and ballot-based polls can be conducted relatively efficiently, in contrast with other types of surveys that rely on systematics and complicated matrices beyond previously orthodox procedures.[citation needed]

    Opinion polling developed into widespread application, although response rates for some surveys have declined. Differences in methodology have also led to differing results:[1] some polling organizations, such as Angus Reid Public Opinion, YouGov and Zogby, use Internet surveys, where a sample is drawn from a large panel of volunteers and the results are weighted to reflect the demographics of the population of interest. In contrast, popular web polls draw on whoever wishes to participate rather than a scientific sample of the population, and are therefore not generally considered professional.

    Recently, statistical learning methods have been proposed in order to exploit Social Media content (such as posts on the micro-blogging platform of Twitter) for modelling and predicting voting intention polls.[2][3]

    Polls can be used in the public relations field as well. In the early 1920s, public relations experts described their work as a two-way street: their job was to present the often-misinterpreted interests of large institutions to the public, and also to gauge the typically ignored interests of the public through polls.
    Benchmark polls

    A benchmark poll is generally the first poll taken in a campaign. It is often taken before a candidate announces their bid for office but sometimes it happens immediately following that announcement after they have had some opportunity to raise funds. This is generally a short and simple survey of likely voters.

    A benchmark poll serves a number of purposes for a campaign, whether it is a political campaign or some other type of campaign. First, it gives the candidate a picture of where they stand with the electorate before any campaigning takes place. If the poll is done prior to announcing for office the candidate may use the poll to decide whether or not they should even run for office. Secondly, it shows them where their weaknesses and strengths are in two main areas. The first is the electorate. A benchmark poll shows them what types of voters they are sure to win, those who they are sure to lose, and everyone in-between those two extremes. This lets the campaign know which voters are persuadable so they can spend their limited resources in the most effective manner. Second, it can give them an idea of what messages, ideas, or slogans are the strongest with the electorate.[4]
    Brushfire polls

    Brushfire polls are polls taken during the period between the benchmark poll and tracking polls. The number of brushfire polls taken by a campaign is determined by how competitive the race is and how much money the campaign has to spend. These polls usually focus on likely voters, and the length of the survey varies with the number of messages being tested.

    Brushfire polls are used for a number of purposes. First, it lets the candidate know if they have made any progress on the ballot, how much progress has been made, and in what demographics they have been making or losing ground. Secondly, it is a way for the campaign to test a variety of messages, both positive and negative, on themselves and their opponent(s). This lets the campaign know what messages work best with certain demographics and what messages should be avoided. Campaigns often use these polls to test possible attack messages that their opponent may use and potential responses to those attacks. The campaign can then spend some time preparing an effective response to any likely attacks. Thirdly, this kind of poll can be used by candidates or political parties to convince primary challengers to drop out of a race and support a stronger candidate.
    Tracking polls

    A tracking poll is a poll repeated at intervals generally averaged over a trailing window.[5] For example, a weekly tracking poll uses the data from the past week and discards older data.

    A caution is that estimating the trend is more difficult and error-prone than estimating the level – intuitively, if one estimates the change, the difference between two numbers X and Y, then one has to contend with the error in both X and Y – it is not enough to simply take the difference, as the change may be random noise. For details, see t-test. A rough guide is that if the change in measurement falls outside the margin of error, it is worth attention.
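
    As a rough illustration of that caution, here is a sketch in Python with hypothetical weekly numbers, assuming independent samples and the usual 95% z-score of 1.96; it compares the change against the combined error of both readings:

        import math

        def margin_of_error(n, p=0.5, z=1.96):
            # Half-width of a 95% confidence interval for a proportion (worst case p = 0.5).
            return z * math.sqrt(p * (1 - p) / n)

        # Hypothetical weekly tracking-poll readings for the same candidate.
        week1, n1 = 0.44, 1000
        week2, n2 = 0.47, 1000

        # The change carries the error of both readings, so compare it against
        # the margin of error of the difference, not of a single poll.
        moe_diff = math.sqrt(margin_of_error(n1) ** 2 + margin_of_error(n2) ** 2)
        change = week2 - week1
        print(f"change {change:+.3f} vs. margin of error of the difference {moe_diff:.3f}")
        print("worth attention" if abs(change) > moe_diff else "plausibly just noise")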
    Potential for inaccuracy

    Polls based on samples of populations are subject to sampling error, which reflects the effects of chance and uncertainty in the sampling process. The uncertainty is often expressed as a margin of error. The margin of error is usually defined as the radius of a confidence interval for a particular statistic from a survey. One example is the percent of people who prefer product A versus product B. When a single, global margin of error is reported for a survey, it refers to the maximum margin of error for all reported percentages using the full sample from the survey. If the statistic is a percentage, this maximum margin of error can be calculated as the radius of the confidence interval for a reported percentage of 50%. Others suggest that a poll with a random sample of 1,000 people has a margin of sampling error of ±3% for the estimated percentage of the whole population.

    A 3% margin of error means that if the same procedure is used a large number of times, 95% of the time the true population average will be within the sample estimate plus or minus 3%. The margin of error can be reduced by using a larger sample; however, if a pollster wishes to reduce the margin of error to 1%, they would need a sample of around 10,000 people.[6] In practice, pollsters need to balance the cost of a large sample against the reduction in sampling error, and a sample size of around 500–1,000 is a typical compromise for political polls. (Note that to get complete responses it may be necessary to include thousands of additional participants.)[7]
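
    A quick sketch of that arithmetic in Python, assuming the conventional worst-case proportion p = 0.5 and a 95% z-score of 1.96 (neither is spelled out above):

        import math

        def moe(n, p=0.5, z=1.96):
            # Worst-case margin of error at 95% confidence.
            return z * math.sqrt(p * (1 - p) / n)

        print(f"n = 1,000  -> {moe(1_000):.1%}")   # about 3.1%
        print(f"n = 10,000 -> {moe(10_000):.1%}")  # about 1.0%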

    Another way to reduce the margin of error is to rely on poll averages. This assumes that the procedure is similar enough between many different polls, and it uses the sample size of each poll to create a polling average.[8] An example of a polling average can be found here: 2008 Presidential Election polling average (http://www.daytodaypolitics.com/polls/presidential_election_Obama_vs_McCain_2008.htm). Another source of error stems from faulty demographic models by pollsters who weight their samples by particular variables such as party identification in an election. For example, if you assume that the breakdown of the US population by party identification has not changed since the previous presidential election, you may underestimate a victory or a defeat of a particular party candidate that saw a surge or decline in its party registration relative to the previous presidential election cycle.
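
    A minimal sketch of a sample-size-weighted polling average, using made-up numbers rather than any real 2008 data:

        # Hypothetical polls for one candidate: (reported share, sample size).
        polls = [(0.48, 900), (0.51, 1200), (0.47, 600)]

        # Weight each poll by its sample size, as described above.
        total_n = sum(n for _, n in polls)
        average = sum(share * n for share, n in polls) / total_n
        print(f"sample-size-weighted average: {average:.1%}")  # about 49.1%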

    Over time, a number of theories and mechanisms have been offered to explain erroneous polling results. Some of these reflect errors on the part of the pollsters; many of them are statistical in nature. Others blame the respondents for not giving candid answers (e.g., the Bradley effect, the Shy Tory Factor); these can be more controversial.
    Nonresponse bias

    Since some people do not answer calls from strangers, or refuse to answer the poll, poll samples may not be representative samples from a population due to a non-response bias. Because of this selection bias, the characteristics of those who agree to be interviewed may be markedly different from those who decline. That is, the actual sample is a biased version of the universe the pollster wants to analyze. In these cases, bias introduces new errors, one way or the other, that are in addition to errors caused by sample size. Error due to bias does not become smaller with larger sample sizes, because taking a larger sample size simply repeats the same mistake on a larger scale. If the people who refuse to answer, or are never reached, have the same characteristics as the people who do answer, then the final results should be unbiased. If the people who do not answer have different opinions then there is bias in the results. In terms of election polls, studies suggest that bias effects are small, but each polling firm has its own techniques for adjusting weights to minimize selection bias.[9]
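
    A minimal sketch of the kind of weight adjustment described above, using one made-up variable (age group) and made-up numbers; real firms weight on many variables at once, but the principle is the same:

        # Hypothetical shares: over-50s are 70% of respondents but only 40% of the population.
        population_share = {"over_50": 0.40, "under_50": 0.60}
        sample_share     = {"over_50": 0.70, "under_50": 0.30}

        # Each group's weight pulls its share of the sample back to its share of the population.
        weights = {g: population_share[g] / sample_share[g] for g in population_share}

        # Hypothetical support for a candidate within each group.
        support = {"over_50": 0.55, "under_50": 0.35}

        raw      = sum(sample_share[g] * support[g] for g in support)
        weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)
        print(f"raw estimate: {raw:.1%}, weighted estimate: {weighted:.1%}")  # 49.0% vs. 43.0%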
    Response bias

    Survey results may be affected by response bias, where the answers given by respondents do not reflect their true beliefs. This may be deliberately engineered by unscrupulous pollsters in order to generate a certain result or please their clients, but more often is a result of the detailed wording or ordering of questions (see below). Respondents may deliberately try to manipulate the outcome of a poll by e.g. advocating a more extreme position than they actually hold in order to boost their side of the argument or give rapid and ill-considered answers in order to hasten the end of their questioning. Respondents may also feel under social pressure not to give an unpopular answer. For example, respondents might be unwilling to admit to unpopular attitudes like racism or sexism, and thus polls might not reflect the true incidence of these attitudes in the population. In American political parlance, this phenomenon is often referred to as the Bradley effect. If the results of surveys are widely publicized this effect may be magnified - a phenomenon commonly referred to as the spiral of silence.
    Wording of questions
  • AZDuck Member Posts: 15,381

    Opinion poll
    From Wikipedia, the free encyclopedia


    An opinion poll, sometimes simply referred to as a poll, is a survey of public opinion from a particular sample.

    Doog.

  • TierbsHsotBoobs Member Posts: 39,680

    Opinion poll
    From Wikipedia, the free encyclopedia

    An opinion poll, sometimes simply referred to as a poll, is a survey of public opinion from a particular sample. Opinion polls are usually designed to represent the opinions of a population by conducting a series of questions and then extrapolating generalities in ratio or within confidence intervals.

    Disagree.




    Your best effort.
  • ApostleofGrief Member Posts: 3,904
    AZDuck said:

    Opinion poll
    From Wikipedia, the free encyclopedia


    An opinion poll, sometimes simply referred to as a poll, is a survey of public opinion from a particular sample.

    Doog.

    YOU ARE A FUKKAH
  • Mad_Son Member Posts: 10,171
    "I feel a sense of belonging when the Washington Huskies beat the Oregon Ducks in Football."

    Uh... Last time UW beat Oregon I didn't realize I needed to savor the moment... fuck, at least I was at the game.