Not at all. A purely random selection method is about as fair and equal as you can get; where the variance comes in is which part of the invited population actually chooses to respond.
Just going to say, the amulets are a random chance, and you have just as much of a chance of winning one on a free account as on a paying one. Amulets were chosen due to their popularity and wide acceptance. Any assumption beyond that is worthy of a tin foil hat.
You two really are arguing a non-point, and it should stop. I spent time on this survey to achieve two goals: to learn more about our player base, and to look at various factors that people are interested in or have opinions on.
Surveys happen; it's not uncommon for surveys to:
- Have random samples that are a portion of a larger population
- Offer a small reward
- Examine the data with the understanding of various variance risk factors
Eutopeus - I'm sure people would respond without a reward; however, I'm sure our response rate would be far below 1%, and the respondents would be a fairly skewed population. Adding a reward reduces the chances of both of those occurring.
I don't want to close this thread, but if the sniping continues, it's "gone daddy gone."
Possibly, wouldn't be a first for me... and maybe you did, it's not a big deal. :)
Simply put, it was a straight random sample of the player population across both ages. Purchasing of any type was not a guiding selection factor. I may in the future send out a survey only for free players, or one only for paying players, but this time it was for everyone, which means the whole invited group is a broad representation of the players at large.
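A "straight random sample" like that is simple to sketch. The player list, sample size, and seed below are all hypothetical; the point is only that sampling without replacement gives every player the same chance of being invited, regardless of purchasing:

```python
import random

# Hypothetical player IDs standing in for the full player list
# across both ages; no purchase history is consulted anywhere.
players = [f"player_{i}" for i in range(100_000)]

random.seed(42)  # fixed seed so the example is reproducible
invited = random.sample(players, k=5_000)  # simple random sample

print(len(invited))       # 5000 invitations
print(len(set(invited)))  # 5000 unique players (no replacement)
```

Who actually responds out of `invited` is then the self-selection step the thread is arguing about.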
I defined the criteria. :)
Why do it?
Quote:
Could you tell us why the survey was conducted, what Evony was trying to learn, and who the target audience was (or the guiding selection factors)? (P.S. You can be vague like most replies made by a blue (not you, I actually give you credit for being the most in-depth), but maybe touch on each of the points I asked?)
I want to know more about the player base in general, and their opinions on certain questions. Surveys are an important tool for any product that is eventually offered to a customer: they help you find out who uses what you are offering and what they want. It's a far more effective and practical approach than, say, reading a mature forum for weeks while taking notes.
The target audience was a representative sample of the player base.
As for results, what specifically I was looking to achieve, etc.: I can't talk about that, as these data are trade secrets.
The last time I spent money was in July of 2009. Maybe August. Not sure. It was whenever the 15-week "Premium Rewards" promotion happened. After that ended, my server was Scrooged (Christmas Eve / Day server merge) and the employees refused to address a valid promotional item overlap, so I have not spent anything since.
The problem with that viewpoint is that a large portion of the population has more than one account on a server. Your statistical sample is polluted if the same physical person submits a survey from multiple accounts. One person, one vote; that's what the voting system in this country is supposed to be like.
If one person has more than one vote, and a response to a survey is technically a "vote", then the results must either be considered invalid or be published with a disclaimer noting that the survey was non-scientific, that people could vote more than once, and stating a margin of error. This is on a technical level, not an emotional one. I can almost guarantee that there are insufficient checks and balances on the survey results, making the statistical sample a biased one. A quick look at some documentation on the web shows that what I'm talking about is known as "selection bias".
What I'm saying, and I don't for the life of me know why it's such a difficult concept to grasp, is that if the same person takes the survey more than once, and the survey system counts each submission as a unique respondent, then that is an error, and the results will be skewed. Since I've taken the survey, I can guarantee that someone botting or using multiple alts will answer some of the questions much differently than I did. If the survey gets sent to Actual Person 1 at their accounts Main 1 and Alt 1, it is all but assured that they will answer the survey the same way for both accounts. This means the statistical sample will be weighted in a biased manner, and Dave won't know how much to reweight the results to correct for the selection bias.
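The skew being described can be shown with a toy simulation. Every number here is made up: 1,000 real people, 30% of whom hold opinion "A", where "A" holders are assumed to be more likely to run alts and submit from each account. Counting submissions instead of people then overstates "A":

```python
import random

random.seed(0)

# 1,000 real people; 30% genuinely hold opinion "A" (hypothetical rate).
people = ["A" if random.random() < 0.30 else "B" for _ in range(1000)]

# Each account submits once, and a person answers the same way from
# every account they own. Assume half of "A" holders run two alts.
responses = []
for opinion in people:
    accounts = 3 if (opinion == "A" and random.random() < 0.5) else 1
    responses.extend([opinion] * accounts)

true_rate = people.count("A") / len(people)
measured_rate = responses.count("A") / len(responses)
print(f"true: {true_rate:.2f}  measured: {measured_rate:.2f}")
# measured > true: per-submission counting overweights alt-runners
```

The fix is exactly the "one person, one vote" rule: deduplicate submissions by physical person (or at least by verified identity) before tabulating, since no amount of after-the-fact weighting helps if you don't know who owned which alts.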