In many fields of research right now, scientists collect data until they see a pattern that appears statistically significant, and then they use that tightly selected data to publish a paper. Critics have come to call this p-hacking, and the practice uses a quiver of little methodological tricks that can inflate the statistical significance of a finding. As enumerated by one research group, the tricks can include:
- “conducting analyses midway through experiments to decide whether to continue collecting data,”
- “recording many response variables and deciding which to report postanalysis,”
- “deciding whether to include or drop outliers postanalyses,”
- “excluding, combining, or splitting treatment groups postanalysis,”
- “including or excluding covariates postanalysis,”
- “and stopping data exploration if an analysis yields a significant p-value.”
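The first and last tricks on that list, peeking at the data mid-experiment and stopping as soon as a p-value dips below 0.05, can be made concrete with a small simulation. The sketch below is a hypothetical illustration (not from the source): it draws data with no real effect, compares a fixed-sample test against a "peek after every batch" strategy, and uses a normal approximation to the one-sample t-test for simplicity. Even with pure noise, the peeking strategy declares "significance" well above the nominal 5% rate.

```python
import math
import random
import statistics

def approx_pvalue(xs):
    # Two-sided one-sample test of mean 0, using a normal
    # approximation to the t distribution (adequate for n >= 20).
    n = len(xs)
    t = statistics.mean(xs) / (statistics.stdev(xs) / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

def fixed_n_test(rng, n=100):
    # Honest design: collect all n observations, then test once.
    xs = [rng.gauss(0, 1) for _ in range(n)]
    return approx_pvalue(xs) < 0.05

def peeking_test(rng, batch=20, max_n=100):
    # P-hacked design: test after every batch and stop collecting
    # data the moment the result looks "significant".
    xs = []
    while len(xs) < max_n:
        xs.extend(rng.gauss(0, 1) for _ in range(batch))
        if approx_pvalue(xs) < 0.05:
            return True
    return False

rng = random.Random(0)
trials = 2000
fixed = sum(fixed_n_test(rng) for _ in range(trials)) / trials
peek = sum(peeking_test(rng) for _ in range(trials)) / trials
print(f"fixed-n false positive rate: {fixed:.3f}")  # near the nominal 0.05
print(f"peeking false positive rate: {peek:.3f}")   # noticeably higher
```

Because every peek is another chance for noise to cross the threshold, the error rates of the repeated looks compound; this is why the quoted list singles out mid-experiment analyses as a p-hacking technique.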
Add it all up, and you have a significant problem in the way our society produces knowledge.
A map is not just a picture: it's also the data behind the map, the methodology used to collect and parse that data, the people doing that work, the choices made in terms of visualization, and the software used to make them. A map is also a representation of the world, which in some ways must always be a little inaccurate; most maps, after all, show the roughly spherical world on a flat surface. Certain things are always left off or highlighted while others are altered, as no map can show everything at once. All of those choices and biases, conscious or not, can have important effects on the map itself. We may be looking at something inaccurate, misleading, or incorrect without realizing it.
Source: When Maps Lie