Katie Rawson and Trevor Muñoz, the authors of this article, make "data cleaning" their core topic, focusing specifically on its downsides. Their main concerns are the loss of validity and the reductiveness that data cleaning can cause. When the complexity of data is stripped away during cleaning, research studies become less faithful to real life, and therefore less relevant to the issue at hand. In the end, the authors propose a new approach to handling complex data: building systems that can explore "strange" results instead of eliminating them entirely.
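The reductiveness the authors worry about can be shown with a small sketch of my own (a hypothetical example, not taken from the article): an aggressive normalizer that lowercases text, strips accents, and drops parenthetical notes will collapse genuinely different records into one, erasing exactly the variation a researcher might want to study.

```python
import unicodedata

def clean(name):
    """Aggressively normalize a record: strip accents,
    drop parenthetical notes, lowercase. This is a toy
    'cleaning' step, not the authors' method."""
    # Decompose accented characters, then remove the accent marks
    name = unicodedata.normalize("NFKD", name)
    name = "".join(c for c in name if not unicodedata.combining(c))
    # Discard anything in a trailing parenthetical note
    name = name.split("(")[0]
    return name.strip().lower()

# Three distinct raw entries (hypothetical data)
dishes = ["Chicken à la King", "Chicken a la King", "CHICKEN A LA KING (cold)"]
cleaned = {clean(d) for d in dishes}
print(len(dishes), "raw entries ->", len(cleaned), "cleaned entry")
# → 3 raw entries -> 1 cleaned entry
```

Three entries that differ in spelling, accent, and a serving note become indistinguishable after cleaning; whether that is a fix or a loss depends entirely on the research question.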

This seems to be a debate that is not exactly new to research fields. I can tell because the authors' purpose is not to get rid of data cleaning entirely, but to change how it is done. That change is first suggested when the authors describe how harmful the current practice can be. The main alteration they propose is a new kind of system that values the different and unique qualities of data. This is not a sudden revelation, since data cleaning has been a standard part of research for years. Still, the article makes a significant argument: data cleaning is a piece of an old system in dire need of an update. And it does all of this without dismissing data cleaning as a practice altogether.

While I may not know much about data cleaning or the arguments against it, I can tell this is a controversial topic, since data cleaning has long been used to improve data quality after a study is conducted. Fortunately, the way this article is written gets its point across without trying to remove data cleaning from the list of research practices. New researchers who read it can now understand the drawbacks of data cleaning without losing sight of its benefits.