UNPOPULAR OPINION. 99% of my feed is lashing out at this post, but hear me out. The ACTUAL paper is presented as an attempt to replicate known concepts with large-scale quantitative methods. Indeed, the paper addresses its limitations vs. historical linguistics research! 1/8 https://twitter.com/Princeton/status/1296779082663964673
The article does not claim Groundbreaking Novelty Whoa This Has Never Been Done Before(TM). The tweet and the @physorg_com article do. @kennysmithed here explains it perfectly. 2/8 https://twitter.com/kennysmithed/status/1297496082092613637?s=20
What went wrong? @physorg_com should not have claimed that this is the "first large-scale, data-driven study" of semantic alignment. It is the first APPLICATION of #MachineLearning to this field. Is it supremely cool? YES. Is it groundbreaking? NO. 3/8
Now, there are a few takeaways to draw from this mess. 4/8
Takeaway no. 1: #scicomm MUST DO BETTER. It must disengage from the obsession with novelty. Progress is incremental, based on interdisciplinary cooperation, and rarely fueled by random eurekas. It's complex. Guess what? We, the readers, can take complex. We LOVE it. 5/8
Takeaway no. 2: Humanists must make the effort to read scientific papers instead of publicity. We preach the importance of primary sources, don't we? It is our duty, more than anyone else's, to take that extra step. And to direct criticism precisely where it is needed. 6/8
Takeaway no. 3: We all need to stop fostering toxic dynamics of us vs. them. Quantitative & qualitative can coexist. The future is interdisciplinary. We must all work together to make such projects happen and keep learning from & with each other. 7/8
Here's the actual paper, published by @NatureHumBehav https://www.nature.com/articles/s41562-020-0924-8 8/8