Today in Edworking News we want to talk about why wordfreq will not be updated.

The wordfreq data is a snapshot of language that could be found in various online sources up through 2021. There are several reasons why it will not be updated anymore.
Generative AI has polluted the data
I don't think anyone has reliable information about post-2021 language usage by humans. The open Web (via OSCAR) was one of wordfreq's data sources. Now the Web at large is full of slop generated by large language models, written by no one to communicate nothing. Including this slop in the data skews the word frequencies. Sure, there was spam in the wordfreq data sources, but it was manageable and often identifiable.
Large language models generate text that masquerades as real language with intention behind it, even though there is none, and their output crops up everywhere. As one example, Philip Shapira reports that ChatGPT (OpenAI's popular brand of generative language model circa 2024) is obsessed with the word "delve" in a way that people never have been, and caused its overall frequency to increase by an order of magnitude.

Information that used to be free became expensive
wordfreq is not just concerned with formal printed words. It collected more conversational language usage from two sources in particular: Twitter and Reddit. The Twitter data was always built on sand.
Even when Twitter allowed free access to a portion of their "firehose", the terms of use did not allow me to distribute that data outside of the company where I collected it (Luminoso). wordfreq has the frequencies that were built with that data as input, but the collected data didn't belong to me and I don't have it anymore. Now Twitter is gone anyway, its public APIs have shut down, and the site has been replaced with an oligarch's plaything, a spam-infested right-wing cesspool called X. Even if X made its raw data feed available (which it doesn't), there would be no valuable information to be found there. Reddit also stopped providing public data archives, and now they sell their archives at a price that only OpenAI will pay. And given what's happening to the field, I don't blame them. I don't want to be part of this scene anymore wordfreq used to be at the intersection of my interests. I was doing corpus linguistics in a way that could also benefit natural language processing tools.
The field I know as "natural language processing" is hard to find these days. It's all being devoured by generative AI. Other techniques still exist but generative AI sucks up all the air in the room and gets all the money. It's rare to see NLP research that doesn't have a dependency on closed data controlled by OpenAI and Google, two companies that I already despise. wordfreq was built by collecting a whole lot of text in a lot of languages. That used to be a pretty reasonable thing to do, and not the kind of thing someone would be likely to object to. Now, the text-slurping tools are mostly used for training generative AI, and people are quite rightly on the defensive. If someone is collecting all the text from your books, articles, Web site, or public posts, it's very likely because they are creating a plagiarism machine that will claim your words as its own.

So I don't want to work on anything that could be confused with generative AI, or that could benefit generative AI. OpenAI and Google can collect their own damn data. I hope they have to pay a very high price for it, and I hope they're constantly cursing the mess that they made themselves. — Robyn Speer
Summary
The wordfreq project, a tool that provided a snapshot of language usage from various online sources up until 2021, will no longer be updated. The decision stems from several critical issues that have emerged over the past few years, primarily revolving around the influence of generative AI and the changing landscape of data accessibility.
Generative AI's Impact
One of the primary reasons for discontinuing updates to wordfreq is the pollution of data by generative AI. The web, which was a significant data source for wordfreq, is now inundated with content generated by large language models. This content, often devoid of genuine human intent or communication, skews the frequencies of words in a way that makes the data unreliable. For instance, ChatGPT has been reported to have an unnatural obsession with the word "delve," causing its frequency to spike disproportionately.
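To make the skew concrete, here is a minimal, self-contained sketch of the effect. It uses a tiny hypothetical corpus (the sentences and the `relative_frequency` helper are illustrative, not part of wordfreq itself): mixing machine-generated filler that overuses "delve" into human text inflates that word's measured frequency.

```python
from collections import Counter

def relative_frequency(word: str, corpus: list[str]) -> float:
    """Fraction of all tokens in the corpus that are `word`."""
    counts = Counter(corpus)
    total = sum(counts.values())
    return counts[word] / total if total else 0.0

# A tiny hypothetical "human" corpus: "delve" is rare.
human = ("we explore the data and discuss the results "
         "we explore related work and discuss limitations "
         "we delve into one edge case").split()

# The same corpus with machine-generated filler mixed in,
# where "delve" appears far more often than humans use it.
polluted = human + ("let us delve into this topic "
                    "we delve deeper as we delve into the details").split()

before = relative_frequency("delve", human)
after = relative_frequency("delve", polluted)
print(f"before: {before:.3f}  after: {after:.3f}")
```

A frequency list built from the polluted corpus would report "delve" as several times more common than it is in human writing, which is exactly the kind of distortion that makes a post-2021 snapshot unreliable.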
The Cost of Data
Another significant factor is the increasing cost of data that was once freely accessible. Platforms like Twitter and Reddit were valuable sources of conversational language data for wordfreq. However, Twitter's public APIs have been shut down, and the platform has transformed into a less reliable source of information. Reddit, on the other hand, has started selling its data archives at prices only large entities like OpenAI can afford. This shift has made it impractical for smaller projects like wordfreq to continue using these sources.
Shifting Focus in NLP
The field of Natural Language Processing (NLP) has also undergone significant changes. The rise of generative AI has overshadowed other NLP techniques, drawing most of the attention and funding. This monopolization by companies like OpenAI and Google has made it challenging for independent projects to thrive. The tools and methods that were once used for corpus linguistics are now primarily employed to train generative AI models, often leading to ethical concerns about data usage and ownership.
Remember these 3 key ideas for your startup:
Data Integrity is Crucial: Ensure that the data you rely on is free from significant distortions. The rise of generative AI has shown how easily data can be polluted, affecting the reliability of your insights and decisions.
Adapt to Changing Data Accessibility: Be prepared for shifts in how data is accessed and priced. Platforms that once offered free data may start charging for it, impacting your operational costs and data strategies.
Stay Ethical and Transparent: As the landscape of NLP and AI evolves, maintain ethical standards in data collection and usage. Avoid practices that could be perceived as exploitative or invasive, and be transparent with your stakeholders about your data sources and methodologies.
Edworking is the best and smartest decision for SMEs and startups that want to be more productive. Edworking is a FREE productivity superapp that includes everything you need for work, powered by AI, in one place: Task Management, Docs, Chat, Videocall, and File Management. Save money today by not paying for Slack, Trello, Dropbox, Zoom, and Notion.
For more details, see the original source.