Mastering Tokenization: Key to Successful AI Applications

By Mark Howell · 7 min read

Last week, I was helping a friend of mine get one of his new apps off the ground. I can’t speak much about it at the moment, other than that, like most apps nowadays, it has some AI sprinkled over it. Ok, maybe a bit more than just a bit, depending on how you look at it. There is a Retrieval-Augmented Generation (RAG) pipeline hiding somewhere in most AI apps. RAG is still all the RAGe; it even has its own Wikipedia page now! I’m not sure if anyone tracks how fast a term reaches the point where it gets its own Wiki page, but RAG must be somewhere near the top of the charts. I find it quite intriguing that most of the successful AI apps are basically clever semantic search apps. Google search got somewhat unbundled at last, which makes me wonder whether their holding back the LLM tech for so long is what made this possible. But I digress. The app my friend has been building for the past couple of weeks deals with a lot of e-commerce data: descriptions of different items, invoices, reviews, etc. The problem he was facing was that the RAG worked very well for some queries but poorly for others.

Understanding Tokenization in AI Apps

One of the things I’ve noticed over the past year is how many developers who are used to working in the traditional (deterministic) space fail to adjust the way they think about problems in the statistical space, which is ultimately where LLM apps live. Statistics is more “chaotic” and abides by different rules than “traditional” computer science algorithms. Look, I get it, it’s still maths, but it’s often a very different kind of maths. What I usually see is folks treating LLMs as tools you can feed anything and get back gold, but when you do that, you usually run into the “Garbage In, Garbage Out” reality. Which was almost what was happening in the curious case my friend was dealing with.

Image: Visual representation of the tokenization process.
The first thing I do when dealing with these types of problems is getting familiar with the input data. You need to understand it before you can do anything meaningful with it. In this case, the input data was both the raw text that was indexed and stored in the vector database and the user queries used in the retrieval. Nothing really struck a chord at first glance, but based on my previous experience, I started suspecting two things: chunking and tokenization. Chunking is more or less a fixable problem with some clever techniques that are pretty well documented around the internet; besides, chunking is a moving target and will only get you so far if your text tokens are garbage.

The Importance of Tokenization

In this post, I want to focus on tokenization because I feel it’s one of those things that is somewhat understood at a high level, but the deeper you dig in, the more gaps in your knowledge you discover, and from my experience it’s often those gaps that make or break AI apps. I’m hoping the practical examples in this post will convince you why you should pay attention to tokenizers.
Tokenization is the process during which a piece of text is broken down into smaller pieces, tokens, by a tokenizer. These tokens are then assigned integer values (a.k.a. token IDs) which uniquely identify them within the tokenizer’s vocabulary. The vocabulary is the set of all possible tokens produced during the tokenizer’s training: yes, tokenizers are trained (the term is a bit overloaded, because tokenizer training is quite different from neural network training). You can train your own tokenizer and restrict its token space through various parameters, including the size of the vocabulary.
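To make this concrete, here is a minimal sketch using the tiktoken library (my choice for illustration, not something the friend’s app necessarily uses). It tokenizes a short product description and prints the token IDs alongside the text pieces they map back to.

```python
# Minimal sketch using tiktoken (pip install tiktoken).
# cl100k_base is the encoding used by several OpenAI models; your LLM
# may well use a different tokenizer, so treat this as illustrative only.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Wireless headphones, $59.99 (20% off)"
token_ids = enc.encode(text)                        # integer IDs into the vocabulary
tokens = [enc.decode([tid]) for tid in token_ids]   # the text pieces they correspond to

print(token_ids)
print(tokens)
# Notice how the price and the parentheses get split into several tokens:
# numbers and punctuation rarely survive as single tokens.
```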

Now, if you started asking yourself what happens when some token in your text does not exist in the vocabulary of the tokenizer used by your LLM, then you probably see where this is headed: usually a world of pain. Do not panic! LLM vocabularies are pretty huge (roughly 30k to 300k tokens), and unknown words generally get split into smaller subword tokens that are in the vocabulary. Still, different LLMs use different types of tokenizers, and it’s usually a good idea to know which one is used by the LLM you are trying to use in your app.
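A quick way to see how differently tokenizers behave is to run the same string through two of them. The sketch below uses the Hugging Face transformers package and two commonly available tokenizers; the SKU string is a made-up example in the spirit of the e-commerce data mentioned above.

```python
# Illustrative only: comparing how two common tokenizers split the same text
# (requires the transformers package and downloads the tokenizer files).
from transformers import AutoTokenizer

text = "SKU-4821 ergonomic kneeling chair"

for name in ("bert-base-uncased", "gpt2"):
    tok = AutoTokenizer.from_pretrained(name)
    print(name, "vocab size:", tok.vocab_size)
    print(tok.tokenize(text))
# BERT's WordPiece marks subword pieces with '##', while GPT-2's byte-level
# BPE marks word boundaries with 'Ġ'. Rare strings like SKU codes fragment
# into many tokens in both.
```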

Embeddings and Their Role

Tokenizers on their own are kinda... useless; they were originally developed for numerical analysis of texts, mostly based on the frequencies of individual tokens. What we need is context. We need to somehow capture the relationships between the tokens in the text to preserve its meaning. There is a better tool for that: embeddings, i.e. vectors representing tokens, which are much better at capturing the meaning of and relationships between the words in a text.
Embeddings are a byproduct of transformer training and are effectively trained on heaps of tokenized text. It gets better: embeddings are what is actually fed as input to LLMs when we ask them to generate text. The transformer architecture consists of two main components: an encoder and a decoder. Both accept embeddings as their input. Furthermore, the output of the encoder is also a set of embeddings, which is passed into the decoder’s cross-attention, and that in turn plays a fundamental role in generating (predicting) the tokens in the decoder’s output.
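Here is a small, hedged sketch of the “embeddings capture meaning” claim, using the sentence-transformers library and a small embedding model (both are my illustrative choices, not anything prescribed by the article): two similar product descriptions end up close together in vector space, while an unrelated one lands far away.

```python
# Sketch using sentence-transformers (pip install sentence-transformers).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # a small, widely used embedding model

sentences = [
    "Wireless noise-cancelling headphones",
    "Bluetooth over-ear headset with ANC",
    "Stainless steel kitchen knife set",
]
embeddings = model.encode(sentences)

print(util.cos_sim(embeddings[0], embeddings[1]))  # high similarity: same kind of product
print(util.cos_sim(embeddings[0], embeddings[2]))  # low similarity: unrelated product
```

This is exactly the property a RAG retrieval step relies on: the query embedding should land near the embeddings of the relevant chunks.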

Image: Diagram of a typical transformer architecture.
The diagram above shows what a typical transformer architecture looks like. So in your RAG pipeline, your text is first tokenized, then embedded, then fed into the transformer, where attention does its magic to make things work well. Earlier I said that token IDs are essentially indexes into the tokenizer’s vocabulary. These IDs are also used to look up rows of the embedding matrix, which are assembled into a tensor that is fed to the input of the transformer.
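The lookup described above is easy to show in code. The following is a toy PyTorch illustration with made-up sizes and made-up token IDs; it is not any particular model’s embedding matrix.

```python
# Toy illustration: token IDs are just row indices into an embedding matrix,
# and the resulting tensor is what the transformer actually consumes.
import torch
import torch.nn as nn

vocab_size, embed_dim = 50_000, 768            # made-up sizes for the example
embedding_matrix = nn.Embedding(vocab_size, embed_dim)

token_ids = torch.tensor([[101, 7592, 2088, 102]])  # (batch=1, seq_len=4), arbitrary IDs
input_embeddings = embedding_matrix(token_ids)       # shape: (1, 4, 768)
print(input_embeddings.shape)
```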

Conclusion

I hope this post gave you a better idea of how tokenizers may influence your RAG apps and why you should pay at least some attention to them. More importantly, I hope you now understand that feeding garbage in will not pay the dividends you might expect from your agentic applications. A little bit of cleaning of the input text can go a long way: standardize the format of your dates so they’re consistent throughout your corpus; strip trailing and redundant whitespace, which can noticeably shift the resulting tokens and embeddings; and do the same for other numerical data, like prices in different currencies.
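As a rough idea of what that clean-up step might look like before indexing, here is a hypothetical normalization function. The specific rules (date order, currency spelling) are assumptions for illustration; the point is simply to make the text consistent so it tokenizes consistently.

```python
# Hypothetical pre-indexing clean-up along the lines described above.
import re
from datetime import datetime

def normalize(text: str) -> str:
    text = re.sub(r"[ \t]+", " ", text).strip()          # collapse whitespace, drop trailing spaces
    text = re.sub(
        r"\b(\d{1,2})/(\d{1,2})/(\d{4})\b",              # e.g. 3/7/2024 -> 2024-07-03 (assumes D/M/Y input)
        lambda m: datetime(int(m[3]), int(m[2]), int(m[1])).strftime("%Y-%m-%d"),
        text,
    )
    text = text.replace("US$", "$")                      # pick one currency spelling and stick to it
    return text

print(normalize("Delivered  on 3/7/2024, price US$59.99  "))
# -> "Delivered on 2024-07-03, price $59.99"
```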
Remember these 3 key ideas for your startup:

  1. Understand the Importance of Tokenization: Tokenization is crucial for AI apps, especially those using LLMs. It breaks down text into manageable tokens, which are then used for analysis and processing. Understanding how tokenization works can help improve the accuracy and efficiency of your AI applications.

  2. Leverage Embeddings for Contextual Understanding: Embeddings capture the relationships between tokens, preserving the meaning of the text. They are essential for providing context in AI models, ensuring that the generated outputs are meaningful and relevant.

  3. Optimize Your Input Data: Clean and standardize your input data to avoid the "Garbage In, Garbage Out" scenario. Pay attention to details like date formats, currency symbols, and spacing to ensure your AI models perform optimally.



