Blog

2024

Tokenization: The Red Pill to See Past The Matrix

7 minute read

Tokenization is a necessary and often overlooked component of large language models. In this post, we explore why tokenization matters and how it may well be the key to unlocking the advanced capabilities we expect from future AI systems.
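As a minimal illustration of what a tokenizer actually does, here is a sketch assuming the Hugging Face transformers library and the GPT-2 BPE vocabulary (the post itself is not tied to any particular library):

```python
# A minimal sketch of subword tokenization, assuming the Hugging Face
# `transformers` library and the GPT-2 BPE vocabulary. The model never
# sees characters or words directly -- only these integer token IDs.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = "Tokenization is overlooked."
tokens = tokenizer.tokenize(text)  # subword pieces, e.g. ['Token', 'ization', ...]
ids = tokenizer.encode(text)       # the integer IDs the model actually consumes

print(tokens)
print(ids)
print(tokenizer.decode(ids))       # round-trips back to the original text
```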

Part 1 - The Hidden Geometry of Large Language Models: Implications for Safety & Toxicity

3 minute read

At Tenyx, we’ve spent countless hours peering into the intricate workings of Large Language Models (LLMs). Today, we’re excited to share our research, conducted in collaboration with Brown University, that sheds light on the geometric structures and transformations governing these models. Our work provides new insights into how LLMs process their inputs and the implications for AI safety in applications driven by LLMs.

2023

Forgetting and Toxicity in LLMs: A Deep Dive on Fine-Tuning Methods

6 minute read

Fine-tuning is a common procedure in which a pretrained language model is further trained on a domain-specific dataset to improve performance in that domain (e.g., a chatbot that answers enterprise-specific Q&A, or a hotel booking agent). It has been known for some time (if not widely appreciated) that fine-tuning a model on new data degrades its performance on the original pretraining data (the dreaded “catastrophic forgetting” problem in ML). But by how much? And do all fine-tuning methods degrade performance in the same ways, and to the same extent?
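To make the forgetting question concrete, here is a minimal sketch of how one might measure it: compare a model’s loss on general-domain text before and after a fine-tuning update on domain text. This assumes PyTorch and Hugging Face transformers; the model name, example texts, and single optimizer step are illustrative stand-ins, not the post’s actual experimental setup (real measurements use full fine-tuning runs and proper evaluation sets):

```python
# A minimal sketch of measuring catastrophic forgetting: track the loss
# on general-domain text before and after fine-tuning on domain text.
# Assumes PyTorch + Hugging Face transformers; everything concrete here
# (model, texts, hyperparameters) is illustrative, not the post's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # hypothetical choice; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

general_text = "The quick brown fox jumps over the lazy dog."
domain_text = "Our suite rates include breakfast and late checkout."

def loss_on(text):
    batch = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        return model(**batch, labels=batch["input_ids"]).loss.item()

before = loss_on(general_text)

# One tiny fine-tuning step on domain data (real runs use many batches).
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
batch = tokenizer(domain_text, return_tensors="pt")
model(**batch, labels=batch["input_ids"]).loss.backward()
optimizer.step()

after = loss_on(general_text)
print(f"general-domain loss: {before:.3f} -> {after:.3f}")
```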

2020

Representing data using graphs: A sparse signal approximation view

7 minute read

Graph-driven machine learning has seen a surge of interest in the past few years, with applications in the social sciences, biology, and network analysis, to name a few. However, in some scenarios no graph is given a priori, and one has to infer and construct a graph that fits the given data.
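When no graph is given, a common baseline is to build one directly from pairwise distances, connecting each point to its nearest neighbors. The sketch below uses scikit-learn’s k-NN construction as a generic illustration of graph inference; it is not the sparse signal approximation method the post develops:

```python
# A minimal sketch of inferring a graph from raw data when none is
# given a priori: connect each point to its k nearest neighbors.
# This is a generic k-NN baseline, not the sparse-approximation
# construction the post develops. Assumes NumPy and scikit-learn.
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))  # 100 data points in R^5

# Sparse adjacency matrix; edge weights are Euclidean distances.
A = kneighbors_graph(X, n_neighbors=5, mode="distance")

# Symmetrize so the resulting graph is undirected.
A = 0.5 * (A + A.T)
print(A.shape, A.nnz)
```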
