About xLSTM
xLSTM is a cutting-edge Recurrent Neural Network (RNN) architecture inspired by the original Long Short-Term Memory (LSTM), but enhanced with new mechanisms for improved performance. By incorporating Exponential Gating, advanced normalization techniques, and a Matrix Memory, xLSTM addresses the inherent limitations of traditional LSTM models and demonstrates notable improvements in Language Modeling tasks compared to Transformers or State Space Models.
Installation Guide
Minimal Installation
Create a conda environment using the `environment_pt220cu121.yaml` file.
Install the xLSTM module as a package:
- Via pip.
- By cloning the repository from GitHub.
Requirements
This package relies on PyTorch (tested for versions >=1.8). For the CUDA version of xLSTM, ensure your GPU has a compute capability of 8.0 or higher; refer to NVIDIA's CUDA GPUs compatibility list for details. For a robust setup, it's advised to use the provided `environment_pt220cu121.yaml`.
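If you are unsure whether your GPU qualifies, PyTorch can report its compute capability directly. The following is a minimal check, assuming PyTorch is already installed; the 8.0 threshold is the requirement stated above.

```python
import torch

# Report the compute capability of the first visible GPU and compare it
# against the >= 8.0 requirement for the CUDA version of xLSTM.
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"{torch.cuda.get_device_name(0)}: compute capability {major}.{minor}")
    print("Meets the >= 8.0 requirement:", (major, minor) >= (8, 0))
else:
    print("No CUDA device visible to PyTorch; the CUDA kernels cannot be used.")
```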
Usage
For non-language applications or integration into other architectures, use `xLSTMBlockStack`.
For language modeling or token-based applications, employ `xLSTMLMModel`.
xLSTM Block Stack
The `xLSTMBlockStack` is ideal as an alternative backbone in existing projects, analogous to a stack of Transformer blocks but built from xLSTM blocks. It can be configured from YAML strings or files, with tools like dacite used to instantiate the config dataclasses.

Below is an example of the xLSTMBlockStack configured for integration.
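This is a minimal sketch of that workflow, not a definitive recipe: it assumes the package exposes an `xLSTMBlockStackConfig` dataclass alongside `xLSTMBlockStack`, and the YAML field names (e.g. `num_blocks`, `embedding_dim`, `slstm_at`) follow the upstream repository's examples and may differ in newer versions.

```python
import torch
from dacite import from_dict
from omegaconf import OmegaConf
from xlstm import xLSTMBlockStack, xLSTMBlockStackConfig  # config class name assumed

# YAML config: mLSTM blocks with one sLSTM block at position 1 (field names assumed).
xlstm_cfg = """
mlstm_block:
  mlstm:
    conv1d_kernel_size: 4
    qkv_proj_blocksize: 4
    num_heads: 4
slstm_block:
  slstm:
    backend: cuda  # requires the compute capability >= 8.0 setup described above
    num_heads: 4
    conv1d_kernel_size: 4
  feedforward:
    proj_factor: 1.3
    act_fn: gelu
context_length: 256
num_blocks: 7
embedding_dim: 128
slstm_at: [1]
"""

cfg = from_dict(
    data_class=xLSTMBlockStackConfig,
    data=OmegaConf.to_container(OmegaConf.create(xlstm_cfg)),
)
xlstm_stack = xLSTMBlockStack(cfg).to("cuda")

# The stack maps (batch, sequence, embedding_dim) to the same shape,
# so it can stand in for a Transformer backbone in an existing model.
x = torch.randn(4, 256, 128, device="cuda")
y = xlstm_stack(x)
assert y.shape == x.shape
```

On machines without a suitable GPU, the sLSTM `backend` setting would need to point at a non-CUDA implementation if the package provides one; the upstream README documents the available options.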
xLSTM Language Model
The `xLSTMLMModel` is essentially a wrapper around the `xLSTMBlockStack`, adding a token embedding layer and a language modeling head, making it specifically tailored for language-based tasks.
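A similarly hedged sketch for the language model wrapper is shown below; the `xLSTMLMModelConfig` class name and the `vocab_size` field are assumptions based on the upstream repository, not something spelled out in this article.

```python
import torch
from dacite import from_dict
from omegaconf import OmegaConf
from xlstm import xLSTMLMModel, xLSTMLMModelConfig  # config class name assumed

# Same block-stack options as above, plus a vocabulary size for the
# token embedding and language modeling head (field name assumed).
lm_cfg = """
vocab_size: 50304
mlstm_block:
  mlstm:
    conv1d_kernel_size: 4
    qkv_proj_blocksize: 4
    num_heads: 4
context_length: 256
num_blocks: 4
embedding_dim: 128
"""

cfg = from_dict(
    data_class=xLSTMLMModelConfig,
    data=OmegaConf.to_container(OmegaConf.create(lm_cfg)),
)
model = xLSTMLMModel(cfg).to("cuda")

# Token IDs in, next-token logits out: (batch, sequence) -> (batch, sequence, vocab_size).
tokens = torch.randint(0, 50304, size=(4, 256), device="cuda")
logits = model(tokens)
assert logits.shape == (4, 256, 50304)
```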
Experiments
To assess the advantages of xLSTM over traditional LSTM variants, synthetic experiments are included, for instance:
Parity Task: Demonstrates state-tracking capabilities provided by sLSTM.
Multi-Query Associative Recall Task: Highlights the matrix-memory and state expansion benefits of mLSTM.
These experiments validate the significant advancements provided by the xLSTM in various challenging tasks.
Example command to run experiments:
```bash
python main.py --config experiments/config.yaml
```
Note: The provided training loop does not include early stopping or test evaluations.
If you find this implementation useful or plan to use it in your projects, please cite the official xLSTM paper.
Remember these 3 key ideas for your startup:
Enhance Efficiency with Advanced RNNs: Leveraging xLSTM can significantly improve your language modeling and state-tracking tasks, with performance that rivals or exceeds traditional LSTM and Transformer models.
Seamless Integration: The `xLSTMBlockStack` allows for smooth integration as an alternative backbone in existing projects, making the transition to using advanced neural network architectures highly efficient.
Tailored to Varied Applications: Whether your needs are in non-language or language modeling applications, xLSTM offers specialized tools (`xLSTMBlockStack` and `xLSTMLMModel`) tailored to meet diverse operational requirements effectively.
Deploying xLSTM in your projects can revolutionize your approach to machine learning and artificial intelligence, resulting in more robust and quickly adaptable solutions.
For more details, see the original source.