Today in Edworking News we want to talk about 1-bit large language models and AI's energy demands.
1-bit LLMs Could Solve AI’s Energy Demands
Large language models (LLMs), which power chatbots like ChatGPT, are growing larger and demanding more energy and computational power. This poses challenges as these models become increasingly expensive and less environmentally friendly. For LLMs to be cheap, fast, and eco-friendly, they need to operate efficiently on smaller devices, like cell phones.
Researchers are addressing this challenge by rounding off the high-precision numbers that store an LLM's parameters to just 1 or -1, dramatically reducing the model's size without significantly reducing its accuracy. Known as quantization, this technique has been pushed from 16 bits per parameter down to just 1.
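To see what that rounding looks like in practice, here is a minimal NumPy sketch of 1-bit weight quantization. The scaling rule used here (one full-precision scale equal to the mean absolute weight) is a common choice in the binarization literature, an assumption for illustration rather than the exact recipe of any specific model.

```python
import numpy as np

def binarize(w):
    """Quantize a weight matrix to {-1, +1} plus a single scale factor.

    alpha = mean(|w|) so that alpha * sign(w) approximates w; this is a
    common recipe in the binarization literature, not any one paper's method.
    """
    alpha = np.abs(w).mean()                          # one full-precision scale per matrix
    w_bin = np.where(w >= 0, 1, -1).astype(np.int8)   # 1 bit per parameter
    return alpha, w_bin

def dequantize(alpha, w_bin):
    return alpha * w_bin.astype(np.float32)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
alpha, w_bin = binarize(w)
print("mean reconstruction error:", np.abs(w - dequantize(alpha, w_bin)).mean())
```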
How to Make a 1-bit LLM
There are two main methods to achieve 1-bit LLMs:
Post-training Quantization (PTQ): This involves quantizing the parameters of a fully-trained, full-precision network.
Quantization-aware Training (QAT): This trains a network from scratch with low-precision parameters in mind.
In February, ETH Zurich, Beihang University, and the University of Hong Kong introduced BiLLM, a PTQ method. This approach approximates most parameters with 1 bit but uses 2 bits for a small set of crucial parameters, striking a balance between performance and memory efficiency. A 13-billion-parameter version of Meta's LLaMA using BiLLM required only a tenth of the memory of its full-precision counterpart.
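As a rough illustration of BiLLM's idea, the toy sketch below gives a small "salient" subset of weights a second binary pass (roughly 2 bits) and everything else a single pass (1 bit). Saliency here is plain weight magnitude for simplicity; the actual BiLLM paper uses a Hessian-based criterion and a more sophisticated residual approximation.

```python
import numpy as np

def residual_binarize(vals, passes):
    """Approximate a vector of weights with a sum of `passes` binary terms."""
    recon = np.zeros_like(vals)
    r = vals.copy()
    for _ in range(passes):
        alpha = np.abs(r).mean()            # scale for this binary pass
        b = np.where(r >= 0, 1.0, -1.0)
        recon += alpha * b
        r = vals - recon                    # next pass fits the leftover error
    return recon

def billm_like_quantize(w, salient_frac=0.1):
    """Toy BiLLM-style split: ~2 bits for 'salient' weights, 1 bit for the rest."""
    thresh = np.quantile(np.abs(w), 1 - salient_frac)
    salient = np.abs(w) >= thresh
    out = np.empty_like(w)
    out[salient] = residual_binarize(w[salient], passes=2)
    out[~salient] = residual_binarize(w[~salient], passes=1)
    return out

w = np.random.default_rng(0).normal(size=(128, 128)).astype(np.float32)
print("mean abs error:", np.abs(w - billm_like_quantize(w)).mean())
```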
1-bit LLMs vs. Larger Models
PTQ methods have specific advantages:
No need to collect training data.
A simpler, more stable process than training a quantized model from scratch.
On the other hand, QAT methods can be more accurate since quantization is integrated from the beginning. Last year, researchers from Microsoft Research Asia developed BitNet, a QAT method to produce 1-bit LLMs. BitNet models showed remarkable efficiency, being approximately 10 times more energy-efficient than full-precision models.
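To make training with 1-bit weights possible, QAT methods typically keep a full-precision copy of each weight and use a "straight-through estimator" to pass gradients through the non-differentiable rounding step. Below is a minimal PyTorch sketch of that trick. It is a simplification inspired by BitNet's BitLinear layer, not the paper's exact implementation (BitNet also normalizes and quantizes activations, which is omitted here).

```python
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    """Forward: replace weights with alpha * sign(w).
    Backward: pass gradients through unchanged (straight-through estimator)."""
    @staticmethod
    def forward(ctx, w):
        alpha = w.abs().mean()
        return alpha * torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

class BitLinearSketch(nn.Linear):
    """Linear layer that trains full-precision 'shadow' weights but always
    computes with their 1-bit quantized version."""
    def forward(self, x):
        return nn.functional.linear(x, BinarizeSTE.apply(self.weight), self.bias)

layer = BitLinearSketch(8, 4)
out = layer(torch.randn(2, 8))
out.sum().backward()   # gradients reach the full-precision weights via the STE
```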
In February, BitNet b1.58 was introduced, in which each parameter can equal -1, 0, or 1. Since three states take log2(3) ≈ 1.58 bits to encode, each parameter effectively occupies 1.58 bits of memory. A BitNet model with 3 billion parameters performed as well as a full-precision LLaMA model of the same size while using 72% less GPU memory and 94% less GPU energy.
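The ternary rounding itself is simple. Here is a sketch of the "absmean"-style rule described in the BitNet b1.58 report, with details simplified: scale the weights by their mean absolute value, then round each one to the nearest of -1, 0, and 1.

```python
import numpy as np

def ternary_quantize(w, eps=1e-8):
    """'Absmean'-style ternarization: scale by the mean |w|, then round each
    entry to the nearest of {-1, 0, 1}. Three states need log2(3) ≈ 1.58 bits."""
    gamma = np.abs(w).mean() + eps
    w_t = np.clip(np.rint(w / gamma), -1, 1).astype(np.int8)
    return gamma, w_t

w = np.random.default_rng(1).normal(size=(4, 4)).astype(np.float32)
gamma, w_t = ternary_quantize(w)
print(w_t)   # entries are only -1, 0, or 1
```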
Efficiency and Future Prospects
A recent preprint by Harbin Institute of Technology introduced OneBit, a method combining attributes of both PTQ and QAT. This hybrid approach yielded a 13-billion-parameter model that occupied only 10% of the memory required by traditional models, showcasing the potential for high performance on custom chips.
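As we read the preprint, OneBit's central trick is to split each weight matrix into a 1-bit sign matrix plus two small full-precision vectors that capture the magnitudes. The toy sketch below illustrates that kind of decomposition using a rank-1 SVD of the magnitudes; the real method additionally fine-tunes the result with knowledge distillation, which is omitted here.

```python
import numpy as np

def sign_value_sketch(w):
    """Split w into a 1-bit sign matrix and a rank-1 magnitude approximation,
    so that w ≈ sign(w) * outer(a, b). Storing 1 bit per sign plus two small
    full-precision vectors costs far less than a full-precision matrix."""
    s = np.where(w >= 0, 1, -1).astype(np.int8)
    u, sv, vt = np.linalg.svd(np.abs(w), full_matrices=False)
    a = u[:, 0] * np.sqrt(sv[0])            # leading rank-1 factor of |w|
    b = vt[0, :] * np.sqrt(sv[0])
    return s, a, b

w = np.random.default_rng(2).normal(size=(64, 64)).astype(np.float32)
s, a, b = sign_value_sketch(w)
approx = s * np.outer(a, b)
print("mean abs error:", np.abs(w - approx).mean())
```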
[Figure: Architecture of BitNet and OneBit, emphasizing memory and energy savings.]
Wei from Microsoft highlights that quantized models have several advantages: they fit on smaller chips, require less data to move between memory and processors, and therefore allow faster processing. However, current hardware can't fully exploit these benefits, because these models typically run on GPUs designed for higher-precision operations.
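The data-transfer point is easy to see with back-of-the-envelope arithmetic: weights dominate what must move between memory and the processor, so cutting bits per parameter cuts traffic almost proportionally. A quick sketch (weights only; real deployments also store scales and keep some values at higher precision, which is why reported savings land near 10x rather than a full 16x):

```python
# Weights-only, back-of-the-envelope memory for a 13-billion-parameter model.
params = 13e9
for name, bits in [("FP16", 16), ("ternary, 1.58-bit", 1.58), ("binary, 1-bit", 1)]:
    print(f"{name:>18}: {params * bits / 8 / 1e9:5.1f} GB")
# FP16 ≈ 26 GB, ternary ≈ 2.6 GB, binary ≈ 1.6 GB
```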
Remember these 3 key ideas for your startup:
Efficiency and Custom Hardware: By adopting 1-bit LLMs, businesses can significantly reduce energy consumption and hardware costs, optimizing operations for smaller devices.
Balancing Performance and Cost: Methods like BitNet (quantization-aware training) and OneBit (a PTQ/QAT hybrid) can help startups achieve high performance with minimal memory usage, enabling more scalable and sustainable applications.
Future-Proofing Operations: Startups should monitor advancements in custom hardware designed for 1-bit LLMs, ensuring they can leverage the latest technologies to maintain a competitive edge.
Edworking is the best and smartest decision for SMEs and startups to be more productive. Edworking is a FREE productivity superapp that includes all you need for work, powered by AI, connecting Task Management, Docs, Chat, Videocall, and File Management in one place. Save money today by not paying for Slack, Trello, Dropbox, Zoom, and Notion.
For more details, see the original source.