sequence-modeling

2 posts

google

Titans + MIRAS: Helping AI have long-term memory

Google Research has introduced Titans, a new architecture, and MIRAS, a theoretical framework, designed to overcome the computational limitations of Transformers while maintaining high-fidelity long-term memory. Both rely on "test-time memorization": the model updates the parameters of its memory module in real time as it processes data, without offline retraining. By combining the speed of linear recurrent neural networks (RNNs) with the accuracy of attention mechanisms, the approach lets models handle extremely long contexts such as genome-scale sequences or full documents.

## Titans and Neural Long-Term Memory

* Unlike traditional RNNs, which compress context into fixed-size vectors or matrices, Titans uses a multi-layer perceptron (MLP) as a dedicated long-term memory module.
* This deep neural memory provides significantly higher expressive power, allowing the model to synthesize entire narratives rather than store passive snapshots.
* The architecture separates memory into two distinct modules: an attention mechanism for precise short-term context and the MLP for summarizing long-term information.

## The Gradient-Based Surprise Metric

* Titans employs a "surprise metric" to decide which information is important enough to store, mirroring the human brain's tendency to remember unexpected events.
* The model computes an internal error signal, the gradient of its memory loss: a large gradient means the new input is anomalous or context-breaking and should be prioritized for long-term storage.
* A momentum term tracks the flow of context over time, so that relevant follow-up information is captured even when individual tokens are not surprising on their own.
* To manage memory capacity over extremely long sequences, an adaptive weight decay acts as a forgetting gate, discarding information that is no longer useful. (The sketch after this summary shows how these pieces combine into a single update rule.)

## MIRAS: A Unified Framework for Sequence Modeling

* MIRAS provides a theoretical blueprint that views all major sequence models, including Transformers and linear RNNs, as different forms of associative memory modules.
* The framework characterizes a sequence model by four design choices: the memory architecture (e.g., a vector, matrix, or MLP), the attentional bias (the internal objective the memory optimizes), the retention gate that balances new information against old, and the learning algorithm used to update the memory.
* This approach shifts AI modeling toward real-time adaptation, where the model actively folds specific new details into its core knowledge as data streams in.

These advancements suggest a shift away from static context windows toward dynamic systems capable of lifelong learning. For developers working with large-scale data, the Titans architecture provides a practical tool for scaling performance, while the MIRAS framework offers a roadmap for designing next-generation models that adapt instantly to new information.
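To make the surprise-driven update concrete, here is a minimal sketch of one test-time memorization step. It is illustrative rather than faithful: the paper's memory is a deep MLP, whereas this sketch uses a single linear memory matrix so the gradient of the associative loss $\|Mk - v\|^2$ stays analytic, and the hyperparameters `eta`, `theta`, and `alpha` are placeholder values.

```python
import numpy as np

def titans_style_memory_update(M, S, k, v, eta=0.9, theta=0.1, alpha=0.01):
    """One test-time memorization step on a linear associative memory.

    M : (d, d) memory matrix mapping keys to values (the paper uses an MLP).
    S : (d, d) running "surprise" (momentum over past gradients).
    k, v : (d,) key and value derived from the current token.
    """
    # Associative loss l(M) = ||M @ k - v||^2 has the analytic gradient
    # 2 * (M @ k - v) outer k. A large error means a "surprising" token.
    err = M @ k - v
    grad = 2.0 * np.outer(err, k)

    # Momentum over surprise: past surprise decays by eta, new gradient flows in.
    S = eta * S - theta * grad

    # Weight decay on the old memory acts as the forgetting gate.
    M = (1.0 - alpha) * M + S
    return M, S

# Usage: stream (key, value) pairs through the memory at inference time.
d = 8
rng = np.random.default_rng(0)
M, S = np.zeros((d, d)), np.zeros((d, d))
for _ in range(100):
    k, v = rng.normal(size=d), rng.normal(size=d)
    M, S = titans_style_memory_update(M, S, k, v)
```

The `(1 - alpha)` factor is the forgetting gate: with `alpha = 0` the memory only accumulates, while raising it makes stale associations fade over very long streams.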
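Along the same lines, MIRAS's four design choices can be read as slots in a generic update loop. The sketch below is an assumption for illustration, not the paper's formalism: each axis becomes a function argument, and a momentum-free version of the linear update above falls out as one instantiation.

```python
import numpy as np

def miras_step(M, k, v, attentional_bias, retention_gate, learn):
    """One generic MIRAS-style step. The four design choices are:
    (1) memory architecture: whatever structure M has (vector, matrix, MLP),
    (2) attentional bias: an objective scoring how well M recalls v from k,
    (3) retention gate: how much of the old memory survives each step,
    (4) learning algorithm: how the objective's gradient updates M."""
    grad = attentional_bias(M, k, v)  # gradient of the chosen objective
    M = retention_gate(M)             # e.g. weight decay, or identity
    return learn(M, grad)             # e.g. plain gradient descent

# One instantiation: matrix memory, squared-error bias, decay gate, SGD.
l2_bias = lambda M, k, v: 2.0 * np.outer(M @ k - v, k)
decay = lambda M, alpha=0.01: (1.0 - alpha) * M
sgd = lambda M, g, theta=0.1: M - theta * g

d = 8
M = np.zeros((d, d))
rng = np.random.default_rng(1)
for _ in range(100):
    k, v = rng.normal(size=d), rng.normal(size=d)
    M = miras_step(M, k, v, l2_bias, decay, sgd)
```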

google

Smarter nucleic acid design with NucleoBench and AdaBeam

Google Research and Move37 Labs have introduced NucleoBench, a comprehensive open-source benchmark for nucleic acid design, alongside AdaBeam, a high-performing new optimization algorithm. AI models have become highly proficient at predicting the biological properties of DNA and RNA, but generating optimal sequences within massive search spaces (such as the roughly $2 \times 10^{120}$ possible variations of a 5' UTR) remains a significant hurdle. By standardizing evaluation across 16 distinct biological tasks, this research identifies AdaBeam as a superior method that scales to the large models required for modern drug discovery.

## Standardizing the Optimization Pipeline

Computational nucleic acid design typically follows a five-step workflow: data collection, training a predictive model, generating candidate sequences (the design step), wet-lab validation, and iterative retraining. NucleoBench focuses specifically on the design step, which has historically lacked standardized evaluation.

* Most existing benchmarks rely on decades-old methods such as simulated annealing or vanilla genetic algorithms.
* Traditional algorithms often treat predictive models as "black boxes," failing to exploit internal model signals such as gradients to guide the search.
* The vastness of genomic search spaces makes brute-force optimization impossible, so more intelligent, model-aware generation strategies are needed.

## The NucleoBench Framework

NucleoBench is the first large-scale benchmark to compare gradient-free and gradient-based design algorithms under identical conditions. It encompasses over 400,000 experiments to ensure statistical rigor across diverse biological challenges.

* **Algorithm Categories**: It compares gradient-free methods (like directed evolution), which are simple but ignore model internals, against gradient-based methods (like FastSeqProp), which follow the model's internal "direction of steepest improvement" to find better sequences. (Both styles are sketched in code after this summary.)
* **Task Diversity**: The 16 tasks include controlling gene expression in specific cell types (liver or neuronal), maximizing transcription factor binding, and improving chromatin accessibility.
* **Scale**: The benchmark includes long-range DNA sequence challenges using large models such as Enformer, which are computationally demanding but critical for understanding complex genomic interactions.

## AdaBeam’s Hybrid Optimization Performance

Drawing on insights from the NucleoBench evaluation, the researchers developed AdaBeam, a hybrid algorithm that combines the strengths of several optimization strategies.

* **Success Rate**: AdaBeam outperformed existing algorithms on 11 of the 16 benchmark tasks.
* **Efficiency and Scaling**: Unlike many gradient-based methods, which struggle with computational overhead, AdaBeam scales gracefully as sequences grow longer and predictive models grow more complex.
* **Methodology**: It is a hybrid approach, using more sophisticated search techniques to navigate the sequence space than the "vanilla" algorithms developed before the deep learning era.

The researchers have made AdaBeam and the NucleoBench repository freely available to the scientific community. By providing a standardized testing environment, they aim to accelerate the development of next-generation treatments, including more stable mRNA vaccines and more precise CRISPR gene therapies.
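For intuition, here is a minimal gradient-free design loop in the spirit of directed evolution. Everything here is a toy stand-in: `gc_score` substitutes for a trained predictive model, and the mutation scheme is the simplest possible point mutation.

```python
import random

def directed_evolution(score, seq, n_rounds=200, n_children=16, alphabet="ACGT"):
    """Gradient-free design sketch: mutate the current best sequence and keep
    the best child. `score` is any black-box model mapping sequence -> fitness."""
    best, best_score = seq, score(seq)
    for _ in range(n_rounds):
        children = []
        for _ in range(n_children):
            pos = random.randrange(len(best))
            child = best[:pos] + random.choice(alphabet) + best[pos + 1:]
            children.append(child)
        top = max(children, key=score)
        if score(top) > best_score:
            best, best_score = top, score(top)
    return best

# Toy stand-in for a trained model: reward GC content (illustrative only).
gc_score = lambda s: (s.count("G") + s.count("C")) / len(s)
print(directed_evolution(gc_score, "A" * 50))
```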
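Gradient-based methods instead relax the discrete sequence into a continuous one so the predictive model's gradient can steer the search. The sketch below is a heavily simplified nod to that family (FastSeqProp itself adds straight-through sampling and normalization tricks not shown here); the `gc` scorer is again a differentiable toy stand-in, and the A/C/G/T column order is an assumption.

```python
import torch

def gradient_based_design(score_model, seq_len=50, steps=300, lr=0.1):
    """Relaxed-sequence design sketch: keep logits over A/C/G/T at each
    position and ascend the scoring model's gradient through a softmax."""
    logits = torch.zeros(seq_len, 4, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        probs = torch.softmax(logits, dim=-1)  # soft one-hot sequence
        loss = -score_model(probs)             # ascend the predicted property
        opt.zero_grad()
        loss.backward()
        opt.step()
    return logits.argmax(dim=-1)               # discretize at the end

# Toy differentiable stand-in for a trained model: mean C+G probability,
# assuming columns are ordered A, C, G, T.
gc = lambda probs: probs[:, 1:3].mean()
print(gradient_based_design(gc))
```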
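AdaBeam's exact procedure is in the paper; as a loose illustration of what a hybrid "beam search over mutations" can look like, the sketch below keeps a beam of the top-scoring candidates and expands each with random point mutations every round. This is an assumption-labeled toy, not the published algorithm.

```python
import random

def beam_search_design(score, seq, beam_width=8, n_rounds=50,
                       n_mutations=4, alphabet="ACGT"):
    """Hybrid-style search sketch: maintain a beam of top candidates, expand
    each with point mutations, and keep the best beam_width sequences."""
    beam = [seq]
    for _ in range(n_rounds):
        candidates = set(beam)
        for parent in beam:
            for _ in range(n_mutations):
                pos = random.randrange(len(parent))
                child = parent[:pos] + random.choice(alphabet) + parent[pos + 1:]
                candidates.add(child)
        beam = sorted(candidates, key=score, reverse=True)[:beam_width]
    return beam[0]

gc_score = lambda s: (s.count("G") + s.count("C")) / len(s)
print(beam_search_design(gc_score, "A" * 50))
```

Keeping a beam rather than a single best sequence is what distinguishes this family from plain directed evolution: several promising lineages are explored in parallel, which helps on rugged fitness landscapes.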