bioinformatics

Using AI to identify genetic variants in tumors with DeepSomatic

DeepSomatic is an AI-powered tool developed by Google Research to identify cancer-related mutations by analyzing a tumor's genetic sequence with higher accuracy than existing methods. Using convolutional neural networks (CNNs), the model distinguishes between inherited genetic traits and the acquired somatic variants that drive cancer progression. The tool supports multiple sequencing platforms and sample types, offering a critical resource for clinicians and researchers aiming to personalize cancer treatment through precision medicine.

## Challenges in Somatic Variant Detection

* Somatic variants are genetic mutations acquired after birth through environmental exposure or DNA replication errors, making them distinct from the germline variants found in every cell of a person's body.
* Detecting these mutations is technically difficult because tumor samples are often heterogeneous, containing a diverse set of variants at varying frequencies.
* Sequencing technologies introduce small errors that can be hard to distinguish from actual somatic mutations, especially when a mutation is present in only a small fraction of the sampled cells.

## CNN-Based Variant Calling Architecture

* DeepSomatic employs a method pioneered by DeepVariant: transforming raw genetic sequencing data into sets of multi-channel images (see the sketch at the end of this summary).
* These images represent various data points, including alignment along the chromosome, the quality of the sequence output, and other technical variables.
* The CNN processes these images to differentiate between three categories: the human reference genome, non-cancerous germline variants, and the somatic mutations driving tumor growth.
* By analyzing tumor and non-cancerous cells side by side, the model filters out sequencing artifacts that might otherwise be misidentified as mutations.

## System Versatility and Application

* The model runs in multiple modes, including "tumor-normal" (comparing a biopsy to a healthy sample) and "tumor-only," which is vital for blood cancers like leukemia, where isolating healthy cells is difficult.
* DeepSomatic is platform-agnostic: it can process data from all major sequencing technologies and adapt to different types of sample processing.
* The tool generalizes to various cancer types, even those not specifically included in its initial training sets.

## Open-Source Contributions to Precision Medicine

* Google has made the DeepSomatic tool and the CASTLE dataset (a high-quality training and evaluation set) openly available to the global research community.
* This initiative is part of a broader effort to use AI for early detection and advanced research in various cancers, including breast, lung, and gynecological cancers.
* The release aims to accelerate the development of personalized treatment plans by providing a more reliable way to identify the specific genetic drivers of an individual's disease.

By providing a more accurate and adaptable method for variant calling, DeepSomatic helps researchers pinpoint the specific drivers of a patient's cancer. It represents a significant advance in deep learning for genomics, potentially shortening the path from biopsy to targeted therapeutic intervention.
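
The pileup-image idea is easiest to see in code. Below is a minimal Python sketch of encoding aligned reads around one candidate site as a multi-channel image; the specific channels, window size, and normalization here are illustrative assumptions, not DeepSomatic's published channel set.

```python
import numpy as np

# Illustrative channels; the real DeepVariant/DeepSomatic channel set differs.
CHANNELS = ["base", "base_quality", "mapping_quality", "strand", "matches_ref"]
N_READS, WINDOW = 100, 221  # reads stacked as rows, genome positions as columns

def encode_pileup(reads) -> np.ndarray:
    """Encode aligned reads around a candidate variant as a multi-channel image.

    `reads` is assumed to be a list of reads, each a list of per-position tuples:
    (base, base_quality, mapping_quality, strand, matches_reference).
    """
    img = np.zeros((N_READS, WINDOW, len(CHANNELS)), dtype=np.float32)
    for row, read in enumerate(reads[:N_READS]):
        for col, (base, qual, mapq, strand, match) in enumerate(read[:WINDOW]):
            img[row, col] = (
                "ACGT".index(base) / 3.0,  # base identity scaled to [0, 1]
                min(qual, 60) / 60.0,      # base-call quality
                min(mapq, 60) / 60.0,      # read mapping quality
                1.0 if strand == "+" else 0.0,
                1.0 if match else 0.0,     # does the base match the reference?
            )
    return img

# A CNN then classifies each image into one of the three categories above:
LABELS = ["reference", "germline_variant", "somatic_variant"]
```

In tumor-normal mode, a pileup from the matched healthy sample would presumably be encoded the same way and presented to the network alongside the tumor image, turning the side-by-side artifact filtering into a plain image-classification problem.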

Smarter nucleic acid design with NucleoBench and AdaBeam

Google Research and Move37 Labs have introduced NucleoBench, a comprehensive open-source benchmark for nucleic acid design, alongside AdaBeam, a high-performing new optimization algorithm. While AI models have become highly proficient at predicting the biological properties of DNA and RNA, generating optimal sequences within massive search spaces, such as the $2 \times 10^{120}$ possible variations for a 5' UTR, remains a significant hurdle. By standardizing evaluation across 16 distinct biological tasks, this research identifies AdaBeam as a superior method that scales effectively to the large models required for modern drug discovery.

## Standardizing the Optimization Pipeline

The process of computational nucleic acid design typically follows a five-step workflow: data collection, training a predictive model, generating candidate sequences (the design step), wet-lab validation, and iterative retraining. NucleoBench focuses specifically on the design step, which has historically lacked standardized evaluation.

* Most existing benchmarks rely on decades-old methods like simulated annealing or vanilla genetic algorithms.
* Traditional algorithms often treat predictive models as "black boxes," failing to leverage internal model signals to guide the search.
* The vastness of genomic search spaces makes brute-force optimization impossible, necessitating more intelligent, model-aware generation strategies (see the worked calculation below).

## The NucleoBench Framework

NucleoBench is the first large-scale benchmark designed to compare gradient-free and gradient-based design algorithms under identical conditions. The framework encompasses over 400,000 experiments to ensure statistical rigor across diverse biological challenges.

* **Algorithm Categories**: It compares gradient-free methods (like directed evolution), which are simple but ignore model internals, against gradient-based methods (like FastSeqProp), which use the model's internal "direction of steepest improvement" to find better sequences.
* **Task Diversity**: The 16 tasks include controlling gene expression in specific cell types (liver or neuronal), maximizing transcription factor binding, and improving chromatin accessibility.
* **Scale**: The benchmark includes long-range DNA sequence challenges using large models like Enformer, which are computationally demanding but critical for understanding complex genomic interactions.

## AdaBeam's Hybrid Optimization Performance

Drawing on insights from the NucleoBench evaluation, the researchers developed AdaBeam, a hybrid algorithm that combines the strengths of various optimization strategies (a generic sketch of this style of search follows this summary).

* **Success Rate**: AdaBeam outperformed existing algorithms on 11 of the 16 benchmark tasks.
* **Efficiency and Scaling**: Unlike many gradient-based methods that struggle with computational overhead, AdaBeam scales well as sequences grow longer and predictive models grow more complex.
* **Methodology**: It is a hybrid approach, using sophisticated search techniques to navigate the sequence space more effectively than "vanilla" algorithms developed before the era of deep learning.

The researchers have made AdaBeam and the NucleoBench repository freely available to the scientific community. By providing a standardized environment for testing, they aim to accelerate the development of next-generation treatments, including more stable mRNA vaccines and precise CRISPR gene therapies.
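
To make the search-space figure concrete: $2 \times 10^{120}$ is roughly the number of distinct sequences for a 200-nucleotide 5' UTR, since each position can hold any of four bases. A one-line check:

```python
# Each of ~200 positions in a 5' UTR can be one of 4 bases (A, C, G, T),
# so the design space holds 4**200 sequences.
print(f"{4 ** 200:.2e}")  # 2.58e+120, i.e. the ~2 x 10^120 figure quoted above
```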

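AdaBeam's exact procedure is detailed in the paper rather than reproduced here; the sketch below shows the general shape of a gradient-free, beam-search-style design loop of the kind NucleoBench evaluates, with the predictive model treated as a black-box scorer. All names and hyperparameters (`score_batch`, `beam_width`, and so on) are hypothetical.

```python
import random
from typing import Callable

BASES = "ACGT"

def mutate(seq: str, n_children: int, rng: random.Random) -> list[str]:
    """Propose child sequences via single-base substitutions at random positions."""
    children = []
    for _ in range(n_children):
        pos = rng.randrange(len(seq))
        base = rng.choice([b for b in BASES if b != seq[pos]])
        children.append(seq[:pos] + base + seq[pos + 1:])
    return children

def beam_search_design(
    score_batch: Callable[[list[str]], list[float]],  # predictive model, a black box
    seq_len: int = 50,
    beam_width: int = 8,
    children_per_parent: int = 16,
    n_rounds: int = 100,
    seed: int = 0,
) -> str:
    """Keep the `beam_width` highest-scoring sequences each round; return the best."""
    rng = random.Random(seed)
    beam = ["".join(rng.choice(BASES) for _ in range(seq_len)) for _ in range(beam_width)]
    for _ in range(n_rounds):
        candidates = list(beam)  # parents survive into the next round
        for parent in beam:
            candidates += mutate(parent, children_per_parent, rng)
        ranked = sorted(zip(score_batch(candidates), candidates), reverse=True)
        beam = [seq for _, seq in ranked[:beam_width]]
    return beam[0]

# Toy objective: maximize GC content. A real run would call a trained model
# (e.g., Enformer for the long-range DNA tasks) instead of this lambda.
best = beam_search_design(lambda seqs: [s.count("G") + s.count("C") for s in seqs])
print(best)
```

A gradient-based method like FastSeqProp would differ in the proposal step, nudging a continuous relaxation of the sequence along the model's gradient instead of sampling random substitutions; hybrid methods such as AdaBeam combine elements of both families.
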
Highly accurate genome polishing with DeepPolisher: Enhancing the foundation of genomic research

DeepPolisher is a deep-learning-based genome polishing tool that corrects base-level errors in draft assemblies with high precision, significantly enhancing the accuracy of genomic research. By leveraging a Transformer architecture to analyze sequencing data, the tool reduces total assembly errors by 50% and insertion/deletion (indel) errors by 70%. This advancement is critical for creating near-perfect reference genomes, such as the Human Pangenome Reference, which are essential for identifying disease-causing variants and understanding human evolution.

## Limitations of Current Sequencing Technologies

* Genome assembly relies on reading nucleotides (A, T, G, and C), but the microscopic scale of these base pairs makes accurate, large-scale sequencing difficult.
* Short-read sequencing provides high signal strength but is limited to a few hundred nucleotides, because identical DNA clusters eventually desynchronize and blend their signals together.
* Long-read technologies can sequence tens of thousands of nucleotides but initially suffered from high error rates (~10%); tools like DeepConsensus have reduced this to 0.1%, yet further refinement is necessary for high-fidelity reference genomes.
* Even a 0.1% error rate leaves roughly 3 million inaccuracies across the 3-billion-nucleotide human genome, which can cause researchers to miss critical genetic markers or misidentify proteins.

## DeepPolisher Architecture and Training

* DeepPolisher is an open-source pipeline adapted from the DeepConsensus model, utilizing a Transformer-based neural network.
* The model was trained using a human cell line from the Personal Genomes Project whose assembly is estimated to be 99.99999% accurate, providing a "ground truth" for identifying and correcting errors.
* The system takes sequenced bases, their associated quality scores, and the orientation of the DNA strands as input, learning complex error patterns that traditional methods miss.
* By combining sequence reads from multiple DNA molecules of the same individual, the tool iteratively "polishes" the assembly to reach the accuracy required for reference-grade data (the quality-value arithmetic behind these figures is sketched below).

## Impact on Genomic Accuracy and Gene Discovery

* The tool's ability to reduce indel errors by 70% is particularly significant, as these errors often interfere with the identification of protein-coding genes.
* DeepPolisher has already been integrated into major research efforts, including enhancement of the Human Pangenome Reference, providing a more robust foundation for clinical diagnostics.
* Improved assembly accuracy allows better mapping of highly repetitive regions of the genome, which were previously difficult to sequence and assemble confidently.

For researchers and bioinformaticians, DeepPolisher represents a vital step in moving from "draft" genomes to high-fidelity references. Adopting it in assembly pipelines can substantially improve the reliability of variant calling and gene annotation, especially in complex clinical and evolutionary studies.
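
The accuracy numbers quoted above are easier to compare on the Phred quality-value (QV) scale used throughout genomics, where QV = -10 * log10(error rate). A quick sanity check of the post's figures:

```python
import math

def phred_qv(error_rate: float) -> float:
    """Phred-scaled quality value: QV = -10 * log10(error_rate)."""
    return -10 * math.log10(error_rate)

print(f"{phred_qv(1e-3):.1f}")  # 30.0: the ~0.1% read error rate after DeepConsensus
print(f"{phred_qv(1e-7):.1f}")  # 70.0: the 99.99999%-accurate training assembly
print(f"{10 * math.log10(2):.2f}")  # 3.01: QV points gained by halving total errors
```

On this scale, the reported 50% reduction in total errors corresponds to roughly a 3-point QV gain, and the 70% indel reduction to about 5 points (10 * log10(1/0.3) ≈ 5.2).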

Teaching machines the language of biology: Scaling large language models for next-generation single-cell analysis

Cell2Sentence-Scale (C2S-Scale) is a new family of open-source large language models designed to transform complex single-cell transcriptomic data into a text format that natural-language models can process. By representing gene expression profiles as "cell sentences," the framework lets researchers use general-purpose LLM architectures to "read" and "write" biological information. This approach simplifies single-cell analysis, enabling conversational queries and automated data interpretation that were previously limited to specialized tools and expert users.

### The Cell2Sentence Mapping Method

* Translates single-cell RNA sequencing (scRNA-seq) measurements into sequences of text by ordering gene names according to their expression levels (see the sketch at the end of this summary).
* Enables the integration of cellular data with text-based biological context, such as cell types, experimental metadata, and scientific literature.
* Leverages the existing vocabulary of biology (gene names and functions) to make high-dimensional data interpretable by standard language-model tokenizers.

### C2S-Scale Model Architecture and Training

* Built on Google's Gemma open model family, keeping the original architecture to benefit from existing scalability and infrastructure.
* Trained on a dataset exceeding 1 billion tokens derived from real-world transcriptomic data and biological metadata.
* Offered in a range of sizes from 410 million to 27 billion parameters, letting researchers choose between computational efficiency for exploratory work and high performance for complex tasks.

### Functional Applications in Biology

* **Conversational Querying:** Researchers can interrogate data in natural language, asking questions such as how a T cell might respond to a particular cancer therapy.
* **Automated Interpretation:** The models can generate biological summaries of experiments, describing everything from individual cell types to the characteristics of entire tissues.
* **Predictive Tasks:** The framework handles diverse tasks, including cell type annotation and the generation of synthetic cells or tissues for research simulations.

### Performance and Biological Scaling Laws

* Biological language models follow predictable scaling laws: performance on tasks like cell type annotation improves as model size increases.
* Larger models achieve higher gene-overlap and semantic-similarity scores when interpreting datasets than smaller versions.
* Smaller models remain highly effective for parameter-efficient fine-tuning in resource-constrained environments.

C2S-Scale is available as an open-source resource on GitHub and Hugging Face, offering a flexible toolkit for applying large language models to next-generation genomic discovery.
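
The cell-sentence transformation itself is simple enough to sketch. The snippet below orders gene names by expression, which is the core of the mapping described above; the top-k truncation and the handling of unexpressed genes are illustrative assumptions rather than the exact published recipe.

```python
import numpy as np

def cell_to_sentence(expr: np.ndarray, gene_names: list[str], top_k: int = 100) -> str:
    """Render one cell's expression profile as a space-separated 'cell sentence':
    gene names ordered from highest to lowest expression."""
    order = np.argsort(expr)[::-1]                        # descending expression
    expressed = [i for i in order if expr[i] > 0][:top_k]
    return " ".join(gene_names[i] for i in expressed)

# Toy example: one cell measured over three genes.
sentence = cell_to_sentence(np.array([5.0, 0.0, 12.0]), ["CD3D", "HBB", "GAPDH"])
print(sentence)  # "GAPDH CD3D": unexpressed HBB is dropped from the sentence
```

Because the output is ordinary text, it can be fed straight to a Gemma-style tokenizer, which is what lets C2S-Scale reuse a general-purpose LLM architecture unchanged.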