DevSecOps-as-a-Service on Oracle Cloud Infrastructure by Data Intensity

Data Intensity’s DevSecOps-as-a-Service provides a solution for organizations that require the granular control of GitLab Self-Managed but want to eliminate the operational burden of infrastructure maintenance. By hosting dedicated GitLab instances on Oracle Cloud Infrastructure (OCI), the service combines the security and customization of a self-managed environment with the convenience of a fully managed platform. This partnership lets teams focus on software delivery while relying on expert management for high availability and disaster recovery.

### The Benefits of GitLab Self-Managed

* Offers complete ownership of data residency and instance configuration to meet strict regulatory and compliance requirements.
* Enables deep customization and integration possibilities that are often restricted in standard SaaS environments.
* Addresses the challenges of manual server management, upgrades, and high-availability scaling by offloading these tasks to a managed provider.

### Managed Service Features and Support

* Provides 24/7 monitoring, alerting, and expert technical support for standalone GitLab instances.
* Includes scheduled quarterly patching performed during customer-specified maintenance windows to minimize disruption.
* Ensures business continuity through automated backups and professional disaster recovery protection.
* Uses tiered architectures designed to scale based on specific user capacities and recovery time objectives.

### Infrastructure Optimization via OCI

* Delivers significant cost efficiency, with organizations typically realizing 40-50% reductions in infrastructure spending compared to other hyperscalers.
* Supports diverse deployment models, including Public Cloud, Government Cloud, EU Sovereign Cloud, and dedicated infrastructure behind a corporate firewall.
* Maintains consistent pricing and operational tooling across hybrid, global, and regulated environments.
### Implementation and Migration

* Data Intensity offers optional migration services to transition existing code repositories and configurations to the OCI environment seamlessly.
* The service is specifically designed for organizations with predictable cost requirements and those lacking in-house infrastructure expertise.
* Deployment planning involves tailored consultations to match specific compliance and data residency needs with OCI’s global region availability.

This managed service is a recommended path for enterprise teams that need to prioritize data sovereignty and flexibility without sacrificing the speed of a turnkey solution. Organizations currently using or planning to adopt OCI can leverage this service to standardize their DevSecOps workflows while achieving significant infrastructure savings.

From Single-Node to Multi-GPU Clusters: How Discord Made Distributed Compute Easy for ML Engineers

Discord’s machine learning infrastructure reached a critical scaling limit as models and datasets grew beyond the capacity of single-machine systems. To overcome these bottlenecks, the engineering team transitioned to a distributed compute architecture built on the Ray framework and a suite of custom orchestration tools. This evolution moved Discord from ad-hoc experimentation to a robust production platform, yielding significant performance gains such as a 200% improvement in business metrics for Ads Ranking.

### Overcoming Hardware and Data Bottlenecks

* Initial ML systems relied on simple classifiers that eventually evolved into complex models serving hundreds of millions of users.
* Training requirements shifted from single-machine tasks to workloads requiring multiple GPUs.
* Datasets expanded to the point where they could no longer fit on individual machines, creating a need for distributed storage and processing.
* Infrastructure growth struggled to keep pace with the exponential increase in computational demands.

### Building a Ray-Based ML Platform

* The Ray framework was adopted as the foundation for distributed computing to simplify complex ML workflows.
* Discord integrated Dagster with KubeRay to manage the orchestration of production-grade machine learning pipelines.
* Custom CLI tooling was developed to lower the barrier to entry for engineers, focusing heavily on developer experience.
* A specialized observability layer called X-Ray was implemented to provide deep insights into distributed system performance.

By prioritizing developer experience and creating accessible abstractions over raw compute power, Discord successfully industrialized its ML operations. For organizations facing similar scaling hurdles, the focus should be on building a unified platform that turns the complexity of distributed systems into a seamless tool for modelers.

Introducing GitLab Credits

GitLab is transitioning from seat-based pricing to a usage-based model with the introduction of GitLab Credits, a virtual currency designed for the GitLab Duo Agent Platform. This shift addresses the limitations of traditional licensing, which often creates "AI haves and have-nots" by making access too expensive for light or occasional users. By pooling resources across an entire organization, GitLab aims to provide equitable access to agentic AI for every developer while ensuring costs align with actual consumption.

## The Shift from Seat-Based to Usage-Based AI

* Traditional seat-based models are poorly suited for agentic AI, which can be triggered by background SDLC events rather than just direct user interaction.
* The credit model allows every member of a Premium or Ultimate organization to use AI capabilities without requiring an individual "AI seat."
* Usage-based pricing automatically offsets the costs of power users against lighter users, lowering the total cost of ownership for the organization.

## Mechanics of GitLab Credits

* Credits function as a pooled resource consumed by both synchronous interactions (like Agentic Chat in the IDE) and asynchronous background tasks.
* Supported capabilities include foundational agents (Security, Planner, Data Analyst) and specific workflows such as Code Review and CI/CD pipeline fixing.
* The system integrates with external models like Anthropic Claude Code and OpenAI Codex, as well as custom agents published in the GitLab AI Catalog.
* Each credit has an on-demand list price of $1, with volume discounts available for enterprise customers who sign up for annual commitments.

## Governance and Usage Controls

* Administrators can monitor consumption through two dedicated dashboards: a financial oversight portal for billing managers and an operational monitoring view for administrators.
* Granular controls allow organizations to enable or disable Duo Agent Platform access for specific teams or projects to prevent unexpected credit depletion.
* Proactive email alerts are triggered when consumption reaches 50%, 80%, and 100% of committed monthly credits.
* A sizing calculator is available to help organizations estimate their monthly credit requirements based on patterns observed during the platform's beta period.

## Transitioning and Promotional Access

* Existing GitLab Duo Pro and Duo Enterprise customers can roll over their current seat investments into GitLab Credits with volume-based discounts.
* As part of a limited-time promotion, GitLab is providing $12 in monthly credits per user for Premium subscribers and $24 per user for Ultimate subscribers.
* Self-managed and GitLab Dedicated customers will gain access to these credit-based features starting with the 18.8 and 18.9 releases.

For organizations looking to scale AI across the software development lifecycle, the credit-based model offers a more flexible and cost-effective path than rigid seat licenses. Current Premium and Ultimate subscribers should use their monthly promotional credits to baseline their usage before committing to larger annual credit bundles.

GitLab extends Omnibus package signing key expiration to 2028

GitLab has extended the expiration of the GNU Privacy Guard (GPG) key used for signing Omnibus packages from February 2026 to February 16, 2028. This extension ensures the continued integrity of packages created within CI pipelines while remaining compliant with GitLab’s internal security policies regarding key exposure. By extending the current key rather than rotating to a new one, GitLab aims to minimize administrative overhead for users who would otherwise have to replace their trusted keys.

### Purpose and Scope of the Key Extension

* The GPG key is dedicated to signing Omnibus packages to prevent tampering; it is distinct from the keys used for repository metadata (apt/yum) and the GitLab Runner.
* GitLab periodically extends the expiration of these keys to limit the potential impact of a compromise while adhering to modern security standards.
* The decision to extend rather than rotate was made specifically to be less disruptive to the user base, as rotation mandates a manual replacement of the trusted key on all client systems.

### Impact and Required Actions

* Users who do not verify package signatures, or have not configured their package managers to do so, require no action to continue installing updates.
* Administrators who validate Omnibus package signatures must update their local copies of the public key to reflect the 2028 expiration date.
* The updated key can be found on GPG keyservers by searching for the fingerprint `98BF DB87 FCF1 0076 416C 1E0B AD99 7ACC 82DD 593D` or the email `packages@gitlab.com`.
* A direct download of the public key is also available through the official GitLab packages repository URL.

Organizations that verify package signatures should refresh their trusted GPG keys as soon as possible to ensure seamless updates ahead of the original 2026 deadline. If technical issues arise during the update process, GitLab recommends opening an issue in the omnibus-gitlab tracker for support.
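Refreshing the trusted copy boils down to re-fetching the key and checking its fingerprint and expiration. A sketch using standard GnuPG commands, with the fingerprint from the announcement; the keyserver hostname is an example (the post does not name one), and any keyserver carrying the key, or the direct download from GitLab's packages repository, works equally well:

```shell
# Fetch the refreshed GitLab Omnibus package-signing key so the local
# copy carries the new February 2028 expiration. Keyserver is an example.
gpg --keyserver hkps://keyserver.ubuntu.com \
    --recv-keys 98BFDB87FCF10076416C1E0BAD997ACC82DD593D

# Confirm the fingerprint matches the published one and that the
# expiration date now reads 2028.
gpg --fingerprint packages@gitlab.com

# On RPM-based systems that verify signatures, re-import the exported
# key so rpm/dnf trust the extended version:
#   gpg --export -a packages@gitlab.com > gitlab-packages.asc
#   rpm --import gitlab-packages.asc
```

Because the key ID is unchanged, apt and yum configurations that reference it keep working; only the cached public-key material needs updating.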

GitLab backs 99.9% availability SLA with service credits

GitLab has introduced a 99.9% availability service-level agreement (SLA) for Ultimate customers on GitLab.com and GitLab Dedicated. This commitment is backed by service credits to ensure that mission-critical DevSecOps workflows remain uninterrupted and to align GitLab's interests with customer business outcomes. By formalizing this uptime guarantee, GitLab aims to provide a reliable foundation for high-velocity teams that depend on continuous code pushes and automated deployments.

## Scope of Covered Services

The SLA covers the core platform experiences essential to daily software delivery workflows:

* Issues and merge requests management.
* Git operations, including push, pull, and clone actions via both HTTPS and SSH protocols.
* Operations within the Container Registry and Package Registry.
* API requests associated with the aforementioned core services.

## Defining and Measuring Downtime

Service availability is tracked via automated monitoring across multiple geographic locations to reflect actual user experience.

* A "downtime minute" is triggered when 5% or more of valid customer requests result in server errors.
* Server errors are strictly defined as HTTP 5xx status codes or connection timeouts exceeding 30 seconds.
* While monitoring focuses on server-side failures, GitLab will also holistically review claims for issues that might not trigger 5xx errors, such as Sidekiq job processing outages or specific application bugs.

## Service Credit Claim Procedure

To maintain accountability, GitLab has established a formal process for Ultimate customers to recoup costs during outages:

* Customers must submit a support request at support.gitlab.com within 30 days of the end of the month in which the downtime occurred.
* The GitLab team validates the claim against internal and external monitoring data.
* Validated service credits are applied directly to the customer's next issued invoice, with the credit amount scaled based on the severity of the availability shortfall.

Ultimate customers should familiarize their operations teams with these performance thresholds and the 30-day claim window to ensure they are adequately compensated during significant service disruptions.
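The downtime definition above is concrete enough to check against your own request logs. A minimal sketch of the stated rule (a minute counts as downtime when 5% or more of valid requests fail with a 5xx or a >30s timeout); the function and field names are illustrative, not GitLab's internal schema:

```python
# Sketch of the SLA arithmetic described in the post. Each minute is
# summarized as (total_valid_requests, failed_requests), where "failed"
# means HTTP 5xx or a connection timeout exceeding 30 seconds.

def is_downtime_minute(total_requests: int, failed_requests: int) -> bool:
    if total_requests == 0:
        return False
    return failed_requests / total_requests >= 0.05

def monthly_availability(minutes: list[tuple[int, int]]) -> float:
    down = sum(is_downtime_minute(t, f) for t, f in minutes)
    return 1 - down / len(minutes)

# A 30-day month has 43,200 minutes, so 99.9% allows roughly 43
# downtime minutes. Here 50 minutes see an 8% error rate:
month = [(1000, 0)] * 43_150 + [(1000, 80)] * 50
avail = monthly_availability(month)
print(f"{avail:.5f}")   # falls just below the 0.999 target
```

Running a check like this against independent monitoring data makes it easy to decide whether an incident is worth a claim within the 30-day window.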

What’s new in Git 2.53.0?

Git 2.53.0 introduces significant performance and maintenance improvements, specifically targeting large repositories and complex history-rewriting workflows. Key updates include compatibility between geometric repacking and partial clones, as well as more granular control over commit signatures during imports. These enhancements collectively move Git toward more efficient repository management and better data integrity for modern development environments.

## Geometric Repacking Support with Promisor Remotes

* Git uses repacking to consolidate loose objects into packfiles, with the "geometric" strategy maintaining a size-based progression to minimize the computational overhead of "all-into-one" repacks.
* Previously, geometric repacking was incompatible with partial clones because it could not correctly identify or manage "promisor" packfiles, which contain the metadata for objects expected to be backfilled from a remote.
* The 2.53.0 release enables geometric repacking to process promisor packfiles separately, preserving the promisor marker and preventing the tool from crashing when used within a partial-clone repository.
* This fix removes a major blocker for making the geometric strategy the default repacking method for all Git repositories.

## Preserving Valid Signatures in git-fast-import(1)

* The `git-fast-import` tool, a backend for high-volume data ingestion and history rewriting, previously lacked the nuance to handle commit signatures during partial repository edits.
* A new `strip-if-invalid` mode has been added to the `--signed-commits` option to solve the "all-or-nothing" problem where users had to choose between keeping broken signatures or stripping valid ones.
* This feature allows Git to automatically detect which signatures remain valid after a rewrite and strip only those that no longer match their modified commits.
* This provides a foundation for tools like `git-filter-repo` to preserve the chain of trust for unchanged commits during migration or cleanup operations.

## Expanded Data in git-repo-structure

* The `structure` subcommand of `git-repo`, intended as a native alternative to the `git-sizer` utility, now provides deeper insights into repository scaling.
* The command now reports the total inflated size and actual on-disk size of all reachable objects, categorized by type: commits, trees, blobs, and tags.
* These metrics are essential for administrators managing massive repositories, as they help identify which object types are driving disk consumption and impacting performance.

These updates reflect Git’s continued focus on scalability and developer experience, particularly for organizations managing massive codebases. Users of partial clones and repository migration tools should consider upgrading to 2.53.0 to take advantage of the improved repacking logic and more sophisticated signature handling.