AWS

aws.amazon.com/blogs/aws

Announcing Amazon EC2 G7e instances accelerated by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs | AWS News Blog

Amazon has announced the general availability of EC2 G7e instances, a new hardware tier powered by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs designed for generative AI and high-end graphics. These instances deliver up to 2.3 times the inference performance of their G6e predecessors while providing significant upgrades to memory and bandwidth. This launch aims to provide a cost-effective solution for running medium-sized AI models and complex spatial computing workloads at scale.

**Blackwell GPU and Memory Advancements**

* The G7e instances feature NVIDIA RTX PRO 6000 Blackwell GPUs, which provide twice the memory and 1.85 times the memory bandwidth of the G6e generation.
* Each GPU provides 96 GB of memory, allowing users to run medium-sized models, such as those with up to 70 billion parameters, on a single GPU using FP8 precision.
* The architecture is optimized for both spatial computing and scientific workloads, offering the highest graphics performance currently available in the EC2 portfolio.

**High-Speed Connectivity and Multi-GPU Scaling**

* To support large-scale models, G7e instances utilize NVIDIA GPUDirect P2P, enabling direct communication between GPUs over PCIe interconnects with minimal latency.
* These instances offer four times the inter-GPU bandwidth compared to the L40S GPUs found in G6e instances, facilitating more efficient data transfer in multi-GPU configurations.
* Total GPU memory can scale up to 768 GB within a single node, supporting massive inference tasks across eight interconnected GPUs.

**Networking and Storage Performance**

* G7e instances provide up to 1,600 Gbps of network bandwidth, a four-fold increase over previous generations, making them suitable for small-scale multi-node clusters.
* Support for NVIDIA GPUDirect Remote Direct Memory Access (RDMA) via Elastic Fabric Adapter (EFA) reduces latency for remote GPU-to-GPU communication.
* The instances support GPUDirect Storage with Amazon FSx for Lustre, achieving throughput speeds up to 1.2 Tbps to ensure rapid model loading and data processing.

**System Specifications and Configurations**

* Under the hood, G7e instances are powered by Intel Emerald Rapids processors and support up to 192 vCPUs and 2,048 GiB of system memory.
* Local storage options include up to 15.2 TB of NVMe SSD capacity to handle high-speed data caching and local processing.
* The instance family ranges from the g7e.2xlarge (1 GPU, 8 vCPUs) to the g7e.48xlarge (8 GPUs, 192 vCPUs).

For developers ready to transition to Blackwell-based architecture, these instances are accessible through AWS Deep Learning AMIs (DLAMI). They represent a major step forward for organizations needing to balance the high memory requirements of modern LLMs with the cost efficiencies of the G-series instance family.
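For teams ready to experiment, the sketch below shows one way to launch the smallest G7e size with boto3. It is a minimal illustration, not a prescribed setup: the AMI ID is a hypothetical placeholder (resolve a current Deep Learning AMI yourself), and availability of g7e.2xlarge in your Region is an assumption.

```python
# Minimal sketch: launch a single-GPU G7e instance with boto3.
# AMI_ID is a hypothetical placeholder; look up a real DLAMI ID first.
import boto3

AMI_ID = "ami-0123456789abcdef0"  # placeholder, not a real image

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId=AMI_ID,
    InstanceType="g7e.2xlarge",  # 1 GPU, 8 vCPUs per the announcement
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "workload", "Value": "llm-inference"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```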

AWS Weekly Roundup: Kiro CLI latest features, AWS European Sovereign Cloud, EC2 X8i instances, and more (January 19, 2026) | AWS News Blog

The January 19, 2026, AWS Weekly Roundup highlights significant advancements in sovereign cloud infrastructure and the general availability of high-performance, memory-optimized compute instances. The update also emphasizes the maturing ecosystem of AI agents, focusing on enhanced developer tooling and streamlined deployment workflows for agentic applications. These releases collectively aim to satisfy stringent regulatory requirements in Europe while pushing the boundaries of enterprise performance and automated productivity.

## Developer Tooling and Kiro CLI Enhancements

* New granular controls for web fetch URLs allow developers to use allowlists and blocklists to strictly govern which external resources an agent can access.
* The update introduces custom keyboard shortcuts to facilitate seamless switching between multiple specialized agents within a single session.
* Enhanced diff views provide clearer visibility into changes, improving the debugging and auditing process for automated workflows.

## AWS European Sovereign Cloud General Availability

* Following its initial 2023 announcement, this independent cloud infrastructure is now generally available to all customers.
* The environment is purpose-built to meet the most rigorous sovereignty and data residency requirements for European organizations.
* It offers a comprehensive set of AWS services within a framework that ensures operational independence and localized data handling.

## High-Performance Computing with EC2 X8i Instances

* The memory-optimized X8i instances, powered by custom Intel Xeon 6 processors, have moved from preview to general availability.
* These instances feature a sustained all-core turbo frequency of 3.9 GHz, which is currently exclusive to the AWS platform.
* The hardware is SAP certified and engineered to provide the highest memory bandwidth and performance for memory-intensive enterprise workloads compared to other Intel-based cloud offerings.

## Agentic AI and Productivity Updates

* Amazon Quick Suite continues to expand as a workplace "agentic teammate," designed to synthesize research and execute actions based on organizational insights.
* New technical guidance has been released regarding the deployment of AI agents on Amazon Bedrock AgentCore.
* The integration of GitHub Actions is now supported to automate the deployment and lifecycle management of these AI agents, bridging the gap between traditional DevOps and agentic AI development.

These updates signal a strategic shift toward highly specialized infrastructure, both in terms of regulatory compliance with the Sovereign Cloud and raw performance with the X8i instances. Organizations looking to scale their AI operations should prioritize the new deployment patterns for Bedrock AgentCore to ensure a robust CI/CD pipeline for their autonomous agents.

Amazon EC2 X8i instances powered by custom Intel Xeon 6 processors are generally available for memory-intensive workloads | AWS News Blog

Amazon has announced the general availability of EC2 X8i instances, specifically engineered for memory-intensive workloads such as SAP HANA, large-scale databases, and data analytics. Powered by custom Intel Xeon 6 processors with a 3.9 GHz all-core turbo frequency, these instances provide a significant performance leap over the previous X2i generation. By offering up to 6 TB of memory and substantial improvements in throughput, X8i instances represent the highest-performing Intel-based memory-optimized option in the AWS cloud.

### Performance Enhancements and Processor Architecture

* **Custom Silicon:** The instances utilize custom Intel Xeon 6 processors available exclusively on AWS, delivering the fastest memory bandwidth among comparable Intel cloud processors.
* **Memory and Bandwidth:** X8i provides 1.5 times more memory capacity (up to 6 TB) and 3.4 times more memory bandwidth compared to previous-generation X2i instances.
* **Workload Benchmarks:** Real-world performance gains include a 50% increase in SAP Application Performance Standard (SAPS), 47% faster PostgreSQL performance, 88% faster Memcached performance, and a 46% boost in AI inference.

### Scalable Instance Sizes and Throughput

* **Flexible Sizing:** The instances are available in 14 sizes, including new larger formats such as the 48xlarge, 64xlarge, and 96xlarge.
* **Bare Metal Options:** Two bare metal sizes (metal-48xl and metal-96xl) are available for workloads requiring direct access to physical hardware resources.
* **Networking and Storage:** The architecture supports up to 100 Gbps of network bandwidth with Elastic Fabric Adapter (EFA) support and up to 80 Gbps of Amazon EBS throughput.
* **Bandwidth Control:** Support for Instance Bandwidth Configuration (IBC) allows users to customize the allocation of performance between networking and EBS to suit specific application needs.

### Cost Efficiency and Use Cases

* **Licensing Optimization:** In preview testing, customers like Orion reduced SQL Server licensing costs by 50% by maintaining performance thresholds with fewer active cores compared to older instance types.
* **Enterprise Applications:** The instances are SAP-certified, making them ideal for RISE with SAP and other high-demand ERP environments.
* **Broad Utility:** Beyond databases, the instances are optimized for Electronic Design Automation (EDA) and complex data analytics that require massive memory footprints.

For organizations managing massive datasets or expensive licensed database software, migrating to X8i instances offers a clear path to both performance optimization and infrastructure cost reduction. These instances are currently available in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) regions through On-Demand, Spot, and Reserved purchasing models.
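Since sizing is the main decision for memory-bound workloads, the hedged boto3 sketch below enumerates whatever X8i sizes are visible in a Region using the long-standing DescribeInstanceTypes API. Availability of x8i in the chosen Region is an assumption; the API call itself is standard.

```python
# List X8i sizes visible in a Region with their vCPU and memory figures.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
paginator = ec2.get_paginator("describe_instance_types")

for page in paginator.paginate():
    for it in page["InstanceTypes"]:
        name = it["InstanceType"]
        if name.startswith("x8i."):
            mem_gib = it["MemoryInfo"]["SizeInMiB"] / 1024
            vcpus = it["VCpuInfo"]["DefaultVCpus"]
            print(f"{name}: {vcpus} vCPUs, {mem_gib:.0f} GiB")
```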

Opening the AWS European Sovereign Cloud | AWS News Blog

AWS has officially launched the AWS European Sovereign Cloud, a specialized infrastructure designed to meet the rigorous data residency and operational autonomy requirements of European public sector organizations and highly regulated industries. This new offering provides a fully featured cloud environment that is physically and logically separate from existing AWS Regions, ensuring all data and metadata remain entirely within the European Union. By bridging the gap between legacy on-premises security and modern cloud innovation, AWS enables sensitive workloads to operate under strict European jurisdiction and independent governance.

**Strategic Independence and Operational Control**

Organizations in the EU often face complex regulatory hurdles that prevent them from using standard public cloud offerings, frequently forcing them to remain on aging on-premises hardware. The AWS European Sovereign Cloud addresses these challenges through:

* **Independent Operations:** The infrastructure is operated independently from other AWS Regions, providing a distinct management layer specific to the EU.
* **Enhanced Sovereignty Controls:** Robust technical controls and legal protections are integrated to ensure that data remains under European jurisdiction.
* **Governance Autonomy:** The cloud is built to provide European entities with full control over their data residency and operational transparency.

**Independent Infrastructure and Regional Presence**

The architecture is designed for high availability and resilience, ensuring that mission-critical services remain functional regardless of external connectivity.

* **Initial Region:** The first region is now generally available in Brandenburg, Germany, serving as the primary hub for the sovereign infrastructure.
* **Redundancy:** The infrastructure utilizes multiple Availability Zones with redundant power and networking to maintain continuous operation.
* **Isolated Connectivity:** The design allows the cloud to continue operating even if connectivity to the rest of the global AWS network is interrupted.

**Expansion and Hybrid Deployment Options**

To support the diverse needs of EU member states, AWS is expanding the footprint of this sovereign infrastructure through localized hardware and edge services.

* **Sovereign Local Zones:** Future expansion plans include new Local Zones in Belgium, the Netherlands, and Portugal to provide low-latency access within specific borders.
* **Hybrid Integration:** Customers can extend sovereign infrastructure to their own data centers using AWS Outposts or AWS Dedicated Local Zones.
* **Advanced Capabilities:** The platform supports specialized workloads through AWS AI Factories, allowing regulated industries to leverage artificial intelligence within a sovereign boundary.

For European organizations navigating strict compliance landscapes, the AWS European Sovereign Cloud provides a viable path to digital transformation. Decision-makers should evaluate their current on-premises or restricted cloud environments to determine how these new sovereign regions and local zones can fulfill upcoming data residency mandates while providing access to advanced cloud-native services.

AWS Weekly Roundup: AWS Lambda for .NET 10, AWS Client VPN quickstart, Best of AWS re:Invent, and more (January 12, 2026) | AWS News Blog

The AWS Weekly Roundup for January 2026 highlights a significant push toward modernization, headlined by the introduction of .NET 10 support for AWS Lambda and Apache Airflow 2.11 for Amazon MWAA. To encourage exploration of these and other emerging technologies, AWS has revamped its Free Tier to offer new users up to $200 in credits and six months of risk-free experimentation. These updates collectively aim to streamline serverless development, enhance container storage efficiency, and provide more robust authentication options for messaging services.

### Modernized Runtimes and Orchestration

* AWS Lambda now supports .NET 10 as both a managed runtime and a container base image, with AWS providing automatic updates to these environments as they become available.
* Amazon Managed Workflows for Apache Airflow (MWAA) has added support for version 2.11, which serves as a critical stepping stone for users preparing to migrate to Apache Airflow 3.

### Infrastructure and Resource Management

* Amazon ECS has extended support for `tmpfs` mounts to Linux tasks running on AWS Fargate and Managed Instances; this allows developers to utilize memory-backed file systems for containerized workloads to avoid writing sensitive or temporary data to task storage (see the task definition sketch after this summary).
* AWS Config has expanded its monitoring capabilities to discover, assess, and audit new resource types across Amazon EC2, Amazon SageMaker, and Amazon S3 Tables.
* A new AWS Client VPN quickstart was released, providing a CloudFormation template and a step-by-step guide to automate the deployment of secure client-to-site VPN connections.

### Security and Messaging Enhancements

* Amazon MQ for RabbitMQ brokers now supports HTTP-based authentication, which can be enabled and managed through the broker's configuration file.
* RabbitMQ brokers on Amazon MQ also now support certificate-based authentication using mutual TLS (mTLS) to improve the security posture of messaging applications.

### Educational Initiatives and Community Events

* New AWS Free Tier accounts now include a 6-month trial period featuring $200 in credits and access to over 30 always-free services, specifically targeting developers interested in AI/ML and compute experimentation.
* AWS published a curated "Best of re:Invent 2025" playlist, featuring high-impact sessions and keynotes for those who missed the live event.
* The 2026 AWS Summit season begins shortly, with upcoming events scheduled for Dubai on February 10 and Paris on March 10.

Developers should take immediate advantage of the new .NET 10 Lambda runtime for serverless applications and review the updated ECS `tmpfs` documentation to optimize container performance. For those new to the platform, the expanded Free Tier credits provide an excellent opportunity to prototype AI/ML workloads with minimal financial risk.
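To make the `tmpfs` item concrete, here is a hedged boto3 sketch of a Fargate task definition that mounts a memory-backed scratch volume. The family name, container image, and sizes are illustrative placeholders, not values taken from the roundup.

```python
# Sketch: register a Fargate task definition with a tmpfs mount so temporary
# data lives in memory rather than on task storage.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="tmpfs-demo",  # placeholder name
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    containerDefinitions=[{
        "name": "app",
        "image": "public.ecr.aws/docker/library/python:3.12-slim",
        "essential": True,
        "linuxParameters": {
            # 64 MiB memory-backed scratch space; never touches task storage
            "tmpfs": [{"containerPath": "/scratch", "size": 64}],
        },
    }],
)
```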

Happy New Year! AWS Weekly Roundup: 10,000 AIdeas Competition, Amazon EC2, Amazon ECS Managed Instances and more (January 5, 2026) | AWS News Blog

The first AWS Weekly Roundup of 2026 highlights a strategic focus on community-driven AI innovation and significant performance upgrades to the EC2 instance lineup. By combining high-stakes competitions like the 10,000 AIdeas challenge with technical releases such as Graviton4-powered instances, AWS is positioning itself to lead in both "Agentic AI" development and high-performance cloud infrastructure.

**AI Innovation and Professional Mentorship**

* The "Become a Solutions Architect" (BeSA) program is launching a new six-week cohort on February 21, 2026, specifically focused on Agentic AI on AWS.
* The Global 10,000 AIdeas Competition offers a $250,000 prize pool and recognition at re:Invent 2026, with a submission deadline of January 21, 2026.
* Competition participants are required to utilize the "Kiro" development tool and must ensure their applications remain within AWS Free Tier limits.

**Next-Generation EC2 Instances and Hardware**

* New M8gn and M8gb instances utilize AWS Graviton4 processors, providing a 30% compute performance boost over the previous Graviton3 generation.
* The M8gn variant features 6th generation AWS Nitro Cards, delivering up to 600 Gbps of network bandwidth, the highest available for network-optimized instances.
* The M8gb variant is optimized for storage-heavy workloads, offering up to 150 Gbps of dedicated Amazon EBS bandwidth.

**Resilience Testing and Governance**

* AWS Direct Connect now integrates with the AWS Fault Injection Service (FIS), allowing engineers to simulate Border Gateway Protocol (BGP) failovers to validate redundant pathing.
* AWS Control Tower has expanded its governance capabilities by supporting 176 additional Security Hub controls within the Control Catalog.
* These controls address a broad spectrum of requirements across security, cost optimization, operations, and data durability.

**Hybrid Cloud and Windows Support**

* Amazon ECS Managed Instances now support Windows Server for on-premises and remote environment management.
* The service uses AWS Systems Manager (SSM) to register external instances, which can then be managed as part of an ECS cluster using Windows-based ECS-optimized AMIs (see the activation sketch after this summary).

Developers and infrastructure architects should prioritize the January 21 deadline for AI project submissions while evaluating the M8gn instances for high-throughput networking requirements. Additionally, organizations running hybrid Windows workloads should explore the new ECS Managed Instances support to unify their container orchestration across on-premises and cloud environments.
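For the hybrid Windows item, the sketch below shows the first step of the SSM-based registration flow the roundup describes: creating a hybrid activation whose ID and code the agent installer on the external host consumes. The IAM role name is a hypothetical placeholder you would create yourself with the appropriate SSM permissions.

```python
# Sketch: create an SSM hybrid activation for registering external instances.
import boto3

ssm = boto3.client("ssm")

activation = ssm.create_activation(
    Description="ECS external Windows instances",
    IamRole="ecsExternalInstanceRole",  # hypothetical role name
    RegistrationLimit=5,
)
# Feed these two values to the agent installer on each Windows host.
print("ActivationId:", activation["ActivationId"])
print("ActivationCode:", activation["ActivationCode"])
```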

AWS Weekly Roundup: Amazon ECS, Amazon CloudWatch, Amazon Cognito and more (December 15, 2025) | AWS News Blog

The AWS Weekly Roundup for mid-December 2025 highlights a series of updates designed to streamline developer workflows and enhance security across the cloud ecosystem. Following the momentum of re:Invent 2025, these releases focus on reducing operational friction through faster database provisioning, more granular container control, and AI-assisted development tools. These advancements collectively aim to simplify infrastructure management while providing deeper cost visibility and improved performance for enterprise applications.

## Database and Developer Productivity

* **Amazon Aurora DSQL** now supports near-instant cluster creation, reducing provisioning time from minutes to seconds to facilitate rapid prototyping and AI-powered development via the Model Context Protocol (MCP) server.
* **Amazon Aurora PostgreSQL** has integrated with **Kiro**, allowing developers to use AI-assisted coding for schema management and database queries through pre-packaged MCP servers.
* **Amazon CloudWatch SDK** introduced support for optimized JSON and CBOR protocols, improving the efficiency of data transmission and processing within the monitoring suite.
* **Amazon Cognito** simplified user communications by enabling automated email delivery through Amazon SES using verified identities, removing the need for manual SES configuration.

## Compute and Networking Optimizations

* **Amazon ECS on AWS Fargate** now honors custom container stop signals, such as SIGQUIT or SIGINT, allowing for graceful shutdowns of applications that do not use the default SIGTERM signal (a minimal handler sketch follows this summary).
* **Application Load Balancer (ALB)** received performance enhancements that reduce latency for establishing new connections and lower resource consumption during traffic processing.
* **AWS Fargate** cost optimization strategies were highlighted in new technical guides, focusing on leveraging Graviton processors and Fargate Spot to maximize compute efficiency.

## Security and Cost Management

* **Amazon WorkSpaces Secure Browser** introduced Web Content Filtering, providing category-based access control across 25+ predefined categories and granular URL policies at no additional cost.
* **AWS Cost Management** tools now feature **Tag Inheritance**, which automatically applies tags from resources to cost data, allowing for more precise tracking in Cost Explorer and AWS Budgets.
* **AWS Step Functions** integration with Amazon Bedrock was further detailed in community resources, showcasing how to build resilient, long-running AI workflows with integrated error handling.

To take full advantage of these updates, organizations should review their Fargate task definitions to implement custom stop signals for better application stability and enable Tag Inheritance to improve the accuracy of year-end cloud financial reporting.
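As a concrete companion to the stop-signal item, this minimal sketch shows an application that drains gracefully on SIGQUIT. The signal itself would be declared in the container image (for example via a Dockerfile STOPSIGNAL instruction), which is an assumption about your build rather than something the roundup specifies.

```python
# Sketch: trap SIGQUIT so the container can drain before a hard kill arrives.
import signal
import sys
import time

def drain_and_exit(signum, frame):
    # Flush buffers, close connections, finish in-flight work, then exit.
    print(f"received signal {signum}, draining...", flush=True)
    sys.exit(0)

signal.signal(signal.SIGQUIT, drain_and_exit)

while True:
    time.sleep(1)  # stand-in for real work
```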

AWS Weekly Roundup: AWS re:Invent keynote recap, on-demand videos, and more (December 8, 2025) | AWS News Blog

The December 8, 2025, AWS Weekly Roundup recaps the major themes from AWS re:Invent, signaling a significant industry transition from AI assistants to autonomous AI agents. While technical innovation in infrastructure remains a priority, the event underscored that developers remain at the heart of the AWS mission, empowered by new tools to automate complex tasks using natural language. This shift represents a "renaissance" in cloud computing, where purpose-built infrastructure is now designed to support the non-deterministic nature of agentic workloads.

## Community Recognition and the Now Go Build Award

* Raphael Francis Quisumbing (Rafi) from the Philippines was honored with the Now Go Build Award, presented by Werner Vogels.
* A veteran of the ecosystem, Quisumbing has served as an AWS Hero since 2015 and has co-led the AWS User Group Philippines for over a decade.
* The recognition emphasizes AWS's continued focus on community dedication and the role of individual builders in empowering regional developer ecosystems.

## The Evolution from AI Assistants to Agents

* AWS CEO Matt Garman identified AI agents as the next major inflection point for the industry, moving beyond simple chat interfaces to systems that perform tasks and automate workflows.
* Dr. Swami Sivasubramanian highlighted a paradigm shift where natural language serves as the primary interface for describing complex goals.
* These agents are designed to autonomously generate plans, write necessary code, and call various tools to execute complete solutions without constant human intervention.
* AWS is prioritizing the development of production-ready infrastructure that is secure and scalable specifically to handle the "non-deterministic" behavior of these AI agents.

## Core Infrastructure and the Developer Renaissance

* Despite the focus on AI, AWS reaffirmed that its core mission remains the "freedom to invent," keeping developers central to its 20-year strategy.
* Leaders Peter DeSantis and Dave Brown reinforced that the foundational attributes of security, availability, and performance remain the non-negotiable pillars of the AWS cloud.
* The integration of AI agents is framed as a way to finally realize material business returns on AI investments by moving from experimental use cases to automated business logic.

To maximize the value of these updates, organizations should begin evaluating how to transition from simple LLM implementations to agentic frameworks that can execute end-to-end business processes. Reviewing the on-demand keynote sessions from re:Invent 2025 is recommended for technical teams looking to implement the latest secure, agent-ready infrastructure.

Amazon Bedrock adds reinforcement fine-tuning simplifying how developers build smarter, more accurate AI models | AWS News Blog

Amazon Bedrock has introduced reinforcement fine-tuning, a new model customization capability that allows developers to build more accurate and cost-effective AI models using feedback-driven training. By moving away from the requirement for massive labeled datasets in favor of reward signals, the platform enables average accuracy gains of 66% while automating the complex infrastructure typically associated with advanced machine learning. This approach allows organizations to optimize smaller, faster models for specific business needs without sacrificing performance or incurring the high costs of larger model variants.

**Challenges of Traditional Model Customization**

* Traditional fine-tuning often requires massive, high-quality labeled datasets and expensive human annotation, which can be a significant barrier for many organizations.
* Developers previously had to choose between settling for generic "out-of-the-box" results and managing the high costs and complexity of large-scale infrastructure.
* The high barrier to entry for advanced reinforcement learning techniques often required specialized ML expertise that many development teams lack.

**Mechanics of Reinforcement Fine-Tuning**

* The system uses an iterative feedback loop where models improve based on reward signals that judge the quality of responses against specific business requirements.
* Reinforcement Learning with Verifiable Rewards (RLVR) utilizes rule-based graders to provide objective feedback for tasks such as mathematics or code generation.
* Reinforcement Learning from AI Feedback (RLAIF) uses AI-driven evaluations to help models understand preference and quality without manual human intervention.
* The workflow can be powered by existing API logs within Amazon Bedrock or by uploading training datasets, eliminating the need for complex infrastructure setup.

**Performance and Security Advantages**

* The technique achieves an average accuracy improvement of 66% over base models, enabling smaller models to perform at the level of much larger alternatives.
* Current support includes the Amazon Nova 2 Lite model, which helps developers optimize for both speed and price-to-performance.
* All training data and customization processes remain within the secure AWS environment, ensuring that proprietary data is protected and compliant with organizational security standards.

Developers should consider reinforcement fine-tuning as a primary strategy for optimizing smaller models like Amazon Nova 2 Lite to achieve high-tier performance at a lower cost. This capability is particularly recommended for specialized tasks like reasoning and coding where objective reward functions can be used to rapidly iterate and improve model accuracy.
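To illustrate what a rule-based grader for RLVR might look like, here is a conceptual toy that rewards verifiably correct arithmetic answers. It is not the Amazon Bedrock grader interface; the function shape is an assumption for illustration only.

```python
# Conceptual toy: an objective, rule-based reward for math-style tasks.
def grade_math_response(response: str, expected: float) -> float:
    """Return reward 1.0 for a verifiably correct final answer, else 0.0."""
    try:
        # Assume the model ends its response with the numeric answer.
        answer = float(response.strip().split()[-1])
    except ValueError:
        return 0.0
    return 1.0 if abs(answer - expected) < 1e-6 else 0.0

assert grade_math_response("The total is 42", 42.0) == 1.0
assert grade_math_response("I am not sure", 42.0) == 0.0
```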

New serverless customization in Amazon SageMaker AI accelerates model fine-tuning | AWS News Blog

Amazon SageMaker AI has introduced a new serverless customization capability designed to accelerate the fine-tuning of popular models like Llama, DeepSeek, and Amazon Nova. By automating resource provisioning and providing an intuitive interface for advanced reinforcement learning techniques, this feature reduces the model customization lifecycle from months to days. This end-to-end workflow allows developers to focus on model performance rather than infrastructure management, from initial training through to final deployment.

**Automated Infrastructure and Model Support**

* The service provides a serverless environment where SageMaker AI automatically selects and provisions compute resources based on the specific model architecture and dataset size.
* Supported models include a broad range of high-performance options such as Amazon Nova, DeepSeek, GPT-OSS, Meta Llama, and Qwen.
* The feature is accessible directly through the Amazon SageMaker Studio interface, allowing users to manage their entire model catalog in one location.

**Advanced Customization and Reinforcement Learning**

* Users can choose from several fine-tuning techniques, including traditional Supervised Fine-Tuning (SFT) and more advanced methods.
* The platform supports modern optimization techniques such as Direct Preference Optimization (DPO), Reinforcement Learning with Verifiable Rewards (RLVR), and Reinforcement Learning from AI Feedback (RLAIF).
* To simplify the process, SageMaker AI provides recommended defaults for hyperparameters like batch size, learning rate, and epochs based on the selected tuning technique.

**Experiment Tracking and Security**

* The workflow introduces a serverless MLflow application, enabling seamless experiment tracking and performance monitoring without additional setup.
* Advanced configuration options allow for fine-grained control over network encryption and storage volume encryption to ensure data security.
* The "Continue customization" feature allows for iterative tuning, where users can adjust hyperparameters or apply different techniques to an existing customized model.

**Evaluation and Deployment Flexibility**

* Built-in evaluation tools allow developers to compare the performance of their customized models against the original base models to verify improvements.
* Once a model is finalized, it can be deployed with a few clicks to either Amazon SageMaker or Amazon Bedrock.
* A centralized "My Models" dashboard tracks all custom iterations, providing detailed logs and status updates for every training and evaluation job.

This serverless approach is highly recommended for teams that need to adapt large language models to specific domains quickly without the operational overhead of managing GPU clusters. By utilizing the integrated evaluation and multi-platform deployment options, organizations can transition from experimentation to production-ready AI more efficiently.
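As a rough illustration of the preference data that DPO-style tuning consumes, the sketch below serializes a prompt with chosen and rejected completions to JSONL. The field names are generic assumptions about the common shape of such datasets, not the schema SageMaker AI expects, which the post does not specify.

```python
# Sketch: append one DPO-style preference record to a JSONL training file.
import json

record = {
    "prompt": "Summarize the incident report in one sentence.",
    "chosen": "Database failover at 02:14 UTC caused 9 minutes of errors.",
    "rejected": "There was an incident.",
}

with open("preferences.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```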

Introducing checkpointless and elastic training on Amazon SageMaker HyperPod | AWS News Blog

Amazon SageMaker HyperPod has introduced checkpointless and elastic training features to accelerate AI model development by minimizing infrastructure-related downtime. These advancements replace traditional, slow checkpoint-restart cycles with peer-to-peer state recovery and enable training workloads to scale dynamically based on available compute capacity. By decoupling training progress from static hardware configurations, organizations can significantly reduce model time-to-market while maximizing cluster utilization.

**Checkpointless Training and Rapid State Recovery**

* Replaces the traditional five-stage recovery process (including job termination, network setup, and checkpoint retrieval), which can often take up to an hour on self-managed clusters.
* Utilizes peer-to-peer state replication and in-process recovery to allow healthy nodes to restore the model state instantly without restarting the entire job.
* Incorporates technical optimizations such as collective communications initialization and memory-mapped data loading to enable efficient data caching.
* Reduces recovery downtime by over 80% based on internal studies of clusters with up to 2,000 GPUs, and was a core technology used in the development of Amazon Nova models.

**Elastic Training and Automated Cluster Scaling**

* Allows AI workloads to automatically expand to use idle cluster capacity as it becomes available and contract when resources are needed for higher-priority tasks.
* Reduces the need for manual intervention, saving hours of engineering time previously spent reconfiguring training jobs to match fluctuating compute availability.
* Optimizes total cost of ownership by ensuring that training momentum continues even as inference volumes peak and pull resources away from the training pool.
* Orchestrates these transitions seamlessly through the HyperPod training operator, ensuring that model development is not disrupted by infrastructure changes.

For teams managing large-scale AI workloads, adopting these features can reclaim significant development time and lower operational costs by preventing idle cluster periods. Organizations scaling to thousands of accelerators should prioritize checkpointless training to mitigate the impact of hardware faults and maintain continuous training momentum.
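The toy below contrasts checkpoint-restart with the peer-to-peer recovery idea described above: a failed worker copies live state from a healthy replica instead of reloading a stale disk checkpoint. It is plain Python for intuition only and uses no HyperPod API.

```python
# Conceptual toy: in-process recovery from a live peer replica.
import copy

class Worker:
    def __init__(self, state=None):
        self.state = state or {"step": 0, "weights": [0.0]}

    def train_step(self):
        self.state["step"] += 1
        self.state["weights"][0] += 0.1

    def restore_from_peer(self, peer):
        # Copy the peer's current state directly; no job restart, no stale
        # checkpoint retrieval from storage.
        self.state = copy.deepcopy(peer.state)

a, b = Worker(), Worker()
for _ in range(100):
    a.train_step()
    b.train_step()

b = Worker()            # simulate worker b failing and returning empty
b.restore_from_peer(a)  # resumes at step 100 instead of an old checkpoint
assert b.state["step"] == a.state["step"]
```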

Announcing replication support and Intelligent-Tiering for Amazon S3 Tables | AWS News Blog

AWS has expanded the capabilities of Amazon S3 Tables by introducing Intelligent-Tiering for automated cost optimization and cross-region replication for enhanced data availability. These updates address the operational overhead of managing large-scale Apache Iceberg datasets by automating storage lifecycle management and simplifying the architecture required for global data distribution. By integrating these features, organizations can reduce storage costs without manual intervention while ensuring consistent data access across multiple AWS Regions and accounts.

### Cost Optimization with S3 Tables Intelligent-Tiering

This feature automatically shifts data between storage tiers based on access frequency to maximize cost efficiency without impacting application performance.

* The system utilizes three low-latency tiers: Frequent Access, Infrequent Access (offering 40% lower costs), and Archive Instant Access (offering 68% lower costs than Infrequent Access).
* Data transitions are automated, moving to Infrequent Access after 30 days of inactivity and to Archive Instant Access after 90 days.
* Automated table maintenance tasks, such as compaction and snapshot expiration, are optimized to skip colder files; for example, compaction only processes data in the Frequent Access tier to minimize unnecessary compute and storage costs.
* Users can configure Intelligent-Tiering as the default storage class at the table bucket level using the AWS CLI commands `put-table-bucket-storage-class` and `get-table-bucket-storage-class` (a hedged SDK sketch follows this summary).

### Cross-Region and Cross-Account Replication

New replication support allows users to maintain synchronized, read-only replicas of their S3 Tables across different geographic locations and ownership boundaries.

* Replication maintains chronological consistency and preserves parent-child snapshot relationships, ensuring that replicas remain identical to the source for query purposes.
* Replica tables are typically updated within minutes of changes to the source table and support independent encryption and retention policies to meet specific regional compliance requirements.
* The service eliminates the need for complex, custom-built architectures to track metadata transformations or manually sync objects between Iceberg tables.
* This functionality is primarily designed to reduce query latency for geographically distributed teams and provide robust data protection for disaster recovery scenarios.

### Practical Implementation

To maximize the benefits of these new features, organizations should consider setting Intelligent-Tiering as the default storage class at the bucket level for all new datasets to ensure immediate cost savings. For global operations, setting up read-only replicas in regions closest to end-users will significantly improve query performance for analytics tools like Amazon Athena and Amazon SageMaker.
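Mapping the CLI commands named above onto the SDK, the sketch below assumes boto3's usual CLI-to-snake_case convention for the s3tables client. The method and parameter names here (`put_table_bucket_storage_class`, `tableBucketARN`, `storageClass`) are unverified assumptions derived from the documented CLI command, so check the SDK reference before relying on them.

```python
# Hedged sketch: set Intelligent-Tiering as the default storage class for a
# table bucket. Method and parameter names are assumptions, not confirmed API.
import boto3

s3tables = boto3.client("s3tables")

s3tables.put_table_bucket_storage_class(
    tableBucketARN="arn:aws:s3tables:us-east-1:111122223333:bucket/analytics",
    storageClass="INTELLIGENT_TIERING",  # assumed enum value
)
```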

Amazon S3 Storage Lens adds performance metrics, support for billions of prefixes, and export to S3 Tables | AWS News Blog

Amazon S3 Storage Lens has introduced three significant updates designed to provide deeper visibility into storage performance and usage patterns at scale. By adding dedicated performance metrics, support for billions of prefixes, and direct export capabilities to Amazon S3 Tables, AWS enables organizations to better optimize application latency and storage costs. These enhancements allow for more granular data-driven decisions across entire AWS organizations or specific high-performance workloads.

## Enhanced Performance Metric Categories

The update introduces eight new performance-related metric categories available through the S3 Storage Lens advanced tier. These metrics are designed to pinpoint specific architectural bottlenecks that could impact application speed.

* **Request and Storage Distributions:** New metrics track the distribution of read/write request sizes and object sizes, helping identify small-object patterns that might be better suited for Amazon S3 Express One Zone.
* **Error and Latency Tracking:** Users can now monitor concurrent PUT 503 errors to identify throttling and analyze FirstByteLatency and TotalRequestLatency to measure end-to-end request performance.
* **Data Transfer Efficiency:** Metrics for cross-Region data transfer help identify high-cost or high-latency data access patterns, suggesting where compute resources should be co-located with storage.
* **Access Patterns:** Tracking unique objects accessed per day identifies "hot" datasets that could benefit from higher-performance storage tiers or caching solutions.

## Support for Billions of Prefixes

S3 Storage Lens has expanded its analytical scale to support the monitoring of billions of prefixes. This allows organizations with massive, complex data structures to maintain granular visibility without sacrificing performance or detail.

* **Granular Visibility:** Users can drill down into massive datasets to find specific prefixes causing performance degradation or cost spikes.
* **Scalable Analysis:** This expansion ensures that even the largest data lakes can be monitored at a level of detail previously limited to smaller buckets.

## Integration with Amazon S3 Tables

The service now supports direct export of storage metrics to Amazon S3 Tables, a feature optimized for high-performance analytics. This integration streamlines the workflow for administrators who need to perform complex queries on their storage metadata.

* **Analytical Readiness:** Exporting to S3 Tables makes it easier to use SQL-based tools to query storage trends and performance over time.
* **Automation:** This capability allows for the creation of automated reporting pipelines that can handle the massive volume of data generated by prefix-level monitoring.

To take full advantage of these features, users should enable the S3 Storage Lens advanced tier and configure prefix-level monitoring for buckets containing mission-critical or high-throughput data. Organizations experiencing latency issues should specifically review the new request size distribution metrics to determine if batching objects or migrating to S3 Express One Zone would improve performance.
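Since the new metrics ride on the advanced tier, here is a hedged boto3 sketch that enables advanced activity metrics and prefix-level monitoring through the existing PutStorageLensConfiguration API. The configuration ID and account ID are placeholders, and the toggles for the eight new performance categories are not shown because the post does not name their schema keys.

```python
# Sketch: enable advanced-tier activity metrics plus prefix-level storage
# metrics in an S3 Storage Lens dashboard configuration.
import boto3

s3control = boto3.client("s3control")

s3control.put_storage_lens_configuration(
    ConfigId="perf-dashboard",          # placeholder dashboard ID
    AccountId="111122223333",           # placeholder account ID
    StorageLensConfiguration={
        "Id": "perf-dashboard",
        "IsEnabled": True,
        "AccountLevel": {
            "ActivityMetrics": {"IsEnabled": True},
            "BucketLevel": {
                "ActivityMetrics": {"IsEnabled": True},
                "PrefixLevel": {
                    "StorageMetrics": {
                        "IsEnabled": True,
                        "SelectionCriteria": {
                            "MaxDepth": 5,
                            "MinStorageBytesPercentage": 1.0,
                        },
                    }
                },
            },
        },
    },
)
```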

Amazon Bedrock AgentCore adds quality evaluations and policy controls for deploying trusted AI agents | AWS News Blog

AWS has introduced several new capabilities to Amazon Bedrock AgentCore designed to remove the trust and quality barriers that often prevent AI agents from moving into production environments. These updates, which include granular policy controls and sophisticated evaluation tools, allow developers to implement strict operational boundaries and monitor real-world performance at scale. By balancing agent autonomy with centralized verification, AgentCore provides a secure framework for deploying highly capable agents across enterprise workflows.

**Governance through Policy in AgentCore**

* This feature establishes clear boundaries for agent actions by intercepting tool calls via the AgentCore Gateway before they are executed.
* By operating outside of the agent's internal reasoning loop, the policy layer acts as an independent verification system that treats the agent as an autonomous actor requiring permission.
* Developers can define fine-grained permissions to ensure agents do not access sensitive data inappropriately or take unauthorized actions within external systems.

**Quality Monitoring with AgentCore Evaluations**

* The new evaluation framework allows teams to monitor the quality of AI agents based on actual behavior rather than theoretical simulations.
* Built-in evaluators provide standardized metrics for critical dimensions such as helpfulness and correctness.
* Organizations can also implement custom evaluators to ensure agents meet specific business-logic requirements and industry-specific compliance standards.

**Enhanced Memory and Communication Features**

* New episodic functionality in AgentCore Memory introduces a long-term strategy that allows agents to learn from past experiences and apply successful solutions to similar future tasks.
* Bidirectional streaming in the AgentCore Runtime supports the deployment of advanced voice agents capable of handling natural, simultaneous conversation flows.
* These enhancements focus on improving consistency and user experience, enabling agents to handle complex, multi-turn interactions with higher reliability.

**Real-World Application and Performance**

* The AgentCore SDK has seen rapid adoption with over 2 million downloads, supporting diverse use cases from content generation at the PGA TOUR to financial data analysis at Workday.
* Case studies highlight significant operational gains, such as a 1,000 percent increase in content writing speed and a 50 percent reduction in problem resolution time through improved observability.
* The platform emphasizes 100 percent traceability of agent decisions, which is critical for organizations transitioning from reactive to proactive AI-driven operations.

To successfully scale AI agents, organizations should transition from simple prompt engineering to a robust agentic architecture. Leveraging these new policy and evaluation tools will allow development teams to maintain the necessary control and visibility required for customer-facing and mission-critical deployments.
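The gateway pattern is easy to picture in miniature: the sketch below intercepts tool calls outside the agent's reasoning loop and denies anything not on an allowlist. It is a plain-Python illustration of the concept, not the AgentCore Policy API; the tool names and allowlist are invented for the example.

```python
# Conceptual toy: a gateway-level policy check that runs before any tool call.
from typing import Callable

ALLOWED_TOOLS = {"search_catalog", "create_ticket"}  # illustrative allowlist

def policy_gateway(tool_name: str, tool: Callable, **kwargs):
    # Verification happens outside the agent's reasoning loop: the agent
    # requests an action, the gateway independently decides.
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"policy denied tool call: {tool_name}")
    return tool(**kwargs)

def delete_account(user_id: str):
    return f"deleted {user_id}"

try:
    policy_gateway("delete_account", delete_account, user_id="u-123")
except PermissionError as e:
    print(e)  # the unauthorized action never executes
```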

Build multi-step applications and AI workflows with AWS Lambda durable functions | AWS News Blog

AWS Lambda durable functions introduce a simplified way to manage complex, long-running workflows directly within the standard Lambda experience. By utilizing a checkpoint and replay mechanism, developers can now write sequential code for multi-step processes that automatically handle state management and retries without the need for external orchestration services. This feature significantly reduces the cost of long-running tasks by allowing functions to suspend execution for up to one year without incurring compute charges during idle periods.

### Durable Execution Mechanism

* The system uses a "durable execution" model based on checkpointing and replay to maintain state across function restarts.
* When a function is interrupted or resumes from a pause, Lambda re-executes the handler from the beginning but skips already-completed operations by referencing saved checkpoints.
* This architecture ensures that business logic remains resilient to failures and can survive execution environment recycles.
* The execution state can be maintained for extended periods, supporting workflows that require human intervention or long-duration external processes.

### Programming Primitives and SDK

* The feature requires the inclusion of a new open-source durable execution SDK in the function code.
* **Steps:** The `context.step()` method defines specific blocks of logic that the system checkpoints and automatically retries upon failure.
* **Wait:** The `context.wait()` primitive allows the function to terminate and release compute resources while waiting for a specified duration, resuming only when the time elapses.
* **Callbacks:** Developers can use `create_callback()` to pause execution until an external event, such as an API response or a manual approval, is received.
* **Advanced Control:** The SDK includes `wait_for_condition()` for polling external statuses and `parallel()` or `map()` operations for managing concurrent execution paths (a hedged replay sketch follows this summary).

### Configuration and Setup

* Durable execution must be enabled at the time of the Lambda function's creation; it cannot be retroactively enabled for existing functions.
* Once enabled, the function maintains the same event handler structure and service integrations as a standard Lambda function.
* The environment is specifically optimized for high-reliability use cases like payment processing, AI agent orchestration, and complex order management.

AWS Lambda durable functions represent a major shift for developers who need the power of stateful orchestration but prefer to keep their logic within a single code-based environment. It is highly recommended for building AI workflows and multi-step business processes where state persistence and cost-efficiency are critical requirements.
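To make checkpoint-and-replay tangible, the toy below implements a mock context whose `step` caches results, so a replayed handler skips completed work exactly as the mechanism above describes. The primitive names (`step`, `wait`) follow the post; the `MockContext` class and every signature here are stand-ins, not the actual open-source SDK.

```python
# Conceptual toy of durable execution: replay skips checkpointed steps.
import time

class MockContext:
    """Toy replay engine: completed steps are cached and skipped on re-run."""
    def __init__(self):
        self.checkpoints = {}  # step index -> saved result
        self.counter = 0

    def step(self, fn):
        idx = self.counter
        self.counter += 1
        if idx in self.checkpoints:      # replay: skip already-completed work
            return self.checkpoints[idx]
        result = fn()                    # first run: execute and checkpoint
        self.checkpoints[idx] = result
        return result

    def wait(self, seconds):
        time.sleep(min(seconds, 0.01))   # stand-in; the real SDK suspends
                                         # without compute charges

def handler(event, context):
    order = context.step(lambda: {"id": event["order_id"], "valid": True})
    context.wait(seconds=60)             # real functions can pause far longer
    return context.step(lambda: {"status": "charged", "order": order["id"]})

ctx = MockContext()
first = handler({"order_id": "o-1"}, ctx)
ctx.counter = 0                          # simulate a replay after interruption
replayed = handler({"order_id": "o-1"}, ctx)
assert first == replayed                 # steps were skipped, state preserved
```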