Amazon has announced the general availability of EC2 G7e instances, a new instance family powered by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and designed for generative AI and high-end graphics. These instances deliver up to 2.3 times the inference performance of their G6e predecessors, along with significant upgrades to memory and bandwidth. The launch aims to provide a cost-effective option for running medium-sized AI models and complex spatial computing workloads at scale.
**Blackwell GPU and Memory Advancements**
* The G7e instances feature NVIDIA RTX PRO 6000 Blackwell GPUs, which provide twice the memory and 1.85 times the memory bandwidth of the G6e generation.
* Each GPU provides 96 GB of memory, allowing users to run medium-sized models—such as those with up to 70 billion parameters—on a single GPU using FP8 precision.
* The architecture is optimized for both spatial computing and scientific workloads, offering the highest graphics performance currently available in the EC2 portfolio.
**High-Speed Connectivity and Multi-GPU Scaling**
* To support large-scale models, G7e instances utilize NVIDIA GPUDirect P2P, enabling direct communication between GPUs over PCIe interconnects with minimal latency.
* These instances offer four times the inter-GPU bandwidth of the NVIDIA L40S GPUs found in G6e instances, enabling more efficient data transfer in multi-GPU configurations.
* Total GPU memory can scale up to 768 GB within a single node, supporting massive inference tasks across eight interconnected GPUs.
**Networking and Storage Performance**
* G7e instances provide up to 1,600 Gbps of network bandwidth, a four-fold increase over previous generations, making them suitable for small-scale multi-node clusters.
* Support for NVIDIA GPUDirect Remote Direct Memory Access (RDMA) via Elastic Fabric Adapter (EFA) reduces latency for remote GPU-to-GPU communication.
* The instances support GPUDirect Storage with Amazon FSx for Lustre, achieving throughput speeds up to 1.2 Tbps to ensure rapid model loading and data processing.
**System Specifications and Configurations**
* Under the hood, G7e instances are powered by Intel Xeon (Emerald Rapids) processors and support up to 192 vCPUs and 2,048 GiB of system memory.
* Local storage options include up to 15.2 TB of NVMe SSD capacity to handle high-speed data caching and local processing.
* The instance family ranges from the g7e.2xlarge (1 GPU, 8 vCPUs) to the g7e.48xlarge (8 GPUs, 192 vCPUs).
For developers ready to transition to Blackwell-based architecture, these instances are accessible through AWS Deep Learning AMIs (DLAMI). They represent a major step forward for organizations needing to balance the high memory requirements of modern LLMs with the cost efficiencies of the G-series instance family.
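To make the launch path concrete, here is a minimal boto3 sketch for starting a single-GPU g7e.2xlarge from a Deep Learning AMI. The AMI ID, region, and tag values are placeholders (not real identifiers), and the actual `run_instances` call is left commented out so nothing is launched by accident.

```python
# Sketch: launching a G7e instance with boto3. Assumes boto3 is installed and
# AWS credentials are configured; the AMI ID below is a placeholder, not a
# real DLAMI ID - look up a current DLAMI for your Region first.
launch_params = {
    "ImageId": "ami-0123456789abcdef0",  # placeholder: substitute a current DLAMI ID
    "InstanceType": "g7e.2xlarge",       # 1 GPU, 8 vCPUs per the sizes above
    "MinCount": 1,
    "MaxCount": 1,
    "TagSpecifications": [{
        "ResourceType": "instance",
        "Tags": [{"Key": "workload", "Value": "llm-inference"}],  # illustrative tag
    }],
}

# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-1")
# ec2.run_instances(DryRun=True, **launch_params)  # DryRun validates permissions only
```

Using `DryRun=True` first is a cheap way to confirm IAM permissions and quota before committing to a GPU instance launch.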
The January 19, 2026, AWS Weekly Roundup highlights significant advancements in sovereign cloud infrastructure and the general availability of high-performance, memory-optimized compute instances. The update also emphasizes the maturing ecosystem of AI agents, focusing on enhanced developer tooling and streamlined deployment workflows for agentic applications. These releases collectively aim to satisfy stringent regulatory requirements in Europe while pushing the boundaries of enterprise performance and automated productivity.
## Developer Tooling and Kiro CLI Enhancements
* New granular controls for web fetch URLs allow developers to use allowlists and blocklists to strictly govern which external resources an agent can access.
* The update introduces custom keyboard shortcuts to facilitate seamless switching between multiple specialized agents within a single session.
* Enhanced diff views provide clearer visibility into changes, improving the debugging and auditing process for automated workflows.
## AWS European Sovereign Cloud General Availability
* Following its initial 2023 announcement, this independent cloud infrastructure is now generally available to all customers.
* The environment is purpose-built to meet the most rigorous sovereignty and data residency requirements for European organizations.
* It offers a comprehensive set of AWS services within a framework that ensures operational independence and localized data handling.
## High-Performance Computing with EC2 X8i Instances
* The memory-optimized X8i instances, powered by custom Intel Xeon 6 processors, have moved from preview to general availability.
* These instances feature a sustained all-core turbo frequency of 3.9 GHz, which is currently exclusive to the AWS platform.
* The hardware is SAP certified and engineered to provide the highest memory bandwidth and performance for memory-intensive enterprise workloads compared to other Intel-based cloud offerings.
## Agentic AI and Productivity Updates
* Amazon Quick Suite continues to expand as a workplace "agentic teammate," designed to synthesize research and execute actions based on organizational insights.
* New technical guidance has been released regarding the deployment of AI agents on Amazon Bedrock AgentCore.
* The integration of GitHub Actions is now supported to automate the deployment and lifecycle management of these AI agents, bridging the gap between traditional DevOps and agentic AI development.
These updates signal a strategic shift toward highly specialized infrastructure, both in terms of regulatory compliance with the Sovereign Cloud and raw performance with the X8i instances. Organizations looking to scale their AI operations should prioritize the new deployment patterns for Bedrock AgentCore to ensure a robust CI/CD pipeline for their autonomous agents.
Amazon has announced the general availability of EC2 X8i instances, specifically engineered for memory-intensive workloads such as SAP HANA, large-scale databases, and data analytics. Powered by custom Intel Xeon 6 processors with a 3.9 GHz all-core turbo frequency, these instances provide a significant performance leap over the previous X2i generation. By offering up to 6 TB of memory and substantial improvements in throughput, X8i instances represent the highest-performing Intel-based memory-optimized option in the AWS cloud.
### Performance Enhancements and Processor Architecture
* **Custom Silicon:** The instances utilize custom Intel Xeon 6 processors available exclusively on AWS, delivering the highest memory bandwidth among comparable Intel-based cloud offerings.
* **Memory and Bandwidth:** X8i provides 1.5 times more memory capacity (up to 6 TB) and 3.4 times more memory bandwidth compared to previous-generation X2i instances.
* **Workload Benchmarks:** Real-world performance gains include a 50% increase in SAP Application Performance Standard (SAPS), 47% faster PostgreSQL performance, 88% faster Memcached performance, and a 46% boost in AI inference.
### Scalable Instance Sizes and Throughput
* **Flexible Sizing:** The instances are available in 14 sizes, including new larger formats such as the 48xlarge, 64xlarge, and 96xlarge.
* **Bare Metal Options:** Two bare metal sizes (metal-48xl and metal-96xl) are available for workloads requiring direct access to physical hardware resources.
* **Networking and Storage:** The architecture supports up to 100 Gbps of network bandwidth with Elastic Fabric Adapter (EFA) support and up to 80 Gbps of Amazon EBS throughput.
* **Bandwidth Control:** Support for Instance Bandwidth Configuration (IBC) allows users to customize the allocation of performance between networking and EBS to suit specific application needs.
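As a rough sketch of what shifting bandwidth toward EBS might look like in code: the parameter shape below follows the existing EC2 `ModifyInstanceNetworkPerformanceOptions` API for bandwidth weighting, which may not be exactly how IBC is exposed for X8i, so treat the call and values as assumptions to verify against the EC2 documentation.

```python
# Sketch: favoring EBS throughput over network bandwidth on an instance.
# Assumes boto3 and credentials; the instance ID is a placeholder, and the
# API/parameter names mirror EC2 bandwidth weighting, which may differ from
# the IBC interface for X8i.
ibc_params = {
    "InstanceId": "i-0123456789abcdef0",  # placeholder instance ID
    "BandwidthWeighting": "ebs-1",        # shift shared bandwidth toward EBS
}

# import boto3
# ec2 = boto3.client("ec2")
# ec2.modify_instance_network_performance_options(**ibc_params)
```

A database-heavy workload would bias toward EBS as above, while a cache-serving tier might prefer the network-weighted setting instead.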
### Cost Efficiency and Use Cases
* **Licensing Optimization:** During the preview, customers such as Orion cut SQL Server licensing costs by 50% by meeting their performance targets with fewer active cores than older instance types required.
* **Enterprise Applications:** The instances are SAP-certified, making them ideal for RISE with SAP and other high-demand ERP environments.
* **Broad Utility:** Beyond databases, the instances are optimized for Electronic Design Automation (EDA) and complex data analytics that require massive memory footprints.
For organizations managing massive datasets or expensive licensed database software, migrating to X8i instances offers a clear path to both performance optimization and infrastructure cost reduction. These instances are currently available in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) regions through On-Demand, Spot, and Reserved purchasing models.
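Since X8i availability is limited to three Regions at launch, a migration plan could start by querying where the instance type is offered. This sketch uses the existing EC2 `describe_instance_type_offerings` API; the specific size queried is illustrative, and in practice you would loop the call over each candidate Region's endpoint.

```python
# Sketch: checking whether a Region offers X8i before planning a migration.
# Assumes boto3 and credentials; describe_instance_type_offerings is queried
# per Region endpoint, so repeat the call for each Region you care about.
offering_params = {
    "LocationType": "region",
    "Filters": [{"Name": "instance-type", "Values": ["x8i.2xlarge"]}],  # size is illustrative
}

# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-1")
# resp = ec2.describe_instance_type_offerings(**offering_params)
# available = [o["Location"] for o in resp["InstanceTypeOfferings"]]
```

An empty result list for a Region means that size is not yet offered there, which is worth confirming before committing to reserved capacity.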
The first AWS Weekly Roundup of 2026 highlights a strategic focus on community-driven AI innovation and significant performance upgrades to the EC2 instance lineup. By combining high-stakes competitions like the 10,000 AIdeas challenge with technical releases such as Graviton4-powered instances, AWS is positioning itself to lead in both "Agentic AI" development and high-performance cloud infrastructure.
**AI Innovation and Professional Mentorship**
* The "Become a Solutions Architect" (BeSA) program is launching a new six-week cohort on February 21, 2026, specifically focused on Agentic AI on AWS.
* The Global 10,000 AIdeas Competition offers a $250,000 prize pool and recognition at re:Invent 2026, with a submission deadline of January 21, 2026.
* Competition participants are required to utilize the "Kiro" development tool and must ensure their applications remain within AWS Free Tier limits.
**Next-Generation EC2 Instances and Hardware**
* New M8gn and M8gb instances utilize AWS Graviton4 processors, providing a 30% compute performance boost over the previous Graviton3 generation.
* The M8gn variant features 6th generation AWS Nitro Cards, delivering up to 600 Gbps of network bandwidth, the highest available for network-optimized instances.
* The M8gb variant is optimized for storage-heavy workloads, offering up to 150 Gbps of dedicated Amazon EBS bandwidth.
**Resilience Testing and Governance**
* AWS Direct Connect now integrates with the AWS Fault Injection Service (FIS), allowing engineers to simulate Border Gateway Protocol (BGP) failovers to validate redundant pathing.
* AWS Control Tower has expanded its governance capabilities by supporting 176 additional Security Hub controls within the Control Catalog.
* These controls address a broad spectrum of requirements across security, cost optimization, operations, and data durability.
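The Direct Connect integration above works through FIS experiment templates. The following is a hedged sketch of the template shape using the real boto3 `fis.create_experiment_template` call; the Direct Connect action ID, resource type, and ARNs are illustrative guesses rather than confirmed values, so check the FIS action reference before running anything like this.

```python
# Sketch: an FIS experiment template for a Direct Connect BGP failover test.
# Assumes boto3 and credentials. The action ID, resource type, and ARNs below
# are hypothetical placeholders - consult the FIS action reference for the
# real Direct Connect action names.
template_params = {
    "description": "Validate redundant pathing during a BGP failover",
    "roleArn": "arn:aws:iam::123456789012:role/fis-experiment-role",  # placeholder role
    "stopConditions": [{"source": "none"}],  # a real test should guard with a CloudWatch alarm
    "actions": {
        "failover-bgp": {
            "actionId": "aws:directconnect:pause-bgp-session",  # hypothetical action name
            "targets": {"Connections": "dx-connection"},
        }
    },
    "targets": {
        "dx-connection": {
            "resourceType": "aws:directconnect:connection",  # hypothetical resource type
            "resourceArns": ["arn:aws:directconnect:us-east-1:123456789012:dxcon/dxconexample"],
            "selectionMode": "ALL",
        }
    },
}

# import boto3
# fis = boto3.client("fis")
# fis.create_experiment_template(clientToken="bgp-failover-1", **template_params)
```

The point of the template structure is that the disruptive action and its targets are declared up front and reviewable, rather than improvised during a game day.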
**Hybrid Cloud and Windows Support**
* Amazon ECS Managed Instances now support Windows Server for on-premises and remote environment management.
* The service uses AWS Systems Manager (SSM) to register external instances, which can then be managed as part of an ECS cluster using Windows-based ECS-optimized AMIs.
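The SSM registration step above starts with a hybrid activation. This sketch uses the existing SSM `create_activation` API; the IAM role name and limits are placeholders, and the role is assumed to trust `ssm.amazonaws.com`.

```python
# Sketch: creating an SSM hybrid activation so on-premises Windows hosts can
# register and then join an ECS cluster. Assumes boto3 and credentials; the
# role name below is a placeholder for a role trusting ssm.amazonaws.com.
activation_params = {
    "Description": "On-prem Windows hosts for the ECS cluster",
    "IamRole": "ECSExternalInstanceRole",   # placeholder role name
    "RegistrationLimit": 10,                # cap how many hosts may register
    "DefaultInstanceName": "ecs-windows-onprem",
}

# import boto3
# ssm = boto3.client("ssm")
# activation = ssm.create_activation(**activation_params)
# The returned ActivationId and ActivationCode are supplied to the SSM Agent
# on each host; once registered, the host can run a Windows ECS-optimized AMI
# workload as part of the cluster.
```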
Developers and infrastructure architects should prioritize the January 21 deadline for AI project submissions while evaluating the M8gn instances for high-throughput networking requirements. Additionally, organizations running hybrid Windows workloads should explore the new ECS Managed Instances support to unify their container orchestration across on-premises and cloud environments.